NASA Astrophysics Data System (ADS)
Chen, Shuhong; Tan, Zhong
2007-11-01
In this paper, we consider nonlinear elliptic systems under the controllable growth condition. We use a method introduced by Duzaar and Grotowski for proving partial regularity of weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results obtained under the natural growth condition to the case of the controllable growth condition, and directly establish the optimal Hölder exponent for the derivative of a weak solution.
NASA Technical Reports Server (NTRS)
Treiman, Allan H.
1995-01-01
A thermochemical model of the activities of species in carbonate-rich melts would be useful in quantifying chemical equilibria between carbonatite magmas and vapors and in extrapolating liquidus equilibria to unexplored P-T-X conditions. A regular-solution model of Ca-rich carbonate melts is developed here, using the fact that they are ionic liquids and can be treated (to a first approximation) as interpenetrating regular solutions of cations and of anions. Thermochemical data on systems of alkali metal cations with carbonate and other anions are drawn from the literature; data on systems with alkaline earth (and other) cations and carbonate (and other) anions are derived here from liquidus phase equilibria. The model is validated in that all available data (at 1 kbar) are consistent with single values for the melting temperature and heat of fusion of calcite, and all liquidi are consistent with the liquids acting as regular solutions. At 1 kbar, the metastable congruent melting temperature of calcite (CaCO3) is inferred to be 1596 K, with (Delta)bar-H(sub fus)(calcite) = 31.5 +/- 1 kJ/mol. Regular-solution interaction parameters (W) for Ca(2+) and alkali metal cations are in the range -3 to -12 kJ/sq mol; W for Ca(2+)-Ba(2+) is approximately -11 kJ/sq mol; W for Ca(2+)-Mg(2+) is approximately -40 kJ/sq mol; and W for Ca(2+)-La(3+) is approximately +85 kJ/sq mol. Solutions of carbonate and most anions (including OH(-), F(-), and SO4(2-)) are nearly ideal, with W between 0 (ideal) and -2.5 kJ/sq mol. The interaction of carbonate and phosphate ions is strongly nonideal, which is consistent with the suggestion of carbonate-phosphate liquid immiscibility. Interaction of carbonate and sulfide ions is also nonideal and suggestive of carbonate-sulfide liquid immiscibility. Solution of H2O, for all but the most H2O-rich compositions, can be modeled as a disproportionation to hydronium (H3O(+)) and hydroxyl (OH(-)) ions, with W for Ca(2+)-H3O(+) approximately 33 kJ/sq mol.
The regular-solution model of carbonate melts can be applied to problems of carbonatite magma + vapor equilibria and of extrapolating liquidus equilibria to unstudied systems. Calculations on one carbonatite (the Husereau dike, Oka complex, Quebec, Canada) show that the anion solution of its magma contained an OH mole fraction of (approximately) 0.07, although the vapor in equilibrium with the magma had P(H2O) = 8.5 x P(CO2). F in carbonatite systems is calculated to be strongly partitioned into the magma (as F(-)) relative to coexisting vapor. In the Husereau carbonatite magma, the anion solution contained an F(-) mole fraction of (approximately) 6 x 10(exp -5).
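The regular-solution formalism underlying the abstract above reduces, for a binary pair, to a one-line activity-coefficient expression. The sketch below is illustrative only: the functional form ln(gamma) = W*x^2/(RT) is the standard symmetric regular-solution model, and the example composition and temperature are assumptions, not values from the paper (only the W value is quoted above).

```python
import math

R = 8.314  # gas constant, J/(mol K)

def activity_coefficient(x_other, W, T):
    """Symmetric regular-solution activity coefficient of one species:
    ln(gamma) = W * x_other**2 / (R * T), with W in J/mol."""
    return math.exp(W * x_other ** 2 / (R * T))

# Example: Ca(2+)-Ba(2+) cation pair with W ~ -11 kJ/mol (value quoted above);
# the mole fraction 0.3 and T = 1500 K are assumed for illustration.
gamma = activity_coefficient(x_other=0.3, W=-11_000.0, T=1500.0)
```

A negative W drives gamma below 1 (stabilizing mixing), consistent with the near-ideal to mildly negative W values reported for most cation pairs above.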
Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar
2005-11-04
The dynamics of ionization fronts that generate a conducting body are, in the simplest approximation, equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For a particular parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
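The abstract's heuristic rule targets the iteratively regularized Gauss-Newton method for nonlinear problems. As a minimal sketch of the same underlying idea, choosing a regularization parameter with no knowledge of the noise level, the code below applies the classical quasi-optimality criterion to plain linear Tikhonov regularization; this is a stand-in illustration, not the paper's rule.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized least squares: solve (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def quasi_optimality(A, b, alphas):
    """Heuristic choice needing no noise level: on a geometric grid of alphas,
    pick the one minimizing the solution change ||x(a_{k+1}) - x(a_k)||."""
    xs = [tikhonov(A, b, a) for a in alphas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return alphas[int(np.argmin(diffs))]
```

On ill-conditioned test problems this tends to land near the knee where the solution stabilizes, typically far better than the unregularized solve.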
Regularized Chapman-Enskog expansion for scalar conservation laws
NASA Technical Reports Server (NTRS)
Schochet, Steven; Tadmor, Eitan
1990-01-01
Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of the Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, and upper-Lipschitz continuity, and at the same time it sharpens the standard viscous shock layers. It is proved that the RCE approximation converges to the underlying inviscid entropy solution as its mean free path epsilon approaches 0, and the convergence rate is estimated.
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We obtain rank-k truncated randomized SVD (TRSVD) approximations to A by truncating the rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
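The TRSVD building block described above can be sketched in a few lines: draw a rank-(k+q) randomized SVD of A, then truncate to rank k. This is the standard range-finder construction (without power iterations) rather than the authors' full MTRSVD method; variable names are illustrative.

```python
import numpy as np

def randomized_svd(A, k, q=10, seed=0):
    """Rank-(k+q) randomized SVD of A, truncated to rank k (a TRSVD sketch).
    q is an oversampling parameter."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                               # small (k+q) x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]         # truncate to rank k
```

For matrices with decaying singular values, the truncated error is close to the best rank-k error sigma_{k+1}.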
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-series methods.
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
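Step two of the technique above, converting a truncated power series into a rational function, can be sketched with the standard [L/M] Padé construction: the denominator coefficients solve a small linear system, after which the numerator follows by convolution. This is the textbook algorithm, not code from the paper.

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator p (length L+1) and denominator q (length M+1, q[0] = 1)."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=1..M} q_j * c_{L+i-j} = -c_{L+i}, for i = 1..M
    A = np.array([[c[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    rhs = -c[L + 1: L + M + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator: p_i = sum_{j=0..min(i,M)} q_j * c_{i-j}
    p = np.array([sum(q[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return p, q
```

For exp(x), the [2/2] Padé approximant is (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12), which the routine reproduces from the first five Taylor coefficients.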
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm, Soft-Impute, iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques.
NASA Technical Reports Server (NTRS)
Cockrell, C. R.
1989-01-01
Numerical solutions of the differential equation which describes the electric field within an inhomogeneous layer of permittivity, upon which a perpendicularly polarized plane wave is incident, are considered. Richmond's method and the Runge-Kutta method are compared for linear and exponential permittivity profiles. These two approximate solutions are also compared with the exact solutions.
NASA Astrophysics Data System (ADS)
Bause, Markus
2008-02-01
In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher-order schemes have proved their ability to reliably approximate reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first-order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher-order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second-order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses.
For the flow field calculation a superiority of the BDM1 approach to the RT0 one is observed, which however is less significant for the accompanying solute transport.
The numerical calculation of laminar boundary-layer separation
NASA Technical Reports Server (NTRS)
Klineberg, J. M.; Steger, J. L.
1974-01-01
Iterative finite-difference techniques are developed for integrating the boundary-layer equations, without approximation, through a region of reversed flow. The numerical procedures are used to calculate incompressible laminar separated flows and to investigate the conditions for regular behavior at the point of separation. Regular flows are shown to be characterized by an integrable saddle-type singularity that makes it difficult to obtain numerical solutions which pass continuously into the separated region. The singularity is removed and continuous solutions ensured by specifying the wall shear distribution and computing the pressure gradient as part of the solution. Calculated results are presented for several separated flows and the accuracy of the method is verified. A computer program listing and complete solution case are included.
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. 
The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
A regularization method for extrapolation of solar potential magnetic fields
NASA Technical Reports Server (NTRS)
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
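The core mechanism described above, exponentially growing Fourier modes of the Cauchy continuation tamed by Gaussian smoothing of the boundary data, can be sketched in one dimension. The function below is an illustrative assumption (a scalar 1-D toy, not the paper's vector-magnetogram formulation): the growing factor exp(|k| z) models the unstable continuation, and the Gaussian factor is the regularizing smoothing of the initial data.

```python
import numpy as np

def regularized_continuation(b0, dx, z, sigma):
    """1-D sketch: continue boundary data b0 a height z through growing
    Fourier modes exp(|k| z), stabilized by Gaussian smoothing
    exp(-(k*sigma)**2 / 2) applied to the initial data."""
    k = 2 * np.pi * np.fft.fftfreq(b0.size, d=dx)     # angular wavenumbers
    bh = np.fft.fft(b0) * np.exp(-0.5 * (k * sigma) ** 2)  # regularize
    return np.real(np.fft.ifft(bh * np.exp(np.abs(k) * z)))
```

For a single harmonic cos(k0 x) the result is exactly the analytic mode, amplified by exp(k0 z) and damped by the Gaussian filter, so the smoothing parameter sigma directly bounds how far the unstable amplification is allowed to act.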
NASA Astrophysics Data System (ADS)
Xu, Qiuju; Belmonte, Andrew; deForest, Russ; Liu, Chun; Tan, Zhong
2017-04-01
In this paper, we study a fitness gradient system for two populations interacting via a symmetric game. The population dynamics are governed by a conservation law, with a spatial migration flux determined by the fitness. By applying the Galerkin method, we establish the existence, regularity and uniqueness of global solutions to an approximate system, which retains most of the interesting mathematical properties of the original fitness gradient system. Furthermore, we show that a Turing instability occurs for equilibrium states of the fitness gradient system, and its approximations.
Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.
Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S
2015-07-27
In this paper we propose a fast but accurate algorithm for the numerical modeling of light fields in a turbid-medium slab. The numerical solution of the radiative transfer equation (RTE) requires its discretization, based on eliminating the anisotropic part of the solution and replacing the scattering integral with a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part of the solution gives the algorithm fast convergence in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. The significant increase in solution accuracy provided by synthetic iterations allows the two-stream approximation to be used for determining the regular part. This approach permits the proposed method to be generalized to an arbitrary 3D geometry of the medium.
Extended Hansen solubility approach: naphthalene in individual solvents.
Martin, A; Wu, P L; Adjei, A; Beerbower, A; Prausnitz, J M
1981-11-01
A multiple regression method using Hansen partial solubility parameters, delta D, delta P, and delta H, was used to reproduce the solubilities of naphthalene in pure polar and nonpolar solvents and to predict its solubility in untested solvents. The method, called the extended Hansen approach, was compared with the extended Hildebrand solubility approach and the universal-functional-group-activity-coefficient (UNIFAC) method. The Hildebrand regular solution theory was also used to calculate naphthalene solubility. Naphthalene, an aromatic molecule having no side chains or functional groups, is "well-behaved," i.e., its solubility in active solvents known to interact with drug molecules is fairly regular. Because of its simplicity, naphthalene is a suitable solute with which to initiate the difficult study of solubility phenomena. The three methods tested (Hildebrand regular solution theory was introduced only for comparison of solubilities in regular solutions) yielded similar results, reproducing naphthalene solubilities within approximately 30% of literature values. In some cases, however, the error was considerably greater. The UNIFAC calculation is superior in that it requires only the solute's heat of fusion, the melting point, and a knowledge of the chemical structures of solute and solvent. The extended Hansen and extended Hildebrand methods need experimental solubility data on which to carry out regression analysis. The extended Hansen approach was the method of second choice because of its adaptability to solutes and solvents from various classes. Sample calculations are included to illustrate methods of predicting solubilities in untested solvents at various temperatures. The UNIFAC method was successful in this regard.
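The regression step of an extended-Hansen-style approach can be sketched as an ordinary least-squares fit of log-solubility against squared differences of the three partial solubility parameters between solvent and solute. Both the exact model form and the synthetic data in the test are illustrative assumptions, not the paper's fitted equations or measurements.

```python
import numpy as np

def fit_extended_hansen(D2, log_x):
    """Least-squares fit of log-solubility on an intercept plus the squared
    solvent-solute differences of the dispersion (D), polar (P) and
    hydrogen-bonding (H) partial solubility parameters.
    D2 has shape (n_solvents, 3); returns [intercept, b_D, b_P, b_H]."""
    X = np.column_stack([np.ones(len(D2)), D2])
    coef, *_ = np.linalg.lstsq(X, log_x, rcond=None)
    return coef
```

With the coefficients in hand, solubility in an untested solvent is predicted by evaluating the same linear form at that solvent's squared parameter differences.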
Reducing errors in the GRACE gravity solutions using regularization
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformations in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution.
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
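The two ingredients above, Tikhonov regularization with a degree-dependent regularization matrix and an L-curve traced as (residual norm, seminorm) pairs, can be shown in miniature. This toy sketch assumes a dense normal-equations solve and a simple diagonal weight matrix, unlike the Lanczos bidiagonalization and degree/order-structured matrix used for GRACE-sized problems.

```python
import numpy as np

def tikhonov_general(A, b, alpha, Lreg):
    """Solve min ||A x - b||^2 + alpha * ||Lreg x||^2, where Lreg constrains
    coefficients more strongly with increasing 'degree' (row index)."""
    return np.linalg.solve(A.T @ A + alpha * (Lreg.T @ Lreg), A.T @ b)

def lcurve_points(A, b, Lreg, alphas):
    """Residual norm vs. seminorm pairs tracing the L-curve over an alpha grid."""
    pts = []
    for a in alphas:
        x = tikhonov_general(A, b, a, Lreg)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(Lreg @ x)))
    return pts
```

As alpha grows, the residual norm can only increase and the seminorm can only decrease; the "corner" between the two regimes is the L-curve's suggested parameter.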
A regularized vortex-particle mesh method for large eddy simulation
NASA Astrophysics Data System (ADS)
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel, higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
Explicit B-spline regularization in diffeomorphic image registration
Tustison, Nicholas J.; Avants, Brian B.
2013-01-01
Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
NASA Astrophysics Data System (ADS)
Annaby, M. H.; Asharabi, R. M.
2018-01-01
In a remarkable note of Chadan [Il Nuovo Cimento 39, 697-703 (1965)], the author expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which lead to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to compute the regular wave solution as well as the Jost function and its zeros approximately. This work continues and improves the results of Chadan and other related studies remarkably. Several worked examples are given with illustrations and comparisons with existing methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonatsos, Dennis; Karampagia, S.; Casten, R. F.
2011-05-15
Using a contraction of the SU(3) algebra to the algebra of the rigid rotator in the large-boson-number limit of the interacting boson approximation (IBA) model, a line is found inside the symmetry triangle of the IBA along which the SU(3) symmetry is preserved. The line extends from the SU(3) vertex to near the critical line of the first-order shape/phase transition separating the spherical and prolate deformed phases, and it lies within the Alhassid-Whelan arc of regularity, the unique valley of regularity connecting the SU(3) and U(5) vertices in the midst of chaotic regions. In addition to providing an explanation for the existence of the arc of regularity, the present line represents an example of an analytically determined approximate symmetry in the interior of the symmetry triangle of the IBA. The method is applicable to algebraic models possessing subalgebras amenable to contraction. This condition is equivalent to algebras in which the equilibrium ground state and its rotational band become energetically isolated from intrinsic excitations, as typified by deformed solutions to the IBA for large numbers of valence nucleons.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. The algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and the regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
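The core computation described above, a single linear system whose kernel is the sum of the basic kernels, can be sketched as follows. The kernel widths, regularization value, and the toy nonflat target are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def gauss_kernel(X, Y, sigma):
    # Gaussian (RBF) kernel matrix between two 1-D sample sets
    return np.exp(-(X[:, None] - Y[None, :]) ** 2 / (2 * sigma ** 2))

def sum_space_krr(x, y, sigmas, lam):
    # Least-square regularized regression in a sum of Gaussian RKHSs:
    # one linear system with the summed kernel gives the coefficients
    K = sum(gauss_kernel(x, x, s) for s in sigmas)
    n = len(x)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda t: sum(gauss_kernel(t, x, s) for s in sigmas) @ alpha

# Nonflat target: low-frequency trend plus a high-frequency wiggle;
# the large-scale kernel tracks the trend, the small-scale one the wiggle
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.2 * np.sin(40 * np.pi * x)
f = sum_space_krr(x, y, sigmas=(0.3, 0.01), lam=1e-6)
```

A single wide kernel with the same regularization over-smooths the high-frequency term, which is exactly the situation the sum-space construction is designed to avoid.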
Nonlinear second order evolution inclusions with noncoercive viscosity term
NASA Astrophysics Data System (ADS)
Papageorgiou, Nikolaos S.; Rădulescu, Vicenţiu D.; Repovš, Dušan D.
2018-04-01
In this paper we deal with a second order nonlinear evolution inclusion, with a nonmonotone, noncoercive viscosity term. Using a parabolic regularization (approximation) of the problem and a priori bounds that permit passing to the limit, we prove that the problem has a solution.
Fighting Violence without Violence.
ERIC Educational Resources Information Center
Rowicki, Mark A.; Martin, William C.
Violence is becoming the number one problem in United States schools. Approximately 20 percent of high school students regularly carry guns and other weapons. Several nonviolent measures are appropriate to reduce violence in schools; but only the implementation of multiple ideas and measures, not "quick fix" solutions, will curb…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lusanna, Luca
2004-08-19
The four (electro-magnetic, weak, strong and gravitational) interactions are described by singular Lagrangians and by the Dirac-Bergmann theory of Hamiltonian constraints. As a consequence, a subset of the original configuration variables are gauge variables, not determined by the equations of motion. Only at the Hamiltonian level is it possible to separate the gauge variables from the deterministic physical degrees of freedom, the Dirac observables, and to formulate a well-posed Cauchy problem for them, both in special and general relativity. Then the requirement of causality dictates the choice of retarded solutions at the classical level. However, both the problems of the classical theory of the electron, leading to the choice of (1/2) (retarded + advanced) solutions, and the regularization of quantum field theory, leading to the Feynman propagator, introduce anticipatory aspects. The determination of the relativistic Darwin potential as a semi-classical approximation to the Lienard-Wiechert solution for particles with Grassmann-valued electric charges, regularizing the Coulomb self-energies, shows that these anticipatory effects live beyond the semi-classical approximation (tree level) under the form of radiative corrections, at least for the electro-magnetic interaction. Talk and 'best contribution' at The Sixth International Conference on Computing Anticipatory Systems CASYS'03, Liege August 11-16, 2003.
Hössjer, Ola; Tyvand, Peder A; Miloh, Touvia
2016-02-01
The classical Kimura solution of the diffusion equation is investigated for a haploid random mating (Wright-Fisher) model, with one-way mutations and initial-value specified by the founder population. The validity of the transient diffusion solution is checked by exact Markov chain computations, using a Jordan decomposition of the transition matrix. The conclusion is that the one-way diffusion model mostly works well, although the rate of convergence depends on the initial allele frequency and the mutation rate. The diffusion approximation is poor for mutation rates so low that the non-fixation boundary is regular. When this happens we perturb the diffusion solution around the non-fixation boundary and obtain a more accurate approximation that takes quasi-fixation of the mutant allele into account. The main application is to quantify how fast a specific genetic variant of the infinite alleles model is lost. We also discuss extensions of the quasi-fixation approach to other models with small mutation rates.
Srivastava, Madhur; Freed, Jack H
2017-11-16
Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude in improvement in SNR obviates the need for regularization, which achieves a compromise between canceling effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulse dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r) between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.
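A minimal sketch of the idea of applying SVD directly to denoised data, with truncation of small singular values standing in for the optimality criteria developed in the paper; the random kernel, the single-peak signal, and the cutoff are illustrative assumptions, not the dipolar-signal kernel of the experiments.

```python
import numpy as np

def tsvd_solve(K, s, rcond=1e-3):
    # Solve K x ≈ s by SVD, discarding singular values below a cutoff;
    # the cutoff stands in for the paper's optimum-solution criterion
    U, sv, Vt = np.linalg.svd(K, full_matrices=False)
    keep = sv > rcond * sv[0]
    return Vt[keep].T @ ((U[:, keep].T @ s) / sv[keep])

rng = np.random.default_rng(0)
K = rng.standard_normal((60, 30))   # stand-in for the physical kernel
x_true = np.zeros(30)
x_true[5] = 1.0                     # a single sharp "distance" peak
s = K @ x_true                      # noise-free model case
x_rec = tsvd_solve(K, s)
```

In the noise-free model case the recovery is exact, mirroring the abstract's observation; with noisy data, the truncation level takes over the compromise that regularization normally handles.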
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought parameters of the medium up to n × 10^3. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
NASA Astrophysics Data System (ADS)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
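For the simplest member of the family, a 1-D slab, the early/late-time switchover can be sketched as below; the leading-term forms and the switchover value td0 = 0.2 are illustrative assumptions, not the optimized values from the paper.

```python
from math import exp, pi, sqrt

def uptake_exact(td, nterms=200):
    # Exponential-series solution for the fractional uptake of a slab
    # (half-thickness 1, unit surface value), truncated at nterms terms
    return 1.0 - sum(8.0 / ((2 * n + 1) ** 2 * pi ** 2)
                     * exp(-(2 * n + 1) ** 2 * pi ** 2 * td / 4.0)
                     for n in range(nterms))

def uptake_combined(td, td0=0.2):
    # Leading-term combination: square-root (error-function-series)
    # branch for early times, a single exponential for late times
    if td < td0:
        return 2.0 * sqrt(td / pi)
    return 1.0 - (8.0 / pi ** 2) * exp(-pi ** 2 * td / 4.0)
```

Each branch keeps only one term, yet the combination stays uniformly accurate because the two series converge fast on complementary sides of the switchover time.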
A spatially adaptive total variation regularization method for electrical resistance tomography
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2015-12-01
The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in flat regions, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial features of the image and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. An effective spatial-feature indicator, the difference curvature, is used to identify whether a region is flat or an edge region. According to the spatial features, the SATV regularization method automatically adjusts both the regularization term and the regularization factor. In edge regions, the regularization term approximates the TV functional to preserve the edges; in flat regions, it approximates the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial features constrains the regularization strength of the SATV method in different regions. In addition, a numerical scheme is adopted for the implementation of the second derivatives of the difference curvature to improve numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV regularization method (mean relative error 0.259, mean correlation coefficient 0.738) can endure a relatively high level of noise and improve the resolution of reconstructed images.
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-12-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
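The Tikhonov branch of such a scheme can be sketched on a discretized Abel equation F(y) = 2 ∫_y^R f(r) r dr / √(r² − y²). The midpoint quadrature (which skips the singular diagonal cell), the first-difference smoothing operator, the model profile, and the regularization value are all illustrative assumptions.

```python
import numpy as np

# Discretize the Abel operator on a radial grid (midpoint quadrature;
# the singular cell r = y is omitted in this crude sketch)
R, n = 1.0, 100
r = np.linspace(0.0, R, n + 1)[1:]      # avoid r = 0
dr = r[1] - r[0]
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        A[i, j] = 2.0 * r[j] * dr / np.sqrt(r[j] ** 2 - r[i] ** 2)

f_true = np.exp(-4.0 * r ** 2)          # model radial profile
F = A @ f_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

# Tikhonov regularization with a first-difference smoothing operator
lam = 1e-3
L = np.diff(np.eye(n), axis=0)
f_rec = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ F)
```

The regularization parameter balances data fit against smoothness; in the paper it is chosen consistently with the input-data errors rather than fixed by hand as here.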
Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions
NASA Astrophysics Data System (ADS)
Dunca, Argus A.
2017-12-01
This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations (NSE). It is assumed that this weak solution u of the NSE belongs to the space L^4(0,T; H^1). It is shown that under this regularity condition the error u − w^α is O(α) in the norms L^2(0,T; H^1) and L^∞(0,T; L^2), thus improving related known results. It is also shown that the averaged error ū − w̄^α is of higher order, O(α^1.5), in the same norms; therefore, the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.
NASA Astrophysics Data System (ADS)
Sakata, Ayaka; Xu, Yingying
2018-03-01
We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage, indicating a region where the landscape of the optimization problem is smooth. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of the ℓ1-based regularization.
Local error estimates for discontinuous solutions of nonlinear hyperbolic equations
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1989-01-01
Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x,t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small-viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport equation with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^(1,∞) energy estimate for the discontinuous backward transport equation; this, in turn, leads to an ε-uniform estimate on moments of the error u_ε − u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternative least-square regression with Tikhonov regularization using a roughening matrix computing the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
Renormalization Group Theory of Bolgiano Scaling in Boussinesq Turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
1994-01-01
Bolgiano scaling in Boussinesq turbulence is analyzed using the Yakhot-Orszag renormalization group. For this purpose, an isotropic model is introduced. Scaling exponents are calculated by forcing the temperature equation so that the temperature variance flux is constant in the inertial range. Universal amplitudes associated with the scaling laws are computed by expanding about a logarithmic theory. Connections between this formalism and the direct interaction approximation are discussed. It is suggested that the Yakhot-Orszag theory yields a lowest order approximate solution of a regularized direct interaction approximation which can be corrected by a simple iterative procedure.
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Leung, Martin S. K.
1995-01-01
The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches, combined with a variety of model-order and model-complexity reductions, were investigated. Singular perturbation methods were first attempted and found to be unsatisfactory. A second approach based on regular perturbation analysis was subsequently investigated. It also fails, because the aerodynamic effects (ignored in the zero-order solution) are too large to be treated as perturbations. The study therefore demonstrates that perturbation methods alone (both regular and singular) are inadequate for developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation with the analytical method of regular perturbations, and introduces the concept of choosing intelligent interpolating functions. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.
Nonconvex Sparse Logistic Regression With Weakly Convex Regularization
NASA Astrophysics Data System (ADS)
Shen, Xinyue; Gu, Yuantao
2018-06-01
In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
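A sketch of the firm-shrinkage operator and the proximal-gradient loop on a synthetic problem; the penalty parameters, step size, iteration count, and data model are assumptions for illustration rather than the paper's experimental setup.

```python
import numpy as np

def firm_shrink(x, lam, mu):
    # Firm thresholding, the prox of a weakly convex sparsity penalty:
    # zero below lam, identity above mu, linear rescaling in between
    out = np.where(np.abs(x) <= lam, 0.0,
                   np.sign(x) * (np.abs(x) - lam) * mu / (mu - lam))
    return np.where(np.abs(x) > mu, x, out)

def sparse_logistic(A, y, lam=0.05, mu=0.5, step=0.5, iters=300):
    # Proximal gradient descent: gradient step on the mean logistic
    # loss, then firm shrinkage (iterative firm-shrinkage sketch)
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w = firm_shrink(w - step * A.T @ (p - y) / len(y), lam, mu)
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[:3] = (2.0, -2.0, 1.5)                 # 3-sparse ground truth
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-A @ w_true))).astype(float)
w_hat = sparse_logistic(A, y)
```

The firm operator interpolates between soft and hard thresholding, which is what lets it avoid the systematic amplitude bias of plain soft thresholding on the large coefficients.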
Application of thermodynamics to silicate crystalline solutions
NASA Technical Reports Server (NTRS)
Saxena, S. K.
1972-01-01
A review of thermodynamic relations is presented, describing Guggenheim's regular solution models, the simple mixture, the zeroth approximation, and the quasi-chemical model. The possibilities of retrieving useful thermodynamic quantities from phase equilibrium studies are discussed. Such quantities include the activity-composition relations and the free energy of mixing in crystalline solutions. Theory and results of the study of partitioning of elements in coexisting minerals are briefly reviewed. A thermodynamic study of the intercrystalline and intracrystalline ion exchange relations gives useful information on the thermodynamic behavior of the crystalline solutions involved. Such information is necessary for the solution of most petrogenic problems and for geothermometry. Thermodynamic quantities for tungstates (CaWO4-SrWO4) are calculated.
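The zeroth (simple-mixture) approximation mentioned above reduces, for a binary solution, to a one-parameter expression for the activity coefficients. The interaction energy W and temperature below are placeholder values, not fitted quantities from the review.

```python
from math import exp

R_GAS = 8.314  # gas constant, J/(mol K)

def regular_solution_activities(x1, W, T):
    # Binary regular solution (zeroth approximation):
    # RT ln(gamma_i) = W * (1 - x_i)^2, activity a_i = gamma_i * x_i
    x2 = 1.0 - x1
    g1 = exp(W * x2 ** 2 / (R_GAS * T))
    g2 = exp(W * x1 ** 2 / (R_GAS * T))
    return g1 * x1, g2 * x2

# Negative W (favourable unlike-pair interactions) depresses the
# activities below the ideal (Raoult's law) values
a1, a2 = regular_solution_activities(0.3, W=-10_000.0, T=1500.0)
```

Setting W = 0 recovers the ideal solution, where each activity equals its mole fraction.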
Sinc-Galerkin estimation of diffusivity in parabolic problems
NASA Technical Reports Server (NTRS)
Smith, Ralph C.; Bowers, Kenneth L.
1991-01-01
A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION
Holst, Michael; McCammon, James Andrew; Yu, Zeyun; Zhou, Youngcheng; Zhu, Yunrong
2011-01-01
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. 
The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541
Two-level schemes for the advection equation
NASA Astrophysics Data System (ADS)
Vabishchevich, Petr N.
2018-06-01
The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to preserve the main properties of the solution, conservatism and monotonicity. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of the advection operators in conservative (divergent) and non-conservative (characteristic) forms; this advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is also constructed. The accuracy of the investigated explicit and implicit two-level schemes for the approximate solution of the advection equation is illustrated by numerical results for a model two-dimensional problem.
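For reference, the explicit Lax-Wendroff scheme discussed above can be sketched for the 1-D advection equation u_t + a u_x = 0 on a periodic grid (finite differences here, rather than the paper's finite element setting); the grid size and Courant number are illustrative.

```python
import numpy as np

def lax_wendroff(u, a, dt, dx, steps):
    # Explicit Lax-Wendroff scheme, conditionally stable under the
    # CFL restriction |a| dt / dx <= 1, on a periodic grid
    c = a * dt / dx
    assert abs(c) <= 1.0, "CFL condition violated"
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        u = u - 0.5 * c * (up - um) + 0.5 * c ** 2 * (up - 2.0 * u + um)
    return u

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.sin(2 * np.pi * x)
dx = 1.0 / n
dt = 0.5 * dx                  # Courant number c = 0.5
u1 = lax_wendroff(u0, a=1.0, dt=dt, dx=dx, steps=2 * n)  # one period
```

One full advection period returns a smooth profile nearly unchanged, consistent with the scheme's second-order accuracy; taking the Courant number above 1 makes the explicit scheme blow up, illustrating the conditional stability.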
Numerical solution of inverse scattering for near-field optics.
Bao, Gang; Li, Peijun
2007-06-01
A novel regularized recursive linearization method is developed for a two-dimensional inverse medium scattering problem that arises in near-field optics, which reconstructs the scatterer of an inhomogeneous medium located on a substrate from data accessible through photon scanning tunneling microscopy experiments. Based on multiple frequency scattering data, the method starts from the Born approximation corresponding to weak scattering at a low frequency, and each update is obtained by continuation on the wavenumber from solutions of one forward problem and one adjoint problem of the Helmholtz equation.
Nonlinear oscillator with power-form elastic-term: Fourier series expansion of the exact solution
NASA Astrophysics Data System (ADS)
Beléndez, Augusto; Francés, Jorge; Beléndez, Tarsicio; Bleda, Sergio; Pascual, Carolina; Arribas, Enrique
2015-05-01
A family of conservative, truly nonlinear, oscillators with integer or non-integer order nonlinearity is considered. These oscillators have only one odd power-form elastic-term and exact expressions for their period and solution were found in terms of Gamma functions and a cosine-Ateb function, respectively. Only for a few values of the order of nonlinearity, is it possible to obtain the periodic solution in terms of more common functions. However, for this family of conservative truly nonlinear oscillators we show in this paper that it is possible to obtain the Fourier series expansion of the exact solution, even though this exact solution is unknown. The coefficients of the Fourier series expansion of the exact solution are obtained as an integral expression in which a regularized incomplete Beta function appears. These coefficients are a function of the order of nonlinearity only and are computed numerically. One application of this technique is to compare the amplitudes for the different harmonics of the solution obtained using approximate methods with the exact ones computed numerically as shown in this paper. As an example, the approximate amplitudes obtained via a modified Ritz method are compared with the exact ones computed numerically.
Damageable contact between an elastic body and a rigid foundation
NASA Astrophysics Data System (ADS)
Campo, M.; Fernández, J. R.; Silva, A.
2009-02-01
In this work, the contact problem between an elastic body and a rigid obstacle is studied, including the development of material damage which results from internal compression or tension. The variational problem is formulated as a first-kind variational inequality for the displacements coupled with a parabolic partial differential equation for the damage field. The existence of a unique local weak solution is stated. Then, a fully discrete scheme is introduced using the finite element method to approximate the spatial variable and an Euler scheme to discretize the time derivatives. Error estimates are derived on the approximate solutions, from which the linear convergence of the algorithm is deduced under suitable regularity conditions. Finally, three two-dimensional numerical simulations are performed to demonstrate the accuracy and the behaviour of the scheme.
Regularized solution of a nonlinear problem in electromagnetic sounding
NASA Astrophysics Data System (ADS)
Deidda, Gian Piero; Fenu, Caterina; Rodriguez, Giuseppe
2014-12-01
Non-destructive investigation of soil properties is crucial when trying to identify inhomogeneities in the ground or the presence of conductive substances. This kind of survey can be addressed with the aid of electromagnetic induction measurements taken with a ground conductivity meter. In this paper, starting from electromagnetic data collected by this device, we reconstruct the electrical conductivity of the soil with respect to depth, with the aid of a regularized damped Gauss-Newton method. We propose an inversion method based on the low-rank approximation of the Jacobian of the function to be inverted, for which we develop exact analytical formulae. The algorithm chooses a relaxation parameter in order to ensure the positivity of the solution and implements various methods for the automatic estimation of the regularization parameter. This leads to a fast and reliable algorithm, which is tested on numerical experiments both on synthetic data sets and on field data. The results show that the algorithm produces reasonable solutions in the case of synthetic data sets, even in the presence of a noise level consistent with real applications, and yields results that are compatible with those obtained by electrical resistivity tomography in the case of field data. Research supported in part by Regione Sardegna grant CRP2_686.
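The inversion loop described above, a damped Gauss-Newton iteration with a relaxation parameter chosen to keep the solution positive, can be sketched on a toy problem. The two-parameter exponential forward model below is a hypothetical stand-in for the ground conductivity meter response, its analytic Jacobian stands in for the exact formulae developed in the paper, and the damping and iteration counts are arbitrary.

```python
import math

def damped_gauss_newton(F, J, d, m0, lam=1e-9, iters=100):
    """Damped Gauss-Newton for min ||F(m) - d||^2 over a 2-parameter model.
    Each step solves (J^T J + lam*I) delta = -J^T r; the step length is then
    halved until the updated model stays positive, mimicking the relaxation
    safeguard described in the abstract."""
    m = list(m0)
    for _ in range(iters):
        r = [F(m, i) - d[i] for i in range(len(d))]
        Jm = [[J(m, i, j) for j in range(2)] for i in range(len(d))]
        A = [[sum(Jm[i][a] * Jm[i][b] for i in range(len(d)))
              + (lam if a == b else 0.0) for b in range(2)] for a in range(2)]
        g = [-sum(Jm[i][a] * r[i] for i in range(len(d))) for a in range(2)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        delta = [(g[0] * A[1][1] - g[1] * A[0][1]) / det,
                 (A[0][0] * g[1] - A[1][0] * g[0]) / det]   # Cramer's rule
        step = 1.0
        while any(m[j] + step * delta[j] <= 0.0 for j in range(2)):
            step *= 0.5                  # relaxation preserves positivity
        m = [m[j] + step * delta[j] for j in range(2)]
    return m

# Hypothetical forward model F_i(m) = m[0]*exp(-m[1]*t_i) and its Jacobian.
ts = [0.2 * i for i in range(1, 11)]
F = lambda m, i: m[0] * math.exp(-m[1] * ts[i])
def Jac(m, i, j):
    e = math.exp(-m[1] * ts[i])
    return e if j == 0 else -m[0] * ts[i] * e

d = [2.0 * math.exp(-1.5 * t) for t in ts]    # noise-free synthetic data
m_est = damped_gauss_newton(F, Jac, d, [1.0, 1.0])
```

With clean data the iteration recovers the generating parameters; since the damping enters only through the step computation, the fixed point is the undamped least-squares solution.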
Particle dynamics around time conformal regular black holes via Noether symmetries
NASA Astrophysics Data System (ADS)
Jawad, Abdul; Umair Shahzad, M.
The time conformal regular black hole (RBH) solutions admitting the time conformal factor e^(εg(t)), where g(t) is an arbitrary function of time and ε is the perturbation parameter, are considered. The approximate Noether symmetry technique is used to find the function g(t), which leads to g(t) = t/α. The dynamics of particles around RBHs are discussed through symmetry generators, which provide the approximate energy and angular momentum of the particles. In addition, we analyze the motion of neutral and charged particles around two well-known RBHs: a charged RBH obtained using the Fermi-Dirac distribution and the Kehagias-Sfetsos asymptotically flat RBH. We obtain the innermost stable circular orbit and the corresponding approximate energy and angular momentum. The behavior of the effective potential, the effective force and the escape velocity of the particles in the presence/absence of a magnetic field for different values of angular momentum near the horizons is also analyzed. The stable and unstable regions of particles near the horizons due to the effects of angular momentum and magnetic field are also explained.
Automatic Aircraft Collision Avoidance System and Method
NASA Technical Reports Server (NTRS)
Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)
2014-01-01
The invention is a system and method of compressing a digital terrain map (DTM) to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine whether they fall within the selected tolerances. If the approximation for a specific regular area is within the specified tolerances, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision avoidance system for an aircraft.
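The recursive subdivide-and-fit scheme can be illustrated with a minimal sketch. Assumptions not in the source: the "flat geometric surface" is reduced to a constant-height patch, the map is a square grid whose side is a power of two, and the tolerance check is the maximum absolute deviation.

```python
def compress(grid, x0, y0, size, tol):
    """Recursively approximate a size-by-size square of a digital terrain
    map by flat (constant-height) patches within the given tolerance.
    grid is indexed grid[y][x]; size must be a power of two.
    Returns a list of (x0, y0, size, height) patches."""
    cells = [grid[y][x] for y in range(y0, y0 + size)
                        for x in range(x0, x0 + size)]
    h = sum(cells) / len(cells)
    # Accept the flat approximation if every cell is within tolerance.
    if size == 1 or max(abs(c - h) for c in cells) <= tol:
        return [(x0, y0, size, h)]
    # Otherwise divide the area and recurse on each quadrant.
    half = size // 2
    patches = []
    for dy in (0, half):
        for dx in (0, half):
            patches += compress(grid, x0 + dx, y0 + dy, half, tol)
    return patches

flat = [[5.0] * 4 for _ in range(4)]        # perfectly flat terrain
bumpy = [[0.0] * 4 for _ in range(4)]
bumpy[0][0] = 10.0                          # one spike forces subdivision
flat_patches = compress(flat, 0, 0, 4, 1.0)
bumpy_patches = compress(bumpy, 0, 0, 4, 1.0)
```

Flat terrain compresses to a single patch, while the spike triggers recursive subdivision around it: the 2x2 quadrant containing it splits into four 1x1 patches, giving 4 + 3 = 7 patches in total.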
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason.
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1.
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results.
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
Semismooth Newton method for gradient constrained minimization problem
NASA Astrophysics Data System (ADS)
Anyyeva, Serbiniyaz; Kunisch, Karl
2012-08-01
In this paper we treat a gradient constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. The problem is regularized in order to approximate it by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method is developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
Epidemic spreading in weighted networks: an edge-based mean-field solution.
Yang, Zimo; Zhou, Tao
2012-05-01
Weight distribution greatly impacts the epidemic spreading taking place on top of networks. This paper presents a study of a susceptible-infected-susceptible model on regular random networks with different kinds of weight distributions. Simulation results show that a more homogeneous weight distribution leads to a higher epidemic prevalence, which, unfortunately, cannot be captured by the traditional mean-field approximation. This paper gives an edge-based mean-field solution for general weight distributions, which can quantitatively reproduce the simulation results. This method could be applied to characterize the nonequilibrium steady states of dynamical processes on weighted networks.
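For context, the traditional (homogeneous, weight-blind) mean-field baseline that the paper improves upon can be written in a few lines; the rates below are arbitrary. On an unweighted regular network of degree k, the SIS steady state satisfies d(rho)/dt = -mu*rho + beta*k*rho*(1 - rho) = 0, which this sketch solves by fixed-point iteration. The paper's point is precisely that such a weight-blind approximation cannot reproduce the simulated prevalence on weighted networks, motivating the edge-based correction.

```python
def sis_prevalence(beta, mu, k, iters=10000):
    """Steady-state prevalence of the SIS model under the homogeneous
    mean-field approximation on a regular network of degree k.
    The fixed-point iteration converges for 1 < beta*k/mu < 3."""
    r0 = beta * k / mu            # basic reproduction number
    if r0 <= 1.0:
        return 0.0                # below threshold the epidemic dies out
    rho = 0.5
    for _ in range(iters):
        rho = r0 * rho * (1.0 - rho)   # balance infection against recovery
    return rho
```

Above threshold the fixed point agrees with the closed form rho* = 1 - mu/(beta*k).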
On the singular perturbations for fractional differential equation.
Atangana, Abdon
2014-01-01
The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of the fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We employ three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation: the regular perturbation method, a new development of the variational iteration method, and the homotopy decomposition method.
A hybrid perturbation Galerkin technique with applications to slender body theory
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1989-01-01
A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.
A hybrid perturbation Galerkin technique with applications to slender body theory
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1987-01-01
A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.
On the persistence of spatiotemporal oscillations generated by invasion
NASA Astrophysics Data System (ADS)
Kay, A. L.; Sherratt, J. A.
1999-10-01
Many systems in biology and chemistry are oscillatory, with a stable, spatially homogeneous steady state which consists of periodic temporal oscillations in the interacting species, and such systems have been extensively studied on infinite or semi-infinite spatial domains. We consider the effect of a finite domain, with zero-flux boundary conditions, on the behaviour of solutions to oscillatory reaction-diffusion equations after invasion. We begin by considering numerical simulations of various oscillatory predator-prey systems. We conclude that when regular spatiotemporal oscillations are left in the wake of invasion, these die out, beginning with a decrease in the spatial frequency of the oscillations at one boundary, which then propagates across the domain. The long-time solution in this case is purely temporal oscillations, corresponding to the limit cycle of the kinetics. By contrast, when irregular spatiotemporal oscillations are left in the wake of invasion, they persist, even in very long time simulations. To study this phenomenon in more detail, we consider the λ-ω class of reaction-diffusion systems. Numerical simulations show that these systems also exhibit die-out of regular spatiotemporal oscillations and persistence of irregular spatiotemporal oscillations. Exploiting the mathematical simplicity of the λ-ω form, we derive analytically an approximation to the transition fronts in r and θ_x which occur during the die-out of the regular oscillations. We then use this approximation to describe how the die-out occurs, and to derive a measure of its rate, as a function of parameter values. We discuss applications of our results to ecology, calcium signalling and chemistry.
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple 'low' minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space.
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
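A minimal sketch of GCV-based parameter selection may help fix ideas. It is not the authors' method: the operator is taken diagonal (as after an SVD) so that the influence matrix and its trace are available in closed form, whereas the paper's contribution is precisely to approximate these quantities cheaply with Lanczos iterations and Gauss quadrature for large, non-diagonal problems. The singular values, data, and search grid below are all invented.

```python
def gcv_choose_lambda(s, d, lambdas):
    """For a diagonal operator with singular values s and data d, pick the
    Tikhonov parameter minimizing
        GCV(lam) = n * ||(I - A_lam) d||^2 / trace(I - A_lam)^2,
    where A_lam = diag(s_i^2 / (s_i^2 + lam)) is the influence matrix."""
    n = len(s)
    best = None
    for lam in lambdas:
        a = [si * si / (si * si + lam) for si in s]
        num = n * sum(((1.0 - ai) * di) ** 2 for ai, di in zip(a, d))
        den = sum(1.0 - ai for ai in a) ** 2
        if best is None or num / den < best[0]:
            best = (num / den, lam)
    return best[1]

# Invented test problem: decaying singular values, constant true solution,
# small deterministic perturbation standing in for noise.
n = 20
s = [1.0 / (i * i) for i in range(1, n + 1)]
d = [si + 0.01 * (-1) ** i for i, si in enumerate(s, start=1)]
lambdas = [10.0 ** (-8 + 0.25 * j) for j in range(33)]   # 1e-8 .. 1
lam = gcv_choose_lambda(s, d, lambdas)
```

In the paper's setting the grid search is replaced by minimization of a quadrature-based estimate of the same GCV function.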
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
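The proximal forward-backward half of the BOS algorithm can be sketched on a toy problem. To keep the sketch self-contained, the TV penalty is replaced by an l1 penalty, whose proximal map is a closed-form soft threshold (for TV the proximal step itself requires an inner solver); the full BOS method wraps such iterations in Bregman updates with a discrepancy stopping rule. The matrix, data, and step size below are invented.

```python
import math

def forward_backward_l1(A, d, lam, tau, iters=2000):
    """Proximal forward-backward splitting (ISTA) for
        min_m ||A m - d||^2 + lam * ||m||_1.
    A gradient step on the data-fidelity term is followed by the proximal
    map of the l1 term, i.e. componentwise soft thresholding."""
    rows, n = len(A), len(A[0])
    m = [0.0] * n
    for _ in range(iters):
        Am = [sum(A[i][j] * m[j] for j in range(n)) for i in range(rows)]
        grad = [2.0 * sum(A[i][j] * (Am[i] - d[i]) for i in range(rows))
                for j in range(n)]
        z = [m[j] - tau * grad[j] for j in range(n)]
        m = [math.copysign(max(abs(v) - tau * lam, 0.0), v) for v in z]
    return m

# With A = I the minimizer is the soft threshold of d at lam/2.
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
m = forward_backward_l1(A, [3.0, 0.5, -2.0], lam=1.0, tau=0.25)
```

The step size must satisfy tau < 1/L with L the Lipschitz constant of the gradient (here L = 2), which the choice tau = 0.25 respects.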
Approximate isotropic cloak for the Maxwell equations
NASA Astrophysics Data System (ADS)
Ghosh, Tuhin; Tarikere, Ashwin
2018-05-01
We construct a regular isotropic approximate cloak for the Maxwell system of equations. The method of transformation optics has enabled the design of electromagnetic parameters that cloak a region from external observation. However, these constructions are singular and anisotropic, making practical implementation difficult. Thus, regular approximations to these cloaks have been constructed that cloak a given region to any desired degree of accuracy. In this paper, we show how to construct isotropic approximations to these regularized cloaks using homogenization techniques so that one obtains cloaking of arbitrary accuracy with regular and isotropic parameters.
Learning Representation and Control in Markov Decision Processes
2013-10-21
π. Figure 3 shows that Drazin bases outperform the other bases on a two-room MDP. However, a drawback of Drazin bases is that they are...stochastic matrices. One drawback of diffusion wavelets is that it can generate a large number of overcomplete bases, which needs to be effectively...proposed in [52], overcoming some of the drawbacks of LARS-TD. An approximate linear programming for finding l1 regularized solutions of the Bellman
NASA Astrophysics Data System (ADS)
Heumann, Holger; Rapetti, Francesca
2017-04-01
Existing finite element implementations for the computation of free-boundary axisymmetric plasma equilibria approximate the unknown poloidal flux function by standard lowest order continuous finite elements with discontinuous gradients. As a consequence, the location of critical points of the poloidal flux, that are of paramount importance in tokamak engineering, is constrained to nodes of the mesh leading to undesired jumps in transient problems. Moreover, recent numerical results for the self-consistent coupling of equilibrium with resistive diffusion and transport suggest the necessity of higher regularity when approximating the flux map. In this work we propose a mortar element method that employs two overlapping meshes. One mesh with Cartesian quadrilaterals covers the vacuum chamber domain accessible by the plasma and one mesh with triangles discretizes the region outside. The two meshes overlap in a narrow region. This approach provides the flexibility to achieve, easily and at low cost, higher-order regularity for the approximation of the flux function in the domain covered by the plasma, while preserving accurate meshing of the geometric details outside this region. The continuity of the numerical solution in the region of overlap is weakly enforced by a mortar-like mapping.
Bayesian Inversion of 2D Models from Airborne Transient EM Data
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Key, K.; Ray, A.
2016-12-01
The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. 
The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.
On the Singular Perturbations for Fractional Differential Equation
Atangana, Abdon
2014-01-01
The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of the fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We employ three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation: the regular perturbation method, a new development of the variational iteration method, and the homotopy decomposition method. PMID:24683357
Compressed modes for variational problems in mathematics and physics
Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2013-01-01
This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861
Regularization and Approximation of a Class of Evolution Problems in Applied Mathematics
1991-01-01
AD-A242 223, Final Report, November 1991: "Regularization and Approximation of a Class of Evolution Problems in Applied... The University of Texas at Austin, Austin, TX 78712... micro-structured parabolic system. A mathematical analysis of the regularized equations has been developed to support our approach. Supporting
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach
NASA Astrophysics Data System (ADS)
Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan
2017-05-01
The dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type, governed by a set of differential-algebraic equations (DAEs), is studied. The classical approach of the multiple scales method (MSM) in the time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e., with one and two degrees of freedom. The approximate analytical solutions have been verified by numerical simulations.
NASA Astrophysics Data System (ADS)
Bai, Bing
2012-03-01
There has been a great deal of recent work on total variation (TV) regularized tomographic image reconstruction, much of it using gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that convergence is insensitive to the values of the regularization and reconstruction parameters.
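The full interior-point PET algorithm is beyond a short snippet, but the baseline the abstract contrasts itself with, gradient descent on a differentiable (smoothed) approximation of the TV functional, is easy to illustrate on a 1D denoising toy. The signal, noise level, and smoothing parameter eps below are illustrative assumptions.

```python
import numpy as np

# Sketch of smoothed-TV regularization: minimize
#   0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps^2)
# by plain gradient descent on a piecewise-constant test signal.
rng = np.random.default_rng(0)
true = np.repeat([0.0, 1.0, 0.3], 50)             # piecewise-constant signal
y = true + 0.1 * rng.standard_normal(true.size)   # noisy observation
lam, eps, step = 0.5, 0.05, 0.01

x = y.copy()
for _ in range(500):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps * eps)            # derivative of smoothed |Dx|
    tv_grad = np.zeros_like(x)
    tv_grad[:-1] -= w                             # each d_i touches x_i ...
    tv_grad[1:] += w                              # ... and x_{i+1}
    x -= step * ((x - y) + lam * tv_grad)         # data term + smoothed TV term
```

The smoothing parameter eps controls how closely the differentiable surrogate tracks the exact (nonsmooth) TV term; the interior-point method of the abstract avoids this approximation entirely.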
The effect of solute on the homogeneous crystal nucleation frequency in metallic melts
NASA Technical Reports Server (NTRS)
Thompson, C. V.; Spaepen, F.
1982-01-01
A complete calculation that extends the classical theory for crystal nucleation in pure melts to binary alloys has been made. Using a regular solution model, approximate expressions have been developed for the free energy change upon crystallization as a function of solute concentration. They are used, together with model-based estimates of the interfacial tension, to calculate the nucleation frequency. The predictions of the theory for the maximum attainable undercooling are compared with existing experimental results for non-glass forming alloys. The theory is also applied to several easy glass-forming alloys (Pd-Si, Au-Si, Fe-B) for qualitative comparison with the present experimental experience on the ease of glass formation, and for assessment of the potential for formation of the glass in bulk.
Selection of regularization parameter for l1-regularized damage detection
NASA Astrophysics Data System (ADS)
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
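The discrepancy-principle strategy described above can be sketched on a synthetic sparse-recovery problem: solve the l1-regularized problem over a grid of regularization parameters (here with plain iterative soft-thresholding, a standard solver, not necessarily the one used by the authors) and pick the parameter whose residual power best matches the expected noise power. Sizes, noise level, and the "damage" vector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, sigma = 80, 120, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)    # columns roughly unit norm
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [1.0, -0.8, 0.6]          # sparse "damage" vector
b = A @ x_true + sigma * rng.standard_normal(m)

def ista(lam, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of gradient
    x = np.zeros(n)
    for _ in range(iters):
        g = x - (A.T @ (A @ x - b)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

# Discrepancy principle: residual power should match expected noise power.
target = m * sigma ** 2
lams = np.logspace(-3, 0, 15)
residuals = [np.sum((A @ ista(lam) - b) ** 2) for lam in lams]
lam_star = lams[int(np.argmin([abs(r - target) for r in residuals]))]
x_hat = ista(lam_star)
```

Scanning the residual and solution norms over the same grid of lams also reproduces the first strategy of the abstract, accepting the range of parameters for which both norms stay small rather than one single value.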
Saddlepoint Approximations in Conditional Inference
1990-06-11
Then the inverse transform can be written as (X, Y) = (T, q(T, Z)) for some function q. When the transform is not one to one, the domain should be... general regularity conditions described at the beginning of this section hold and that the solution t1 in (9) exists. Denote the inverse transform by (X, Y... density hn(t0 | z) are desired. Then the inverse transform (Y, ) = (T, q(T, Z)) exists and the variable v in the cumulant generating function K(u, v
Hydrodynamical Aspects of the Formation of Spiral-Vortical Structures in Rotating Gaseous Disks
NASA Astrophysics Data System (ADS)
Elizarova, T. G.; Zlotnik, A. A.; Istomina, M. A.
2018-01-01
This paper is dedicated to numerical simulations of spiral-vortical structures in rotating gaseous disks using a simple model based on two-dimensional, non-stationary, barotropic Euler equations with a body force. The results suggest the possibility of a purely hydrodynamical basis for the formation and evolution of such structures. New, axially symmetric, stationary solutions of these equations are derived that modify known approximate solutions. These solutions with added small perturbations are used as initial data in the non-stationary problem, whose solution demonstrates the formation of density arms with bifurcation. The associated redistribution of angular momentum is analyzed. The correctness of laboratory experiments using shallow water to describe the formation of large-scale vortical structures in thin gaseous disks is confirmed. The computations are based on a special quasi-gas-dynamical regularization of the Euler equations in polar coordinates.
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Hodges, Dewey H.; Leung, Martin S.; Bless, Robert R.
1991-01-01
The proposed investigation of a Matched Asymptotic Expansion (MAE) method was carried out. It was concluded that the method of MAE is not applicable to launch vehicle ascent trajectory optimization due to the lack of a suitable stretched variable. More work was done on the earlier regular perturbation approach using a piecewise analytic zeroth order solution to generate a more accurate approximation. In the meantime, a singular perturbation approach using manifold theory is currently under investigation. Work on a general computational environment based on the use of MACSYMA and the weak Hamiltonian finite element method continued during this period. This methodology is capable of solving a large class of optimal control problems.
Shi, C; Gao, S; Gun, S
1997-06-01
The sample is digested with 6% NaOH solution, and a 50 microl aliquot is used for protein content analysis by the Coomassie Brilliant Blue G250 method; the residue is diluted with an equal volume of 0.4% lanthanum-EDTA solution. Its calcium, magnesium and potassium contents are determined by AAS with a quick-pulsed nebulization technique. When a self-made micro-sampling device is used, only 20 microl of sample is needed, approximately 1/10 to 1/20 of the sample volume required for conventional determination. Sensitivity, precision and rate of recovery agree well with those of the regular wet-ashing method.
Wang, Ya-Xuan; Gao, Ying-Lian; Liu, Jin-Xing; Kong, Xiang-Zhen; Li, Hai-Jun
2017-09-01
Identifying differentially expressed genes from among thousands of genes is a challenging task. Robust principal component analysis (RPCA) is an efficient method for the identification of differentially expressed genes. The RPCA method uses the nuclear norm to approximate the rank function. However, theoretical studies have shown that the nuclear norm minimizes all singular values, so it may not be the best solution to approximate the rank function. The truncated nuclear norm is defined as the sum of some smaller singular values, which may achieve a better approximation of the rank function than the nuclear norm. In this paper, a novel method is proposed by replacing the nuclear norm of RPCA with the truncated nuclear norm, which is named robust principal component analysis regularized by truncated nuclear norm (TRPCA). The method decomposes the observation matrix of genomic data into a low-rank matrix and a sparse matrix. Because the significant genes can be considered as sparse signals, the differentially expressed genes are viewed as the sparse perturbation signals. Thus, the differentially expressed genes can be identified according to the sparse matrix. The experimental results on The Cancer Genome Atlas data illustrate that the TRPCA method outperforms other state-of-the-art methods in the identification of differentially expressed genes.
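The key operation behind the truncated nuclear norm, keep the r largest singular values untouched and shrink only the rest, can be shown as a single shrinkage step. This is one inner step of such a scheme on synthetic data, not the full TRPCA algorithm, and r and tau below are illustrative assumptions.

```python
import numpy as np

def truncated_svt(M, r, tau):
    """Shrinkage for the truncated nuclear norm: the r largest singular
    values are kept intact; the remaining ones are soft-thresholded."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.concatenate([s[:r], np.maximum(s[r:] - tau, 0.0)])
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(2)
low = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 40))  # rank-5 part
spike = np.zeros((30, 40))
spike[3, 7] = 10.0                      # sparse perturbation ("significant gene")
X = truncated_svt(low + spike, r=5, tau=5.0)
```

Ordinary singular value thresholding (the nuclear-norm proximal step) would shrink all singular values by tau; keeping the top r unchanged is exactly the "better approximation of the rank function" the abstract refers to.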
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a well-developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release.
In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them from the points of view of implementation simplicity, approximation capability, and convergence properties. Finally, these methods will be applied to the European Tracer Experiment (ETEX) data and the results will be compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
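The nonnegativity constraint on release amounts combines naturally with sparse recovery: in a proximal (soft-thresholding) iteration, the l1 shrinkage and the projection onto the nonnegative orthant merge into a single one-sided threshold. The sketch below uses a generic Gaussian sensing matrix as a stand-in for the atmospheric transport operator, and two hypothetical release points; it is an illustration of the constrained-sparsity idea, not the authors' solvers.

```python
import numpy as np

# Nonnegative ISTA: min 0.5*||Ax-b||^2 + lam*||x||_1  subject to  x >= 0.
# The prox of lam*||x||_1 + indicator(x>=0) is max(x - lam/L, 0).
rng = np.random.default_rng(3)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in transport matrix
x_true = np.zeros(n)
x_true[[10, 60]] = [2.0, 1.0]                   # two nonnegative release points
b = A @ x_true + 0.01 * rng.standard_normal(m)

L = np.linalg.norm(A, 2) ** 2                   # step size 1/L
lam = 0.05
x = np.zeros(n)
for _ in range(2000):
    g = x - (A.T @ (A @ x - b)) / L
    x = np.maximum(g - lam / L, 0.0)            # nonnegative soft-threshold
```

A simple linear constraint such as a cap on the total release could be added by a further projection step after the threshold (e.g. rescaling x when x.sum() exceeds the cap).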
The unsaturated flow in porous media with dynamic capillary pressure
NASA Astrophysics Data System (ADS)
Milišić, Josipa-Pina
2018-05-01
In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with a dynamic capillary pressure-saturation relationship where the relaxation parameter depends on the saturation. Following the approach given in [13], the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit when the regularization parameter goes to zero are obtained by using appropriate test functions, motivated by the fact that the considered PDE allows a natural generalization of the classical Kullback entropy. Finally, special care was taken in obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the obtained a priori estimates on the saturation.
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.
1991-01-01
A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
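The Tikhonov-plus-L-curve machinery mentioned above can be sketched independently of the Sinc-Galerkin discretization: solve the regularized normal equations over a grid of parameters and locate the L-curve corner as the point of maximum curvature in the (log residual norm, log solution norm) plane. The toy ill-conditioned system below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 5, n)              # ill-conditioned spectrum
A = U @ np.diag(s) @ U.T
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-4 * rng.standard_normal(n)

lams = np.logspace(-6, 0, 40)
rho, eta, errs = [], [], []
for lam in lams:
    # Closed-form Tikhonov solution: min ||Ax-b||^2 + lam^2 * ||x||^2
    x = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)
    rho.append(np.log(np.linalg.norm(A @ x - b)))   # log residual norm
    eta.append(np.log(np.linalg.norm(x)))           # log solution norm
    errs.append(np.linalg.norm(x - x_true))
rho, eta = np.array(rho), np.array(eta)

# L-curve corner = point of maximum discrete curvature of (rho, eta)
d1r, d1e = np.gradient(rho), np.gradient(eta)
d2r, d2e = np.gradient(d1r), np.gradient(d1e)
kappa = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
i_corner = int(np.argmax(kappa))
lam_corner = lams[i_corner]
```

Under-regularized solutions sit on the steep (large solution norm) branch and over-regularized ones on the flat (large residual) branch; the corner balances the two, which is the heuristic behind the L-curve method.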
Backscattering and Nonparaxiality Arrest Collapse of Damped Nonlinear Waves
NASA Technical Reports Server (NTRS)
Fibich, G.; Ilan, B.; Tsynkov, S.
2002-01-01
The critical nonlinear Schrodinger equation (NLS) models the propagation of intense laser light in Kerr media. This equation is derived from the more comprehensive nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. It is known that if the input power of the laser beam (i.e., the L(sub 2) norm of the initial solution) is sufficiently high, then the NLS model predicts that the beam will self-focus to a point (i.e., collapse) at a finite propagation distance. Mathematically, this behavior corresponds to the formation of a singularity in the solution of the NLS. A key question which has been open for many years is whether the solution to the NLH, i.e., the 'parent' equation, may nonetheless exist and remain regular everywhere, in particular for those initial conditions (input powers) that lead to blowup in the NLS. In the current study, we address this question by introducing linear damping into both models and subsequently comparing the numerical solutions of the damped NLH (boundary-value problem) with the corresponding solutions of the damped NLS (initial-value problem). Linear damping is introduced in much the same way as done when analyzing the classical constant-coefficient Helmholtz equation using the limiting absorption principle. Numerically, we have found that it provides a very efficient tool for controlling the solutions of both the NLH and NLS. In particular, we have been able to identify initial conditions for which the NLS solution does become singular, whereas the NLH solution still remains regular everywhere. We believe that our finding of a larger domain of existence for the NLH than that for the NLS is accounted for by precisely those mechanisms that have been neglected when deriving the NLS from the NLH, i.e., nonparaxiality and backscattering.
An Onsager Singularity Theorem for Turbulent Solutions of Compressible Euler Equations
NASA Astrophysics Data System (ADS)
Drivas, Theodore D.; Eyink, Gregory L.
2017-12-01
We prove that bounded weak solutions of the compressible Euler equations will conserve thermodynamic entropy unless the solution fields have sufficiently low space-time Besov regularity. A quantity measuring kinetic energy cascade will also vanish for such Euler solutions, unless the same singularity conditions are satisfied. It is shown furthermore that strong limits of solutions of compressible Navier-Stokes equations that are bounded and exhibit anomalous dissipation are weak Euler solutions. These inviscid limit solutions have non-negative anomalous entropy production and kinetic energy dissipation, with both vanishing when solutions are above the critical degree of Besov regularity. Stationary, planar shocks in Euclidean space with an ideal-gas equation of state provide simple examples that satisfy the conditions of our theorems and which demonstrate sharpness of our L3-based conditions. These conditions involve space-time Besov regularity, but we show that they are satisfied by Euler solutions that possess similar space regularity uniformly in time.
Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument
NASA Astrophysics Data System (ADS)
Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.
2015-06-01
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling in the voltage domain of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and we perform the Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This is the key to introducing and proving the formula for calculating the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples.
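The PCA-plus-Tikhonov recovery idea, learn a compact basis from training signals, then reconstruct a full waveform from a few samples in closed form, can be sketched with hypothetical shapes (Gaussian pulses stand in for scintillator signals; none of the sizes or the lam value come from the J-PET pipeline).

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 100)
centers = rng.uniform(0.3, 0.7, 200)
train = np.exp(-((t - centers[:, None]) ** 2) / 0.01)   # 200 training "signals"
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:10].T                        # 100x10 PCA basis (compaction property)

x = np.exp(-((t - 0.55) ** 2) / 0.01)          # unseen signal to recover
idx = np.arange(5, 100, 8)                     # only 12 sampled time bins
y = x[idx] + 0.01 * rng.standard_normal(idx.size)

# Closed-form Tikhonov solution in PCA coefficients: no iterations needed.
lam = 1e-2
Bs = B[idx]                                    # basis rows at the sampled bins
a = np.linalg.solve(Bs.T @ Bs + lam * np.eye(10), Bs.T @ (y - mean[idx]))
x_hat = mean + B @ a                           # recovered full waveform
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Because the estimate is a linear map of the samples, its covariance (and hence a recovery-error formula of the kind the abstract proves) follows directly from the noise covariance, with the error shrinking as more samples are acquired.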
Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions
NASA Astrophysics Data System (ADS)
Ilgen, Marc R.
This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. 
This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.
Properties of Solutions to the Irving-Mullineux Oscillator Equation
NASA Astrophysics Data System (ADS)
Mickens, Ronald E.
2002-10-01
A nonlinear differential equation is given in the book by Irving and Mullineux to model certain oscillatory phenomena.^1 They use a regular perturbation method^2 to obtain a first approximation to the assumed periodic solution. However, their result is not uniformly valid, and this means that the obtained solution is not periodic because of the presence of secular terms. We show that their way of proceeding is not only incorrect, but that in fact the actual solution to this differential equation is a damped oscillatory function. Our proof uses the method of averaging^2,3 and the qualitative theory of differential equations for 2-dim systems. A nonstandard finite-difference scheme is used to calculate numerical solutions for the trajectories in phase-space. References: ^1J. Irving and N. Mullineux, Mathematics in Physics and Engineering (Academic, 1959); section 14.1. ^2R. E. Mickens, Nonlinear Oscillations (Cambridge University Press, 1981). ^3D. W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations (Oxford, 1987).
Early-Time Solution of the Horizontal Unconfined Aquifer in the Buildup Phase
NASA Astrophysics Data System (ADS)
Gravanis, Elias; Akylas, Evangelos
2017-10-01
We derive the early-time solution of the Boussinesq equation for the horizontal unconfined aquifer in the buildup phase under constant recharge and zero inflow. The solution is expressed as a power series of a suitable similarity variable, which is constructed so as to satisfy the boundary conditions at both ends of the aquifer; that is, it is a polynomial approximation of the exact solution. The series turns out to be asymptotic, and it is regularized by resummation techniques of the kind used to sum divergent series. The outflow rate in this regime is linear in time, and the (dimensionless) coefficient is calculated to eight significant figures. The local error of the series is quantified by its deviation from satisfying the self-similar Boussinesq equation at every point. The local error turns out to be everywhere positive; hence, so is the integrated error, which in turn quantifies the degree of convergence of the series to the exact solution.
Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach
NASA Technical Reports Server (NTRS)
Das, Santanu; Oza, Nikunj C.
2011-01-01
In this paper we propose an innovative learning algorithm - a variation of the One-class nu Support Vector Machines (SVMs) learning algorithm - to produce sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class Support Vector Machines algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct method for maximizing the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
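The generalized p-shrinkage mapping that makes the l1 subproblems explicit has a one-line form. The variant below (threshold scaled by |x|^(p-1)) is one common form of p-shrinkage, written here as a standalone sketch rather than the exact mapping of the paper; for p = 1 it reduces to ordinary soft-thresholding.

```python
import numpy as np

def p_shrink(x, tau, p):
    """One common form of generalized p-shrinkage.  For p = 1 this is the
    familiar soft-threshold; for p < 1 large entries are shrunk less,
    giving a closer surrogate to l0-style sparsity than the l1 prox."""
    return np.sign(x) * np.maximum(np.abs(x) - tau * np.abs(x) ** (p - 1), 0.0)

v = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
soft = p_shrink(v, 0.4, 1.0)        # ordinary soft-thresholding
half = p_shrink(v, 0.4, 0.5)        # p = 1/2 shrinkage
```

Note how the p = 1/2 map zeroes small entries more aggressively while leaving large entries nearly untouched, which is exactly the behaviour that improves sparsity exploitation over the l1-based form.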
Construction of normal-regular decisions of Bessel typed special system
NASA Astrophysics Data System (ADS)
Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.
2017-09-01
A special system of second-order partial differential equations is studied whose solution is expressed through degenerate hypergeometric functions reducing to the Bessel functions of two variables. To construct solutions of this system near its regular and irregular singularities, we use the method of Frobenius-Latysheva, applying the concepts of rank and antirank. We prove the basic theorem that establishes the existence of four linearly independent solutions of the Bessel-type system under study. To prove the existence of normal-regular solutions, we establish necessary conditions for the existence of such solutions. The existence and convergence of a normally regular solution are shown using the notions of rank and antirank.
Zaru, Alessandro; Maccioni, Paola; Colombo, Giancarlo; Gessa, Gian Luigi
2013-10-01
Craving for chocolate is a common phenomenon, which may evolve into an addictive-like behaviour and contribute to obesity. Nepicastat is a selective dopamine β-hydroxylase (DBH) inhibitor that suppresses cocaine-primed reinstatement of cocaine seeking in rats. We verified whether nepicastat was able to modify the reinforcing and motivational properties of a chocolate solution and to prevent the reinstatement of chocolate seeking in rats. Nepicastat (25, 50 and 100 mg/kg, intraperitoneal) produced a dose-related inhibition of operant self-administration of the chocolate solution in rats under fixed-ratio 10 (FR10) and progressive-ratio schedules of reinforcement, measures of the reinforcing and motivational properties of the chocolate solution, respectively. The effect of nepicastat on the reinstatement of chocolate seeking was studied in rats in which lever-responding had been extinguished by removing the chocolate solution for approximately 8 d. Nepicastat dose-dependently suppressed the reinstatement of lever-responding triggered by a 'priming' of the chocolate solution together with cues previously associated with the availability of the reward. In a separate group of food-restricted rats trained to lever-respond for regular food pellets, nepicastat reduced FR10 lever-responding with the same potency as for the chocolate solution. Spontaneous locomotor activity was not modified by nepicastat doses that reduced self-administration of the chocolate solution and regular food pellets and suppressed the reinstatement of chocolate seeking. The results indicate that nepicastat reduces the motivation for food consumption, whether sustained by appetite or palatability. Moreover, the results suggest that DBH inhibitors may be a new class of pharmacological agents potentially useful in the prevention of relapse to food seeking in human dieters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bielecki, J.; Scholz, M.; Drozdowicz, K.
A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Owing to the very limited data set (two projection angles, only 19 lines of sight) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. This work aims to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high-resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that the method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction at JET.
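As a minimal illustration of the Phillips-Tikhonov idea described above (not the authors' implementation), the sketch below reconstructs a 1-D emissivity profile from a handful of noisy line integrals by penalizing the curvature of the solution; the geometry matrix, profile, and regularization weight are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed geometry: 5 "lines of sight" through a 20-pixel emissivity
# profile -- far fewer measurements than unknowns, as for the KN3 camera.
n_pix, n_los = 20, 5
A = rng.random((n_los, n_pix))                       # projection (geometry) matrix
x_true = np.exp(-0.5 * ((np.arange(n_pix) - 10.0) / 3.0) ** 2)
y = A @ x_true + 1e-3 * rng.standard_normal(n_los)   # noisy line integrals

# Phillips-Tikhonov: penalize curvature with a second-difference operator L
# and solve  min ||A x - y||^2 + lam * ||L x||^2  via the normal equations.
L = np.diff(np.eye(n_pix), n=2, axis=0)
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)
```

The curvature penalty plays the role of the a priori smoothness information; in the actual method the penalty shape is tied to a measured electron density profile rather than a generic second-difference operator.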
Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models
Jiang, Dingfeng; Huang, Jian
2013-01-01
Recent studies have demonstrated the theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation (SCAD) and minimax concave (MCP) penalties. The computation of concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to existing algorithms that use a local quadratic or local linear approximation of the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss but does not use any approximation of the penalty. This strategy avoids the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish the theoretical convergence properties of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size.
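The core of the MMCD idea for logistic regression — majorize the negative log-likelihood by a quadratic with the fixed curvature bound 1/4, so that each coordinate update is a closed-form MCP thresholding step with no per-iteration rescaling — can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the demo data and tuning parameters (lam, gamma, sweep count) are assumptions.

```python
import numpy as np

def soft_threshold(t, lam):
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def mmcd_logistic_mcp(X, y, lam=0.1, gamma=8.0, n_sweeps=100):
    """MM by coordinate descent for MCP-penalized logistic regression (sketch).

    The negative log-likelihood is majorized by a quadratic with fixed
    curvature v = 1/4 (an upper bound on the logistic Hessian for
    standardized columns), so every coordinate update is a closed-form
    MCP thresholding step -- no scaling factor is recomputed per update.
    Requires gamma > 1/v = 4.
    """
    n, p = X.shape
    beta = np.zeros(p)
    v = 0.25
    for _ in range(n_sweeps):
        for j in range(p):
            mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
            u = v * beta[j] + X[:, j] @ (y - mu) / n   # working statistic
            z = u / v                                  # unpenalized minimizer
            if abs(z) <= gamma * lam:
                beta[j] = soft_threshold(u, lam) / (v - 1.0 / gamma)
            else:
                beta[j] = z                            # flat region of MCP
    return beta

# Invented demo data: one strong predictor among ten standardized covariates
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)
beta_true = np.zeros(p)
beta_true[0] = 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta_true)))).astype(float)
beta_hat = mmcd_logistic_mcp(X, y)
```

Because each coordinate step minimizes a majorizer that touches the penalized objective at the current iterate, the penalized loss is non-increasing across updates, which is the descent property the paper's convergence analysis builds on.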
Seghouane, Abd-Krim; Iqbal, Asif
2017-09-01
Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from existing ones in their dictionary update stage, the steps of which are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem in which temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
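A toy version of the update described above — a power-method-style rank-one approximation whose left (temporal) vector is smoothed at every step — might look like the following. The smoothing operator, penalty weight, and synthetic data are invented for illustration; the paper's regularization acts through basis expansion rather than the generic roughness penalty used here, and the sparse-coding stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fMRI-like" data: one smooth temporal atom mixed across 30 voxels
t = np.linspace(0.0, 1.0, 100)
atom = np.sin(2.0 * np.pi * t)
X = np.outer(atom, rng.standard_normal(30)) + 0.1 * rng.standard_normal((100, 30))

# Left-regularized rank-one approximation X ~ u v^T: a power-method variant
# in which the temporal vector u is smoothed by a second-difference penalty.
lam = 5.0
D = np.diff(np.eye(100), n=2, axis=0)
M = np.linalg.inv(np.eye(100) + lam * (D.T @ D))   # smoothing solve
v = np.ones(30) / np.sqrt(30.0)
for _ in range(50):
    u = M @ (X @ v)              # regularized power step (dictionary atom)
    u /= np.linalg.norm(u)
    v = X.T @ u                  # coefficient update
```

Without the smoothing solve, the loop is exactly the power method for the leading singular pair of X; the operator M biases the recovered atom toward temporally smooth directions.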
Surface tension and density of Si-Ge melts
NASA Astrophysics Data System (ADS)
Ricci, Enrica; Amore, Stefano; Giuranno, Donatella; Novakovic, Rada; Tuissi, Ausonio; Sobczak, Natalia; Nowak, Rafal; Korpala, Bartłomiej; Bruzda, Grzegorz
2014-06-01
In this work, the surface tension and density of liquid Si-Ge alloys were determined by the pendant drop method. Over the range of measurements, both properties show a linear temperature dependence and a nonlinear concentration dependence. Indeed, the density decreases with increasing silicon content, exhibiting a positive deviation from ideality, while the surface tension increases and deviates negatively with respect to the ideal solution model. Taking into account the Si-Ge phase diagram, of a simple lens type, the surface tension behavior of the liquid Si-Ge alloys was analyzed in the framework of the quasi-chemical approximation for regular solutions. The new experimental results were compared with the few data available in the literature, obtained by containerless methods.
Peristaltic motion of magnetohydrodynamic viscous fluid in a curved circular tube
NASA Astrophysics Data System (ADS)
Yasmeen, Shagufta; Okechi, Nnamdi Fidelis; Anjum, Hafiz Junaid; Asghar, Saleem
In this paper we investigate the peristaltic flow of a viscous fluid through a three-dimensional curved tube in the presence of an applied magnetic field. We present a mathematical model and an asymptotic solution of the three-dimensional Navier-Stokes equations under the assumptions of small inertial forces and the long-wavelength approximation. The effects of the curvature of the tube are of particular interest. The solution is sought as a regular perturbation expansion in the small curvature parameter. It is noted that the velocity field is more sensitive to the curvature of the tube than the pressure gradient is. It is shown that peristaltic magnetohydrodynamic (MHD) flow in a straight tube is the limiting case of this study.
A variational regularization of Abel transform for GPS radio occultation
NASA Astrophysics Data System (ADS)
Wee, Tae-Kwon
2018-04-01
In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, AI accumulates and propagates the measurement error downward. This error propagation is detrimental to the refractivity at lower altitudes; in particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not integrate the error-bearing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on the measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors than AI.
A noteworthy finding is that at heights and in areas where the measurement bias is presumably small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is taken from the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded from the results presented in this study that VR offers a definite advantage over AI in the quality of the retrieved refractivity.
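The essence of inverting the forward transform with a background constraint, rather than integrating the measurement directly, can be sketched on a toy linear problem. Everything below is invented for illustration (the real VR uses the Abel operator, error covariance matrices, and an iterative adjoint solver, not a direct linear solve).

```python
import numpy as np

rng = np.random.default_rng(2)
m = 80
h = np.linspace(0.0, 1.0, m)

# Assumed smooth, lower-triangular forward operator standing in for the
# forward Abel transform (downward integration of the profile).
F = np.tril(np.exp(-np.abs(h[:, None] - h[None, :]))) / m
n_true = np.exp(-3.0 * h)                       # refractivity-like profile
y = F @ n_true + 1e-3 * rng.standard_normal(m)  # noisy "bending angles"

# Variational regularization: fit the measurement while staying close to a
# background profile n_bg (here a deliberately biased guess), weighted by lam.
n_bg = 1.1 * n_true
lam = 1e-2
A = F.T @ F + lam * np.eye(m)
n_vr = np.linalg.solve(A, F.T @ y + lam * n_bg)
```

Because the misfit is evaluated through the forward operator, the measurement noise is never integrated downward; the background term takes over only where the data constrain the solution weakly, mirroring the role of the ECMWF forecasts above.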
Sound propagation in a duct of periodic wall structure [numerical analysis]
NASA Technical Reports Server (NTRS)
Kurze, U.
1978-01-01
A boundary condition, which accounts for the coupling into the sections behind the duct lining, is given for a sound-absorbing duct with a periodically structured wall lining and regular partition walls. The sound field in the duct is conveniently described by the method of differences. For locally reacting walls this yields an explicit approximate solution for the propagation constant. Coupling may be accounted for by the method of differences in a transparent manner. Numerical results agree with measurements and yield information of technical relevance.
Lax Integrability and the Peakon Problem for the Modified Camassa-Holm Equation
NASA Astrophysics Data System (ADS)
Chang, Xiangke; Szmigielski, Jacek
2018-02-01
Peakons are special weak solutions of a class of nonlinear partial differential equations modelling nonlinear phenomena such as the breakdown of regularity and the onset of shocks. We show that the natural concept of weak solutions for the modified Camassa-Holm equation studied in this paper is dictated by the distributional compatibility of its Lax pair and, as a result, differs from the one proposed and used in the literature, which is based on the concept of weak solutions for equations of Burgers type. Subsequently, we give a complete construction of peakon solutions satisfying the modified Camassa-Holm equation in the sense of distributions; our approach is based on solving a certain inverse boundary value problem, whose solution hinges on a combination of classical techniques of analysis involving Stieltjes' continued fractions and multipoint Padé approximations. We propose sufficient conditions ensuring the global existence of peakon solutions and analyze the large-time asymptotic behaviour, whose special features include the formation of pairs of peakons that share asymptotic speeds, as well as a Toda-like sorting property.
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
CFD analysis of turbopump volutes
NASA Technical Reports Server (NTRS)
Ascoli, Edward P.; Chan, Daniel C.; Darian, Armen; Hsu, Wayne W.; Tran, Ken
1993-01-01
An effort is underway to develop a procedure for the regular use of CFD analysis in the design of turbopump volutes. Airflow data to be taken at NASA Marshall will be used to validate the CFD code and overall procedure. Initial focus has been on preprocessing (geometry creation, translation, and grid generation). Volute geometries have been acquired electronically and imported into the CATIA CAD system and RAGGS (Rockwell Automated Grid Generation System) via the IGES standard. An initial grid topology has been identified and grids have been constructed for turbine inlet and discharge volutes. For CFD analysis of volutes to be used regularly, a procedure must be defined to meet engineering design needs in a timely manner. Thus, a compromise must be established between making geometric approximations, the selection of grid topologies, and possible CFD code enhancements. While the initial grid developed approximated the volute tongue with a zero thickness, final computations should more accurately account for the geometry in this region. Additionally, grid topologies will be explored to minimize skewness and high aspect ratio cells that can affect solution accuracy and slow code convergence. Finally, as appropriate, code modifications will be made to allow for new grid topologies in an effort to expedite the overall CFD analysis process.
A regularization corrected score method for nonlinear regression models with covariate error.
Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna
2013-03-01
Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.
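The bias that motivates such corrections is easy to demonstrate: fitting a logistic model to an error-prone surrogate covariate attenuates the slope toward zero. The simulation below (invented data; a plain Newton-Raphson fit, not the corrected score method itself) illustrates the effect the regularized corrected score approach is designed to remove.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.standard_normal(n)                  # true covariate
beta = 1.0
p_true = 1.0 / (1.0 + np.exp(-beta * x))
y = (rng.random(n) < p_true).astype(float)
w = x + rng.standard_normal(n)              # surrogate with measurement error

def fit_logistic_1d(z, y, n_iter=50):
    """Newton-Raphson for a one-covariate logistic model with intercept."""
    Z = np.column_stack([np.ones_like(z), z])
    b = np.zeros(2)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(Z @ b)))
        wts = mu * (1.0 - mu)
        b += np.linalg.solve((Z * wts[:, None]).T @ Z, Z.T @ (y - mu))
    return b

b_true_cov = fit_logistic_1d(x, y)[1]   # close to the true slope
b_naive = fit_logistic_1d(w, y)[1]      # attenuated toward zero
```

With the error variance equal to the covariate variance, the naive slope estimate is roughly halved; the corrected score machinery recovers (approximately) unbiased estimates without assuming a distribution for the true covariate.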
Regular black holes: Electrically charged solutions, Reissner-Nordstroem outside a de Sitter core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemos, Jose P. S.; Zanchin, Vilson T.; Centro de Ciencias Naturais e Humanas, Universidade Federal do ABC, Rua Santa Adelia, 166, 09210-170, Santo Andre, Sao Paulo
2011-06-15
To have a correct picture of a black hole as a whole, it is of crucial importance to understand its interior. The singularities that lurk inside the horizon of the usual Kerr-Newman family of black hole solutions signal an endpoint to the physical laws and, as such, should be substituted in one way or another. A proposal that has been around for some time is to replace the singular region of the spacetime by a region containing some form of matter or false vacuum configuration that can cohabit with the black hole interior. Black holes without singularities are called regular black holes. In the present work, regular black hole solutions are found within general relativity coupled to Maxwell's electromagnetism and charged matter. We show that there are objects which correspond to regular charged black holes, whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and the boundary between the two regions is made of an electrically charged, spherically symmetric coat. There are several types of solutions: regular nonextremal black holes with a null matter boundary, regular nonextremal black holes with a timelike matter boundary, regular extremal black holes with a timelike matter boundary, and regular overcharged stars with a timelike matter boundary. The main physical and geometrical properties of such charged regular solutions are analyzed.
Representation of the exact relativistic electronic Hamiltonian within the regular approximation
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2003-12-01
The exact relativistic Hamiltonian for electronic states is expanded in terms of energy-independent linear operators within the regular approximation. An effective relativistic Hamiltonian is obtained which yields, at lowest order, the infinite-order regular approximation (IORA) directly, rather than the zeroth-order regular approximation. Further perturbational expansion of the exact relativistic electronic energy using the effective Hamiltonian leads to new methods based on ordinary [IORAn] or double [IORAn(2)] perturbation theory (n: order of expansion), which provide improved energies in atomic calculations. Energies calculated with IORA4 and IORA3(2) are accurate up to order c^(-20). Furthermore, IORA is improved by using the IORA wave function to calculate the Rayleigh quotient, which, if minimized, leads to the exact relativistic energy. The outstanding performance of this new IORA method, coined scaled IORA, is documented in atomic and molecular calculations.
NASA Astrophysics Data System (ADS)
Akiya, Shunta; Kikuchi, Tatsuya; Natsui, Shungo; Suzuki, Ryosuke O.
2017-05-01
Anodizing of aluminum in an arsenic acid solution is reported for the fabrication of anodic porous alumina. The highest potential difference (voltage) without oxide burning increased as the temperature and the concentration of the arsenic acid solution decreased, and a high anodizing potential difference of 340 V was achieved. An ordered porous alumina with several tens of cells was formed in 0.1-0.5 M arsenic acid solutions at 310-340 V for 20 h. However, the regularity of the porous alumina was not improved via anodizing for 72 h. No pore sealing behavior of the porous alumina was observed upon immersion in boiling distilled water, and it may be due to the formation of an insoluble complex on the oxide surface. The porous alumina consisted of two different layers: a hexagonal alumina layer that contained arsenic from the electrolyte and a pure alumina honeycomb skeleton. The porous alumina exhibited a white photoluminescence emission at approximately 515 nm under UV irradiation at 254 nm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj
This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded in primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers, where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, subject to communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities while enabling the near real-time pursuit of solutions of AC OPF problems.
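The primal-dual mechanism on a regularized Lagrangian can be illustrated on a scalar toy problem standing in for an OPF instance; the objective, constraint, regularization weight, and gains below are all invented for the sketch.

```python
# Toy stand-in for an OPF instance: minimize (x - r)^2 subject to x <= 1
# (think of r as a requested setpoint and x <= 1 as a voltage cap).
# Primal-dual gradient steps on the regularized Lagrangian
#   L(x, mu) = (x - r)^2 + mu * (x - 1) - (eps / 2) * mu**2
eps = 1e-3     # dual regularization (keeps mu bounded, enables tracking)
alpha = 0.05   # controller gain
r = 2.0
x, mu = 0.0, 0.0
for _ in range(5000):
    x = x - alpha * (2.0 * (x - r) + mu)                # primal descent
    mu = max(0.0, mu + alpha * ((x - 1.0) - eps * mu))  # projected dual ascent
# x settles near the constrained optimum x* = 1 with multiplier mu* ~ 2
```

The dual regularization term slightly perturbs the KKT point but makes the saddle dynamics contractive, which is what permits the convergence and tracking guarantees under packet losses and partial updates claimed above.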
Homentcovschi, Dorel; Murray, Bruce T.; Miles, Ronald N.
2013-01-01
There are a number of applications for microstructure devices consisting of a regular pattern of perforations, and many of these utilize fluid damping. For the analysis of viscous damping and for calculating the spring force in some cases, it is possible to take advantage of the regular hole pattern by assuming periodicity. Here a model is developed to determine these quantities based on the solution of the Stokes' equations for the air flow. Viscous damping is directly related to thermal-mechanical noise. As a result, the design of perforated microstructures with minimal viscous damping is of real practical importance. A method is developed to calculate the damping coefficient in microstructures with periodic perforations. The result can be used to minimize squeeze film damping. Since micromachined devices have finite dimensions, the periodic model for the perforated microstructure has to be associated with the calculation of some frame (edge) corrections. Analysis of the edge corrections has also been performed. Results from analytical formulas and numerical simulations match very well with published measured data.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard
2008-02-01
In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
Uncertainty propagation in orbital mechanics via tensor decomposition
NASA Astrophysics Data System (ADS)
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form, where all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, the system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
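Stripped of the FPE specifics, the decomposition-plus-ALS machinery reduces to the classical CP-ALS iteration. The sketch below factors an exactly low-rank 3-way tensor; the dimensions, rank, and data are invented, and the actual solver operates on a 7-way space-time discretization with Chebyshev differentiation rather than on a raw data tensor.

```python
import numpy as np

def kr(U, V):
    """Column-wise Khatri-Rao product of two factor matrices."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """CANDECOMP/PARAFAC of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem in one factor,
        # using (X^T X) * (Y^T Y) = (X kr Y)^T (X kr Y) for the Gram matrix.
        A = T.reshape(I, -1) @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(T, 1, 0).reshape(J, -1) @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(T, 2, 0).reshape(K, -1) @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Exact rank-2 test tensor built from random factors
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
```

The storage cost of the factored form grows linearly in the number of dimensions, which is what makes a direct 6-D (plus time) pdf representation tractable.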
On the regularity criterion of weak solutions for the 3D MHD equations
NASA Astrophysics Data System (ADS)
Gala, Sadek; Ragusa, Maria Alessandra
2017-12-01
The paper deals with the 3D incompressible MHD equations and aims at improving a regularity criterion in terms of the horizontal gradients of the velocity and magnetic fields. It is proved that the weak solution (u, b) becomes regular provided that the horizontal gradients of the velocity and magnetic fields satisfy a suitable integrability condition.
NASA Astrophysics Data System (ADS)
Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2014-03-01
We will describe a general formalism for obtaining spatially localized (``sparse'') solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (``compressed modes''). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
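A minimal 1-D realization of compressed modes — an L1 penalty added to a quadratic (Hamiltonian) form, minimized by proximal gradient steps with renormalization — is sketched below for a free-particle lattice Laplacian. The step size, penalty weight, and discretization are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

n = 200
# Free-particle Hamiltonian: 1-D lattice Laplacian, stencil [-1, 2, -1]
H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Minimize <psi, H psi> + mu * ||psi||_1 on the unit sphere by proximal
# gradient steps followed by renormalization (assumed parameters).
rng = np.random.default_rng(0)
psi = rng.standard_normal(n)
psi /= np.linalg.norm(psi)
tau, mu = 0.2, 0.1
for _ in range(2000):
    psi = soft(psi - tau * (H @ psi), tau * mu)   # proximal gradient step
    psi /= np.linalg.norm(psi)                    # keep the mode normalized

support = np.mean(np.abs(psi) > 0)   # fraction of sites in the support
```

The soft-thresholding step produces exact zeros, so the converged mode has genuinely compact support, which is the property that enables linear-scaling algorithms built on such a basis.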
Spectral/hp element methods: Recent developments, applications, and perspectives
NASA Astrophysics Data System (ADS)
Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.
2018-02-01
The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in the approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
NASA Astrophysics Data System (ADS)
Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
2017-11-01
We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
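The database-plus-regression strategy can be mimicked on a toy first-kind Fredholm problem: generate many physically plausible inputs, push them through a smoothing kernel, and learn a ridge-regularized linear map from outputs back to inputs. Everything below — the kernel, the profile family, the noise level, and the ridge weight — is an invented stand-in for the paper's setup, and a plain linear regression replaces its more general learned regression function.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 60
s = np.linspace(0.0, 1.0, m)

# Smoothing (severely ill-conditioned) first-kind Fredholm kernel
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2.0 * 0.05 ** 2))
K /= K.sum(axis=1, keepdims=True)

def random_profile():
    """A physically plausible input: a single Gaussian bump (assumed family)."""
    c = rng.uniform(0.2, 0.8)
    w = rng.uniform(0.05, 0.2)
    a = rng.uniform(0.5, 2.0)
    return a * np.exp(-((s - c) ** 2) / (2.0 * w ** 2))

# Database of inputs and their (slightly noisy) forward outputs
F = np.array([random_profile() for _ in range(2000)])
G = F @ K.T + 1e-3 * rng.standard_normal((2000, m))

# Learn a ridge-regularized linear map from outputs back to inputs
lam = 1e-4
W = np.linalg.solve(G.T @ G + lam * np.eye(m), G.T @ F)

# Apply the learned inverse map to a previously unseen measurement
f_new = random_profile()
g_new = K @ f_new + 1e-3 * rng.standard_normal(m)
f_rec = g_new @ W
rel_err = np.linalg.norm(f_rec - f_new) / np.linalg.norm(f_new)
```

Because the regression is fit only on outputs of physically meaningful inputs, it never needs to invert the kernel in its ill-conditioned directions, which is the source of the noise robustness reported above.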
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu Benzhuo; Holst, Michael J.; Center for Theoretical Biological Physics, University of California San Diego, La Jolla, CA 92093
2010-09-20
In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for simulating electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily implemented and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are qualitatively and quantitatively evaluated to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
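The generalized p-shrinkage step mentioned above can be sketched as follows (a Chartrand-style operator; the paper's exact normalization may differ):

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage mapping (a Chartrand-style form; the paper's
    exact normalization may differ). For p = 1 this reduces to the classical
    soft-thresholding operator used in l1 minimization."""
    ax = np.abs(x)
    safe = np.where(ax > 0.0, ax, 1.0)          # avoid 0**(p-1) for p < 1
    thresh = lam ** (2.0 - p) * safe ** (p - 1.0)
    # Entries with ax == 0 return 0 because sign(0) == 0.
    return np.sign(x) * np.maximum(ax - thresh, 0.0)
```

For p < 1 the threshold grows as |x| shrinks, so small coefficients are suppressed more aggressively than under soft thresholding, which is the sense in which the p-variant exploits sparsity more directly.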
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be presented in a mathematical model as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least-squares method, where the Finite Element Method and the Euler Method are used to approximate the solution of the SIR differential equations. New infected influenza data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the contact-rate proportion of the transmission probability per day, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
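A toy sketch of the estimation idea (our own construction, not the paper's code): the SIR model is stepped forward with the Euler method, and the contact rate beta is chosen to minimize a Tikhonov-penalized least-squares misfit to observed daily incidence. The values of gamma, the penalty weight, and the prior are illustrative assumptions.

```python
import numpy as np

def sir_incidence(beta, gamma=0.25, s0=0.99, i0=0.01, days=60):
    """Euler-discretized SIR model; returns daily new-infection counts."""
    s, i = s0, i0
    inc = []
    for _ in range(days):
        new_inf = beta * s * i          # Euler step with dt = 1 day
        s -= new_inf
        i += new_inf - gamma * i
        inc.append(new_inf)
    return np.array(inc)

rng = np.random.default_rng(1)
# Synthetic "observed" incidence generated with beta = 0.5 plus noise.
data = sir_incidence(0.5) + 5e-4 * rng.standard_normal(60)

def objective(beta, lam=1e-4, beta_prior=0.4):
    misfit = np.sum((sir_incidence(beta) - data) ** 2)
    return misfit + lam * (beta - beta_prior) ** 2   # Tikhonov penalty

betas = np.linspace(0.1, 1.0, 181)
beta_hat = betas[int(np.argmin([objective(b) for b in betas]))]
```

The penalty stabilizes the estimate when the data are noisy or weakly informative; here a simple grid search replaces the finite-element machinery of the paper.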
NASA Astrophysics Data System (ADS)
Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel
2017-05-01
The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by the biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from this method not only depend continuously on the final data, but also converge strongly to the exact solution in the L2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.
NASA Astrophysics Data System (ADS)
Neves, J. C. S.
2017-06-01
In this work, we have deformed regular black holes which possess a general mass term described by a function that generalizes the Bardeen and Hayward mass functions. By using linear constraints on the energy-momentum tensor to generate metrics, the solutions presented in this work are either regular or singular. That is, within this approach, it is possible to generate regular or singular black holes from regular or singular black holes. Moreover, contrary to the Bardeen and Hayward regular solutions, the deformed regular black holes may violate the weak energy condition despite the presence of spherical symmetry. Some comments on accretion onto deformed black holes in cosmological scenarios are made.
FAST TRACK COMMUNICATION: Regularized Kerr-Newman solution as a gravitating soliton
NASA Astrophysics Data System (ADS)
Burinskii, Alexander
2010-10-01
The charged, spinning and gravitating soliton is realized as a regular solution of the Kerr-Newman (KN) field coupled with a chiral Higgs model. A regular core of the solution is formed by a domain wall bubble interpolating between the external KN solution and a flat superconducting interior. An internal electromagnetic (em) field is expelled to the boundary of the bubble by the Higgs field. The solution reveals two new peculiarities: (i) the Higgs field is oscillating, similar to the known oscillon models; (ii) the em field forms a Wilson loop on the edge of the bubble, resulting in quantization of the total angular momentum.
Regularized quasinormal modes for plasmonic resonators and open cavities
NASA Astrophysics Data System (ADS)
Kamandar Dezfouli, Mohsen; Hughes, Stephen
2018-03-01
Optical mode theory and analysis of open cavities and plasmonic particles is an essential component of optical resonator physics, offering considerable insight and efficiency for connecting to classical and quantum optical properties such as the Purcell effect. However, obtaining the dissipative modes in normalized form for arbitrarily shaped open-cavity systems is notoriously difficult, often involving complex spatial integrations, even after performing the necessary full space solutions to Maxwell's equations. The formal solutions are termed quasinormal modes, which are known to diverge in space, and additional techniques are frequently required to obtain more accurate field representations in the far field. In this work, we introduce a finite-difference time-domain technique that can be used to obtain normalized quasinormal modes using a simple dipole-excitation source, and an inverse Green function technique, in real frequency space, without having to perform any spatial integrations. Moreover, we show how these modes are naturally regularized to ensure the correct field decay behavior in the far field, and thus can be used at any position within and outside the resonator. We term these modes "regularized quasinormal modes" and show the reliability and generality of the theory by studying the generalized Purcell factor of dipole emitters near metallic nanoresonators, hybrid devices with metal nanoparticles coupled to dielectric waveguides, as well as coupled cavity-waveguides in photonic crystal slabs. We also directly compare our results with full-dipole simulations of Maxwell's equations without any approximations, and show excellent agreement.
Solving the Rational Polynomial Coefficients Based on L Curve
NASA Astrophysics Data System (ADS)
Zhou, G.; Li, X.; Yue, T.; Huang, W.; He, C.; Huang, Y.
2018-05-01
The rational polynomial coefficients (RPC) model is a generalized sensor model which can achieve high approximation accuracy, and it is widely used in the field of photogrammetry and remote sensing. The least-squares method is usually used to determine the optimal parameter solution of the rational function model. However, when the distribution of control points is not uniform or the model is over-parameterized, the coefficient matrix of the normal equations becomes singular and the normal equations become ill-conditioned; the resulting solutions are extremely unstable or even wrong. Tikhonov regularization can effectively treat such ill-conditioned equations. In this paper, we solve the ill-conditioned normal equations by a regularization method and determine the regularization parameter by the L-curve. The results of experiments on aerial-format photos show that the first-order RPC with equal denominators has the highest accuracy. A high-order RPC model is not necessary when processing frame images, as the RPC model and the projective model are almost the same. The results show that the first-order RPC model is basically consistent with the rigorous sensor model of photogrammetry. Orthorectification results of both the first-order RPC model and the Camera Model (ERDAS 9.2 platform) are similar to each other, and the maximum residuals in X and Y are 0.8174 feet and 0.9272 feet, respectively. This result shows that the RPC model can be used in aerial photogrammetry as a replacement sensor model.
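The Tikhonov-plus-L-curve machinery can be sketched on a synthetic ill-conditioned system (our own illustration, not the paper's code): solve the damped normal equations over a grid of regularization parameters, trace the curve of log residual norm versus log solution norm, and take the corner (maximum curvature) as the chosen parameter.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)          # rapidly decaying singular values
A = U @ np.diag(s) @ V.T                   # severely ill-conditioned matrix
x_true = V[:, 0] + 0.5 * V[:, 1]           # "true" parameters (well-resolved part)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

def tikhonov(lam):
    """Damped normal equations: (A^T A + lam^2 I) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

lams = 10.0 ** np.linspace(-8, 0, 60)
xs = [tikhonov(l) for l in lams]
rho = np.array([np.log(np.linalg.norm(A @ x_ - b)) for x_ in xs])  # residual norm
eta = np.array([np.log(np.linalg.norm(x_)) for x_ in xs])          # solution norm

# L-curve corner: point of maximum curvature, via finite differences.
d1r, d1e = np.gradient(rho), np.gradient(eta)
d2r, d2e = np.gradient(d1r), np.gradient(d1e)
kappa = np.abs(d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
lam_star = lams[int(np.argmax(kappa))]
x_star = tikhonov(lam_star)
```

Small lam fits the noise and inflates the solution norm; large lam damps the signal. The corner balances the two without requiring knowledge of the noise level.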
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
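Our reading of this assessment procedure can be sketched as follows (an illustration under assumed conventions, not the authors' code): for a damped least-squares inverse with G = U S V^T, the model resolution matrix is R = V diag(s_i^2/(s_i^2 + mu^2)) V^T and the unit covariance matrix is C = V diag(s_i^2/(s_i^2 + mu^2)^2) V^T, so the damping mu trades resolution against covariance.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic kernel with rapidly decaying column scales (ill-posed system).
G = rng.standard_normal((30, 10)) @ np.diag(10.0 ** np.linspace(0, -6, 10))
U, s, Vt = np.linalg.svd(G, full_matrices=False)

# "Singular value plot" rule: take mu as the first singular value that
# approaches zero (here: the first value below a small cutoff).
mu = s[int(np.argmax(s < 1e-3))]
f = s ** 2 / (s ** 2 + mu ** 2)                            # filter factors
R = Vt.T @ np.diag(f) @ Vt                                 # model resolution matrix
C = Vt.T @ np.diag(s ** 2 / (s ** 2 + mu ** 2) ** 2) @ Vt  # unit covariance matrix
```

The diagonal of C then gives error bars for the model parameters at the resolution level implied by mu.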
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2005-02-01
The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of two-electron integrals that need to be evaluated (identical to the number of two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10^-9 hartree. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.
Model reduction method using variable-separation for stochastic saddle point problems
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve the stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low-rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variant, the variable-separation by penalty method, which avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
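The MAP objective the abstract describes, cross-entropy plus a Gaussian-prior penalty, can be illustrated on a toy softmax classifier (a minimal sketch, not the BRNN-LM itself; the data and the hyperparameter alpha are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def map_objective(W, X, y, alpha):
    """Regularized cross-entropy: a Gaussian prior on the parameters adds
    the penalty (alpha/2)||W||^2 to the cross-entropy error, so minimizing
    this objective is maximum a posteriori estimation."""
    p = softmax(X @ W)
    ce = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))  # cross-entropy
    return ce + 0.5 * alpha * np.sum(W ** 2)                # Gaussian prior term

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 5))         # toy features
y = rng.integers(0, 3, size=100)          # toy class labels
W = rng.standard_normal((5, 3))           # toy parameters
```

In the paper's Bayesian treatment, alpha itself is not fixed by hand but estimated by maximizing the marginal likelihood; here it is simply a given constant.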
Partial regularity of weak solutions to a PDE system with cubic nonlinearity
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Xu, Xiangsheng
2018-04-01
In this paper we investigate regularity properties of weak solutions to a PDE system that arises in the study of biological transport networks. The system consists of a possibly singular elliptic equation for the scalar pressure of the underlying biological network coupled to a diffusion equation for the conductance vector of the network. There are several different types of nonlinearities in the system. Of particular mathematical interest is a term that is a polynomial function of solutions and their partial derivatives and this polynomial function has degree three. That is, the system contains a cubic nonlinearity. Only weak solutions to the system have been shown to exist. The regularity theory for the system remains fundamentally incomplete. In particular, it is not known whether or not weak solutions develop singularities. In this paper we obtain a partial regularity theorem, which gives an estimate for the parabolic Hausdorff dimension of the set of possible singular points.
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview, multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model-function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norms. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification.
Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibited superior computational efficiency over the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, we demonstrated that bioluminescent sources can be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm outperformed both the heuristic regularization-parameter choice method and the L-curve method in terms of computational speed and localization error.
Improvements in GRACE Gravity Fields Using Regularization
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals (a frequent consequence of signal suppression from regularization). Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial-extent events, such as the Great Sumatra-Andaman Earthquake of 2004, are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and the Nile, are clearly evident, in contrast to noisy estimates from RL04.
The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
Noise effects in nonlinear biochemical signaling
NASA Astrophysics Data System (ADS)
Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert
2012-01-01
It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show that even when such nonphysical fluctuations are rare, an exact solution of the Gaussian model shows that the model can yield unphysical results. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to yield an approximate model, that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.
NASA Astrophysics Data System (ADS)
Chamorro, Diego; Lemarié-Rieusset, Pierre-Gilles; Mayoufi, Kawther
2018-04-01
We study the role of the pressure in the partial regularity theory for weak solutions of the Navier-Stokes equations. By introducing the notion of dissipative solutions, due to Duchon and Robert (Nonlinearity 13:249-255, 2000), we provide a generalization of the Caffarelli, Kohn and Nirenberg theory. Our approach sheds new light on the role of the pressure in this theory in connection with Serrin's local regularity criterion.
Early-time solution of the horizontal unconfined aquifer in the build-up phase
NASA Astrophysics Data System (ADS)
Gravanis, Elias; Akylas, Evangelos
2017-04-01
The Boussinesq equation is a dynamical equation for the free surface of saturated subsurface flows over an impervious bed. The Boussinesq equation is non-linear. The non-linearity comes from the reduction of the dimensionality of the problem: the flow is assumed to be vertically homogeneous, therefore the flow rate through a cross section of the flow is proportional to the free surface height times the hydraulic gradient, which is assumed to be equal to the slope of the free surface (Dupuit approximation). In general, 'vertically' means normally to the bed; combining the Dupuit approximation with the continuity equation leads to the Boussinesq equation. There are very few transient exact solutions. Self-similar solutions have been constructed in the past by various authors. A power-series type of solution was derived for a self-similar Boussinesq equation by Barenblatt in 1990, and that type of solution has generated a certain amount of literature. For the unconfined flow case with zero recharge rate, Boussinesq derived an exact solution for the horizontal aquifer assuming separation of variables; this is actually an exact asymptotic solution of the horizontal-aquifer recession phase for late times. The kinematic wave is an interesting solution obtained by dropping the non-linear term in the Boussinesq equation. Although it is an approximate solution, holding well only for small values of the Henderson and Wooding λ parameter (that is, for steep slopes, high conductivity or small recharge rate), it is asymptotically exact with respect to that parameter. In the present work we consider the case of the unconfined subsurface flow over a horizontal bed in the build-up phase under constant recharge rate.
This is a case with an infinite Henderson and Wooding parameter, that is, the limiting case where the non-linear term is present in the Boussinesq equation while the linear spatial derivative term drops out. Nonetheless, no analogue of the kinematic wave or the Boussinesq separable solution exists in this case. The late-time state of the build-up phase under constant recharge rate is simply the steady-state solution. Our aim is to construct the early-time asymptotic solution of this problem. The solution is expressed as a power series in a suitable similarity variable, which is constructed so as to satisfy the boundary conditions at both ends of the aquifer; that is, it is a polynomial approximation of the exact solution. The series turns out to be asymptotic, and it is regularized by re-summation techniques of the kind used to assign values to divergent series. The outflow rate in this regime is linear in time, and the (dimensionless) coefficient is calculated to eight significant figures. The local error of the series is quantified by its deviation from satisfying the self-similar Boussinesq equation at every point. The local error turns out to be everywhere positive; hence, so is the integrated error, which in turn quantifies the degree of convergence of the series to the exact solution.
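A minimal illustration of regularizing a series by re-summation (our own toy example, not the paper's computation): a Padé approximant built from the first few series coefficients can remain finite, and here even exact, where the partial sums diverge.

```python
import numpy as np

def pade(c, m, n):
    """Return coefficient arrays (p, q), lowest degree first, of the [m/n]
    Pade approximant for a series with Taylor coefficients c[0], c[1], ...
    Requires len(c) >= m + n + 1."""
    c = np.asarray(c, dtype=float)
    # Denominator: q[0] = 1 and sum_j q[j] c[m+k-j] = 0 for k = 1..n.
    A = np.array([[c[m + k - j] for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator: p[i] = sum_j q[j] c[i-j].
    p = np.array([sum(q[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return p, q

def eval_pade(p, q, x):
    # np.polyval expects highest-degree coefficients first.
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# Geometric series 1 + x + x^2 + ... : the partial sums diverge at x = 2,
# but the [1/1] Pade approximant reproduces the analytic value 1/(1-x).
c = [1.0, 1.0, 1.0]
p, q = pade(c, 1, 1)
```

For a genuinely asymptotic series, as in the abstract, the re-summed value plays the role of the definition of the sum.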
Regular black holes in f(T) Gravity through a nonlinear electrodynamics source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junior, Ednaldo L.B.; Rodrigues, Manuel E.; Houndjo, Mahouton J.S., E-mail: ednaldobarrosjr@gmail.com, E-mail: esialg@gmail.com, E-mail: sthoundjo@yahoo.fr
2015-10-01
We seek a new class of exact regular black hole solutions in f(T) gravity with non-linear electrodynamics as the material content, assuming spherical symmetry in 4D. The equations of motion recover various solutions of General Relativity in the particular case f(T)=T. We develop a powerful method for finding exact solutions and obtain the first new class of regular black hole solutions in f(T) theory, in which all geometric scalars vanish at the origin of the radial coordinate and are finite everywhere, as well as a new class of singular black holes.
Regularized wave equation migration for imaging and data reconstruction
NASA Astrophysics Data System (ADS)
Kaplan, Sam T.
The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's functions using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest-model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated. 
The linearized operators are expensive, encouraging their parallel implementation. For the source-receiver parameterization of the scattering potential this parallelization is non-trivial. Seismic data is typically corrupted by various types of noise. Sparse coding can be used to suppress noise prior to migration. It is a method that stems from information theory and that we apply to noise suppression in seismic data.
Synthesis of MCMC and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows one to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs and also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is the use of the concept of a cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.
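The cycle-basis idea can be made concrete with a small illustrative sketch (not the paper's Worm or rejection machinery): every non-tree edge of a spanning forest closes exactly one independent cycle, and these |E| - |V| + #components fundamental cycles form a basis for the graph's cycle space.

```python
def fundamental_cycle_basis(n, edges):
    """Fundamental cycle basis of an undirected graph on nodes 0..n-1.

    Build a spanning forest with DFS; each non-tree edge (u, v) closes
    exactly one cycle: the tree path from u to v plus the edge itself.
    """
    adj = {i: [] for i in range(n)}
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))

    parent, tree_edges = {}, set()
    for root in range(n):
        if root in parent:
            continue
        parent[root] = (None, None)
        stack = [root]
        while stack:
            u = stack.pop()
            for v, idx in adj[u]:
                if v not in parent:
                    parent[v] = (u, idx)
                    tree_edges.add(idx)
                    stack.append(v)

    def path_to_root(u):
        path = [u]
        while parent[u][0] is not None:
            u = parent[u][0]
            path.append(u)
        return path

    basis = []
    for idx, (u, v) in enumerate(edges):
        if idx in tree_edges:
            continue
        pu, pv = path_to_root(u), path_to_root(v)
        # Trim the common tail above the lowest common ancestor.
        while len(pu) > 1 and len(pv) > 1 and pu[-2] == pv[-2]:
            pu.pop(); pv.pop()
        basis.append(pu[:-1] + pv[::-1])  # u .. LCA .. v, closed by (u, v)
    return basis

# A 4-cycle with one chord: |E| - |V| + components = 5 - 4 + 1 = 2 basis cycles.
cycles = fundamental_cycle_basis(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
print(cycles)
```

Any generalized loop can then be expressed as a symmetric difference of such basis cycles, which is the decomposition the rejection-free scheme exploits.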
Regularity theory for general stable operators
NASA Astrophysics Data System (ADS)
Ros-Oton, Xavier; Serra, Joaquim
2016-06-01
We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.
Vitrification-based cryopreservation of Drosophila embryos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreuders, P.D.; Mazur, P.
1994-12-31
Currently, over 30,000 strains of Drosophila melanogaster are maintained by geneticists through regular transfer of breeding stocks. A more cost-effective solution is to cryopreserve their embryos. Cooling and warming rates >10,000°C/min are required to prevent chilling injury. To avoid the lethal intracellular ice normally produced at such high cooling rates, it is necessary to use ≥50% (w/w) concentrations of glass-inducing solutes to vitrify the embryos. Differential scanning calorimetry (DSC) is used to develop and evaluate ethylene glycol- and polyvinylpyrrolidone-based vitrification solutions. The resulting solution consists of 8.5 M ethylene glycol + 10% polyvinylpyrrolidone in D-20 Drosophila culture medium. A two-stage method is used for the introduction and concentration of these solutes within the embryo. The method reduces the exposure time to the solution and, consequently, reduces toxicity. Both DSC and freezing experiments suggest that, while twelve-hour embryos will vitrify at cooling rates >200°C/min, they will devitrify and be killed even at moderately rapid warming rates of ~1,900°C/min. Very rapid warming (~100,000°C/min) results in variable numbers of successfully cryopreserved embryos. This sensitivity to warming rate is typical of devitrification. The variability in survival is reduced by using embryos of a precisely determined embryonic stage. Vitrification of the older, fifteen-hour embryos yields an optimized hatching rate of 68%, with 35-40% of the resulting larvae developing to normal adults. The success rate in embryos of this age may reflect a reduced sensitivity to limited devitrification or a more even distribution of the ethylene glycol within the embryo.
NASA Astrophysics Data System (ADS)
Boschi, Lapo
2006-10-01
I invert a large set of teleseismic phase-anomaly observations to derive tomographic maps of fundamental-mode surface-wave phase velocity, first via ray theory, then accounting for finite-frequency effects through scattering theory, in the far-field approximation and neglecting mode coupling. I make use of a multiple-resolution pixel parametrization which, under the assumption of sufficient data coverage, should be adequate to represent strongly oscillatory Fréchet kernels. The parametrization is finer over North America, a region particularly well covered by the data. For each surface-wave mode for which phase-anomaly observations are available, I derive a wide spectrum of plausible, differently damped solutions; I then conduct a trade-off analysis and select as the optimal solution the model associated with the point of maximum curvature on the trade-off curve. I repeat this exercise in both theoretical frameworks, finding that the selected scattering and ray-theoretical phase-velocity maps coincide in pattern and differ only slightly in amplitude.
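The damping trade-off analysis can be sketched numerically. The toy problem below uses a hypothetical smoothing-kernel system as a stand-in for the tomographic forward operator: it builds a family of differently damped least-squares solutions and locates the corner of the log-log trade-off ("L") curve by discrete curvature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-posed linear system G m = d (a Gaussian smoothing
# matrix standing in for the actual tomographic operator).
n = 50
i = np.arange(n)
G = np.exp(-((i[:, None] - i[None, :]) / 4.0) ** 2)
m_true = np.sin(2 * np.pi * i / n)
d = G @ m_true + 1e-3 * rng.standard_normal(n)

# Family of differently damped solutions m_k = argmin ||Gm-d||^2 + k^2||m||^2.
damps = np.logspace(-6, 2, 60)
GtG, Gtd = G.T @ G, G.T @ d
res_norms, mod_norms = [], []
for k in damps:
    m = np.linalg.solve(GtG + k**2 * np.eye(n), Gtd)
    res_norms.append(np.linalg.norm(G @ m - d))
    mod_norms.append(np.linalg.norm(m))

# Discrete curvature of the trade-off curve in log-log coordinates;
# the corner (maximum curvature) marks the preferred damping.
x, y = np.log(res_norms), np.log(mod_norms)
dx, dy = np.gradient(x), np.gradient(y)
ddx, ddy = np.gradient(dx), np.gradient(dy)
curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
print(damps[np.argmax(curvature)])
```

Light damping fits the noise (small residual, large model norm); heavy damping does the opposite; the corner balances the two.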
On the membrane approximation in isothermal film casting
NASA Astrophysics Data System (ADS)
Hagen, Thomas
2014-08-01
In this work, a one-dimensional model for isothermal film casting is studied. Film casting is an important engineering process to manufacture thin films and sheets from a highly viscous polymer melt. The model equations account for variations in film width and film thickness, and arise from thinness and kinematic assumptions for the free liquid film. The first aspect of our study is a rigorous discussion of the existence and uniqueness of stationary solutions. This objective is approached via the argument principle, exploiting the homotopy invariance of a family of analytic functions. As our second objective, we analyze the linearization of the governing equations about stationary solutions. It is shown that solutions for the associated boundary-initial value problem are given by a strongly continuous semigroup of bounded linear operators. To reach this result, we cast the relevant Cauchy problem in a more accessible form. These transformed equations allow us insight into the regularity of the semigroup, thus yielding the validity of the spectral mapping theorem for the semigroup and the spectrally determined growth property.
NASA Astrophysics Data System (ADS)
Kanaun, S.; Markov, A.
2017-06-01
An efficient numerical method is developed for the solution of static elasticity problems for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions). A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shapes is considered. The problem is reduced to a system of surface integral equations for the crack opening vectors and volume integral equations for the stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used; the method based on these functions is mesh-free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has Toeplitz structure, and the Fast Fourier Transform technique can be used to calculate matrix-vector products with such matrices.
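The Toeplitz-plus-FFT remark can be illustrated in one dimension (a generic sketch, independent of the Gaussian-approximation machinery): a Toeplitz matrix embedded in a circulant matrix is diagonalized by the FFT, so matrix-vector products cost O(n log n) instead of O(n^2).

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix by a vector in O(n log n) via FFT.

    c: first column, r: first row (c[0] == r[0]). The Toeplitz matrix is
    embedded in a circulant matrix of size 2n-1, which the FFT diagonalizes.
    """
    n = len(c)
    circ = np.concatenate([c, r[:0:-1]])      # first column of the embedding
    x_pad = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x_pad))
    return y[:n].real

# Check against a dense Toeplitz product.
rng = np.random.default_rng(1)
n = 64
c = rng.standard_normal(n)                                # first column
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])  # first row
x = rng.standard_normal(n)

T = np.empty((n, n))
for i in range(n):
    for j in range(n):
        T[i, j] = c[i - j] if i >= j else r[j - i]
y_fast = toeplitz_matvec(c, r, x)
print(np.allclose(T @ x, y_fast))
```

The same trick extends blockwise to the multi-dimensional regular grids mentioned in the abstract.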
NASA Astrophysics Data System (ADS)
Salomatov, V. V.; Puzyrev, E. M.; Salomatov, A. V.
2018-05-01
A class of nonlinear problems of nonstationary radiative-convective heat transfer under the microwave action with a small penetration depth is considered in a stabilized coolant flow in a circular channel. The solutions to these problems are obtained, using asymptotic procedures at the stages of nonstationary and stationary convective heat transfer on the heat-radiating channel surface. The nonstationary and stationary stages of the solution are matched, using the "longitudinal coordinate-time" characteristic. The approximate solutions constructed on such principles correlate reliably with the exact ones at the limiting values of the operation parameters, as well as with numerical and experimental data of other researchers. An important advantage of these solutions is that they allow the determination of the main regularities of the microwave and thermal radiation influence on convective heat transfer in a channel even before performing cumbersome calculations. It is shown that, irrespective of the heat exchange regime (nonstationary or stationary), the Nusselt number decreases and the rate of the surface temperature change increases with increase in the intensity of thermal action.
Revisiting HgCl2: A solution- and solid-state 199Hg NMR and ZORA-DFT computational study
NASA Astrophysics Data System (ADS)
Taylor, R. E.; Carver, Colin T.; Larsen, Ross E.; Dmitrenko, Olga; Bai, Shi; Dybowski, C.
2009-07-01
The 199Hg chemical-shift tensor of solid HgCl2 was determined from spectra of polycrystalline materials, using static and magic-angle spinning (MAS) techniques at multiple spinning frequencies and field strengths. The chemical-shift tensor of solid HgCl2 is axially symmetric (η = 0) within experimental error. The 199Hg chemical-shift anisotropy (CSA) of HgCl2 in a frozen solution in dimethylsulfoxide (DMSO) is significantly smaller than that of the solid, implying that the local electronic structure in the solid is different from that of the material in solution. The experimental chemical-shift results (solution and solid state) are compared with those predicted by density functional theory (DFT) calculations using the zeroth-order regular approximation (ZORA) to account for relativistic effects. 199Hg spin-lattice relaxation of HgCl2 dissolved in DMSO is dominated by a CSA mechanism, but a second contribution to relaxation arises from ligand exchange. Relaxation in the solid state is independent of temperature, suggesting relaxation by paramagnetic impurities or defects.
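For reference, the tensor quantities used above can be written as a small sketch (standard Haeberlen-type definitions with made-up principal components, not the paper's data): from the three principal components one extracts the isotropic shift, the reduced anisotropy, and the asymmetry η, with η = 0 for an axially symmetric tensor.

```python
def csa_parameters(d11, d22, d33):
    """Isotropic shift, reduced anisotropy, and asymmetry eta (Haeberlen
    convention): order components so |d_zz-iso| >= |d_xx-iso| >= |d_yy-iso|."""
    iso = (d11 + d22 + d33) / 3.0
    zz, xx, yy = sorted((d11, d22, d33), key=lambda d: abs(d - iso), reverse=True)
    delta = zz - iso           # reduced anisotropy
    eta = (yy - xx) / delta    # asymmetry, 0 <= eta <= 1
    return iso, delta, eta

# Illustrative principal components (ppm): two equal components give an
# axially symmetric tensor, hence eta = 0.
iso, delta, eta = csa_parameters(-1000.0, -1000.0, -2500.0)
print(iso, delta, eta)
```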
NASA Astrophysics Data System (ADS)
Eric, H.
1982-12-01
The liquidus curves of the Sn-Te and Sn-SnS systems were evaluated with the regular associated solution (RAS) model. The main assumption of this theory is the existence of species A, B, and associated complexes AB in the liquid phase. Thermodynamic properties of the binary A-B system are derived from ternary regular solution equations. Calculations based on this model for the Sn-Te and Sn-SnS systems agree with published data.
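The regular-solution formalism underlying the RAS model reduces, for a simple binary, to the familiar one-parameter activity expressions. The sketch below is the generic textbook form with illustrative values, not fitted Sn-Te or Sn-SnS parameters.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def regular_solution_activities(x1, W, T):
    """Activities a_i = gamma_i * x_i in a binary regular solution, with
    RT ln(gamma_1) = W x2^2 and RT ln(gamma_2) = W x1^2 (W in J/mol)."""
    x2 = 1.0 - x1
    g1 = math.exp(W * x2 ** 2 / (R * T))
    g2 = math.exp(W * x1 ** 2 / (R * T))
    return g1 * x1, g2 * x2

# W = 0 recovers an ideal (Raoultian) solution: a_i = x_i.
a1, a2 = regular_solution_activities(0.4, 0.0, 1600.0)

# A negative W (net attractive interactions) lowers both activities
# below their ideal values.
a1_neg, a2_neg = regular_solution_activities(0.4, -40e3, 1600.0)
print(a1, a2, a1_neg < a1)
```

In the associated-solution variant, the same expressions are applied to the ternary mixture of A, B, and the complex AB, with the association equilibrium fixing the species fractions.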
Manna, Kausik; Panda, Amiya Kumar
2009-12-01
The interaction of pinacyanol chloride (PIN) with pure and binary mixtures of cetyltrimethylammonium bromide (CTAB) and sodium deoxycholate (NaDC) was studied spectroscopically. Interaction of PIN with pure NaDC produced a blue-shifted metachromatic band (at approximately 502 nm), which gradually shifted to the higher-wavelength region as the concentration of NaDC increased in the pre-micellar stage. For CTAB, only the intensity of both bands increased, without any shift. Mixed surfactant systems behaved differently from the pure components. An increase in the absorbance of the monomeric band with a slight red shift, and a simultaneous decrease in the absorbance of the dimeric band of PIN, were observed for all the combinations in the post-micellar region. The PIN-micelle binding constant (K(b)) for pure as well as mixed systems was determined from spectral data using the Benesi-Hildebrand equation. Following Regular Solution Theory, micellar aggregates were assumed to predominate over other aggregated states, such as vesicles. The aggregation number was determined by the fluorescence quenching method. Spectral analyses were also used to evaluate CMC values. Rubingh's model of Regular Solution Theory was employed to evaluate the interaction parameters and micellar composition. Strong synergistic interaction between the oppositely charged surfactants was noted. The bulkier nature of NaDC limited its incorporation into the mixed micelles.
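Rubingh's regular-solution treatment solves, from the mixed CMC and the two pure-component CMCs, for the micellar mole fraction x1 and the interaction parameter β. A minimal sketch with hypothetical CMC values and a bisection root-finder:

```python
import math

def rubingh(alpha, cmc_mix, cmc1, cmc2):
    """Micellar composition x1 and interaction parameter beta (Rubingh).

    Solves  x1^2 ln(a*C*/(x1*C1)) = (1-x1)^2 ln((1-a)*C*/((1-x1)*C2))
    by bisection, then  beta = ln(a*C*/(x1*C1)) / (1-x1)^2.
    """
    def f(x1):
        return (x1**2 * math.log(alpha * cmc_mix / (x1 * cmc1))
                - (1 - x1)**2 * math.log((1 - alpha) * cmc_mix
                                         / ((1 - x1) * cmc2)))

    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    x1 = 0.5 * (lo + hi)
    beta = math.log(alpha * cmc_mix / (x1 * cmc1)) / (1 - x1) ** 2
    return x1, beta

# Consistency check with hypothetical CMCs: for an ideal mixture, Clint's
# equation 1/C* = alpha/C1 + (1-alpha)/C2 holds, and the recovered beta ~ 0.
c1, c2, alpha = 1.0e-3, 8.0e-3, 0.5
cmc_ideal = 1.0 / (alpha / c1 + (1 - alpha) / c2)
x1, beta = rubingh(alpha, cmc_ideal, c1, c2)
print(x1, beta)
```

A strongly negative β recovered from a measured mixed CMC would indicate the synergistic (attractive) interaction reported for the CTAB/NaDC pairs.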
Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.
2008-01-01
We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface models, best matching the dipping-layer structure of nearby outcrops. A reasonably well matched solution was obtained using an unusual set of optimal regularization parameters; in comparison, conventional regularization parameters did not provide as realistic results. Thus, we consider that even when only qualitative (i.e., visual) a-priori information about a site is available - as in the case of the East Canyon Dam, Utah - it may be possible to reduce the refraction nonuniqueness by estimating the most appropriate regularization parameters.
NASA Astrophysics Data System (ADS)
Burman, Erik; Hansbo, Peter; Larson, Mats G.
2018-03-01
Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
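The core numerical machinery can be sketched generically: conjugate gradient applied to regularized normal equations with a second-difference stabilizer. The operator G below is a random stand-in, not a reservoir simulator.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Plain conjugate gradient for a symmetric positive definite system."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Regularized normal equations (G^T G + lam * L^T L) m = G^T d, where L is
# a discrete second-difference penalty (Tikhonov-style smoothing term).
rng = np.random.default_rng(2)
n = 40
G = rng.standard_normal((60, n))                  # stand-in forward operator
d = G @ np.linspace(0, 1, n) + 0.01 * rng.standard_normal(60)
L = np.diff(np.eye(n), 2, axis=0)                 # (n-2) x n second differences
A = G.T @ G + 1e-2 * L.T @ L
m = conjugate_gradient(A, G.T @ d)
print(np.linalg.norm(A @ m - G.T @ d))
```

Regularization makes the normal-equations matrix well conditioned, which is exactly what lets CG converge reliably on the otherwise ill-posed problem.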
Wave drift damping acting on multiple circular cylinders (model tests)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinoshita, Takeshi; Sunahara, Shunji; Bao, W.
1995-12-31
The wave drift damping for the slow drift motion of a four-column platform is experimentally investigated. Estimating the damping of the slow drift motion of moored floating structures in ocean waves is one of the most important topics in the field. Bao et al. calculated the interaction of multiple circular cylinders based on potential flow theory and showed that the wave drift damping is significantly influenced by the interaction between cylinders. This calculation method assumes that the slow drift motion can be approximately replaced by a steady current, that is, structures in slow drift motion are taken to be equivalent to ones in combined regular waves and slow current. To validate the semi-analytical solutions of Bao et al., experiments were carried out. First, the added resistance due to waves acting on a structure composed of four vertical circular cylinders fixed to a slowly moving carriage was measured in regular waves. Next, the added resistance of the structure moored by a linear spring to the slowly moving carriage was measured in regular waves. Furthermore, to validate the assumption that the slow drift motion can be replaced by a steady current, free decay tests in still water and in regular waves were compared with simulations of the slow drift motion using the wave drift damping coefficient obtained from the added resistance tests.
NASA Astrophysics Data System (ADS)
Eliçabe, Guillermo E.
2013-09-01
In this work, an exact scattering model for a system of clusters of spherical particles, based on the Rayleigh-Gans approximation, is parameterized in such a way that it can be solved in inverse form using Tikhonov regularization to obtain the morphological parameters of the clusters: the average number of particles per cluster, the size of the primary spherical units that form the cluster, and the discrete distance distribution function, from which the z-average square radius of gyration of the system of clusters is obtained. The methodology is validated through a series of simulated and experimental examples of x-ray and light scattering, which show that it works satisfactorily in non-ideal situations: error in the measurements, error in the model, and the several non-idealities present in the experimental cases.
Enriched reproducing kernel particle method for fractional advection-diffusion equation
NASA Astrophysics Data System (ADS)
Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam
2018-06-01
The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients, and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate the method within a moving least-squares approach. By enriching the traditional integer-order basis of RKPM with fractional-order power functions, the leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation of the boundary layer. Numerical tests are performed to verify the proposed approach.
Revised Thomas-Fermi approximation for singular potentials
NASA Astrophysics Data System (ADS)
Dufty, James W.; Trickey, S. B.
2016-08-01
Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.
NASA Astrophysics Data System (ADS)
Xing, Yanyuan; Yan, Yubin
2018-03-01
Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, by directly approximating the integer-order derivative with finite difference quotients in the definition of the Caputo fractional derivative, see also Lv and Xu [20] (2016), where k is the time step size. Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by an energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time fractional partial differential equation has low regularity, in which case the numerical method fails to have the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain a similar approximation scheme for the Riemann-Liouville fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, as in Gao et al. [11] (2014), by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show, using Laplace transform methods, that the time discretization scheme has the convergence rate O(k^{3-α}), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
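For orientation, the classic lower-order L1 scheme (accuracy O(k^{2-α}); a standard construction, not the O(k^{3-α}) scheme of Gao et al.) can be sketched and its rate verified on a smooth solution, here u(t) = t^2 with exact Caputo derivative 2 t^{2-α}/Γ(3-α).

```python
import math

def caputo_l1(u_vals, k, alpha):
    """Classic L1 approximation of the Caputo derivative D^alpha u(t_n),
    0 < alpha < 1, from samples u_vals = [u(0), u(k), ..., u(nk)].
    Piecewise-linear interpolation of u gives accuracy O(k^{2-alpha})."""
    n = len(u_vals) - 1
    s = 0.0
    for j in range(n):
        b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
        s += b * (u_vals[n - j] - u_vals[n - j - 1])
    return s / (k ** alpha * math.gamma(2 - alpha))

alpha, T = 0.5, 1.0
exact = 2 * T ** (2 - alpha) / math.gamma(3 - alpha)  # D^alpha of t^2 at t = T

errors = []
for n in (16, 32, 64):
    k = T / n
    u = [(i * k) ** 2 for i in range(n + 1)]
    errors.append(abs(caputo_l1(u, k, alpha) - exact))

# Observed order should approach 2 - alpha = 1.5 under mesh refinement.
rate = math.log2(errors[0] / errors[1])
print(rate)
```

On solutions of low regularity, exactly as the abstract notes for the higher-order scheme, this pointwise-uniform rate degrades.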
Regularization of the double period method for experimental data processing
NASA Astrophysics Data System (ADS)
Belov, A. A.; Kalitkin, N. N.
2017-11-01
In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
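The stabilizer described above (squared second derivative) leads, in discrete form, to a simple linear smoother: minimize ||u - d||^2 + λ||D2 u||^2, whose normal equations are (I + λ D2^T D2) u = d. A minimal sketch on synthetic noisy data (illustrative signal and noise level, not the nuclear cross-section tables):

```python
import numpy as np

def smooth_second_derivative(d, lam):
    """Tikhonov smoothing with a squared-second-derivative stabilizer:
    solves (I + lam * D2^T D2) u = d, D2 the discrete second derivative."""
    n = len(d)
    D2 = np.diff(np.eye(n), 2, axis=0)
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d)

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * t)                       # "true" experimental curve
noisy = clean + 0.2 * rng.standard_normal(t.size)   # large measurement errors

smooth = smooth_second_derivative(noisy, lam=50.0)
print(np.linalg.norm(smooth - clean), np.linalg.norm(noisy - clean))
```

The penalty suppresses the spurious high-frequency oscillations while leaving the slowly varying shape of the curve nearly untouched, which is the behavior the double period method relies on.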
Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel
NASA Astrophysics Data System (ADS)
Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads
2015-03-01
Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields must be computed at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's disease neuroimaging initiative showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline free-form deformation (p<0.05 in amygdala and cortical gray matter).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glatt-Holtz, Nathan, E-mail: negh@vt.edu; Kukavica, Igor, E-mail: kukavica@usc.edu; Ziane, Mohammed, E-mail: ziane@usc.edu
2014-05-15
We establish the continuity of the Markovian semigroup associated with strong solutions of the stochastic 3D Primitive Equations, and prove the existence of an invariant measure. The proof is based on new moment bounds for strong solutions. The invariant measure is supported on strong solutions and is furthermore shown to have higher regularity properties.
A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2016-02-01
Large-eddy simulation (LES) solves only the large scales part of turbulent flows by using a scales separation based on a filtering operation. The solution of the filtered Navier-Stokes equations requires then to model the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress model. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performances, i.e., the model ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS), filtered DNS in the case of classic flows simulated with a pseudo-spectral solver and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement for velocity and scalar statistics predictions.
NASA Astrophysics Data System (ADS)
Kim, Bong-Sik
Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions and all domain aspect ratios, including the case of three-wave resonances which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from the global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f on an infinite time interval is established. Then the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, with estimates uniform in alpha.
Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations
NASA Astrophysics Data System (ADS)
Phan, Tuoc
2017-12-01
This paper studies the Sobolev regularity of weak solutions of a class of singular quasi-linear parabolic problems of the form u_t − div[A(x, t, u, ∇u)] = div[F] with homogeneous Dirichlet boundary conditions over bounded spatial domains. Our main focus is on the case where the vector coefficients A are discontinuous and singular in the (x, t)-variables and dependent on the solution u. Global and interior weighted W^{1,p}(Ω_T, ω)-regularity estimates are established for weak solutions of these equations, where ω is a weight function in some Muckenhoupt class of weights. The results obtained are new even for linear equations and for ω = 1, because of the singularity of the coefficients in the (x, t)-variables.
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...
2017-01-03
Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high-fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images – even those for which TV was designed – particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. In conclusion, we develop results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.
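As a toy illustration of HOTV's key property (penalizing second differences instead of first differences, so piecewise linear structure is preserved rather than staircased), here is a smoothed second-order TV denoiser for a 1D signal, minimized by plain gradient descent. This is only a sketch with illustrative parameter values, not the authors' tomographic reconstruction algorithm:

```python
import numpy as np

def hotv_denoise_1d(y, lam=0.05, eps=1e-2, step=0.01, iters=5000):
    """Smoothed higher-order TV (HOTV) denoising of a 1D signal:
    minimize 0.5*||x - y||^2 + lam * sum_i |(D2 x)_i|,
    where D2 is the second-difference operator and |t| is smoothed
    as sqrt(t^2 + eps^2) so plain gradient descent applies."""
    n = len(y)
    # Interior second-difference operator, shape (n-2, n)
    D2 = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))[1:-1]
    x = y.astype(float).copy()
    for _ in range(iters):
        d = D2 @ x
        grad = (x - y) + lam * (D2.T @ (d / np.sqrt(d * d + eps**2)))
        x -= step * grad
    return x
```

Replacing D2 with the first-difference operator recovers a smoothed version of ordinary TV, which favors piecewise constant output instead.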
Ferguson, Sherry A; Delclos, K Barry; Newbold, Retha R; Flynn, Katherine M
2009-01-01
Previous work in our laboratory indicated that lifelong dietary exposure to estrogen-like endocrine disrupters increased sodium solution intake in adult male and female rats. Here, we sought to discern the critical periods necessary for this alteration as well as establish the effects of lower dietary concentrations of genistein and nonylphenol. Male and female Sprague-Dawley rats (F0) consumed phytoestrogen-free chow containing 0, 5, 100, or 500 ppm genistein (approximately equal to 0.0, 0.4, 8.0, and 40.0 mg/kg/day) or 0, 25, 200, or 750 ppm nonylphenol (approximately equal to 0.0, 2.0, 16.0, and 60.0 mg/kg/day). Rats were mated within treatment groups and offspring (F1) maintained on the same diets. Mating for the F1, F2, and F3 (genistein only) was within treatment groups. At postnatal day (PND) 21, the F3 generation began to consume unadulterated phytoestrogen-free chow such that genistein exposure occurred only in utero and preweaning. The F4 generation was never directly exposed to genistein. On PNDs 65-68, intake of regular water and a 3.0% sodium chloride solution was measured for F1-F4 generations (genistein portion) or F1-F2 (nonylphenol portion). Although body weights were decreased by the highest dietary concentrations of genistein and nonylphenol, there were only minimal effects of exposure on sodium solution intake. As expected, intake was highest in female rats. With previous data, these results indicate that the dietary concentrations necessary to increase adult sodium solution intake in rats are greater than 500 ppm genistein and 750 ppm nonylphenol and such effects do not appear to increase across generations.
An analytical method for the inverse Cauchy problem of the Lamé equation in a rectangle
NASA Astrophysics Data System (ADS)
Grigor’ev, Yu
2018-04-01
In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lamé equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function in terms of the data. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
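The Lavrentiev step replaces the ill-posed first-kind equation K f = g by the well-posed second-kind equation (αI + K) f = g. A minimal discrete sketch, using a hypothetical Gaussian smoothing kernel rather than the paper's Lamé-problem kernels:

```python
import numpy as np

def lavrentiev_solve(K, g, alpha):
    """Lavrentiev regularization for the first-kind equation K f = g:
    solve the well-posed (alpha*I + K) f = g instead.
    K should be symmetric positive semi-definite."""
    return np.linalg.solve(alpha * np.eye(K.shape[0]) + K, g)
```

Each spectral component of f is recovered with factor λ/(λ + α), so smooth (large-λ) content passes through while the unstable small-λ directions are damped.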
WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS
MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN
2013-01-01
Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibility in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second order elliptic interface problems. High order of convergence is numerically confirmed in both L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to interface problems in which the solution possesses a certain singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second order numerical methods that work well for problems with low regularity in the solution. The best known numerical scheme in the literature is of order O(h) to O(h^1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h^1.75) to O(h^2) in the L∞ norm for C1 or Lipschitz continuous interfaces associated with a C1 or H2 continuous solution. PMID:24072935
NASA Astrophysics Data System (ADS)
Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.
2010-03-01
The problem of a compact binary system whose components move on circular orbits is addressed using two different approximation techniques in general relativity. The post-Newtonian (PN) approximation involves an expansion in powers of v/c≪1, and is most appropriate for small orbital velocities v. The perturbative self-force analysis requires an extreme mass ratio m1/m2≪1 for the components of the binary. A particular coordinate-invariant observable is determined as a function of the orbital frequency of the system using these two different approximations. The post-Newtonian calculation is pushed up to the third post-Newtonian (3PN) order. It involves the metric generated by two point particles and evaluated at the location of one of the particles. We regularize the divergent self-field of the particle by means of dimensional regularization. We show that the poles ∝(d-3)-1 appearing in dimensional regularization at the 3PN order cancel out from the final gauge invariant observable. The 3PN analytical result, through first order in the mass ratio, and the numerical self-force calculation are found to agree well. The consistency of this cross-cultural comparison confirms the soundness of both approximations in describing compact binary systems. In particular, it provides an independent test of the very different regularization procedures invoked in the two approximation schemes.
Spark formation as a moving boundary process
NASA Astrophysics Data System (ADS)
Ebert, Ute
2006-03-01
The growth process of spark channels has recently become accessible through complementary methods. First, I will review experiments with nanosecond photographic resolution and with fast and well defined power supplies that appropriately resolve the dynamics of electric breakdown [1]. Second, I will discuss the elementary physical processes as well as present computations of spark growth and branching with adaptive grid refinement [2]. These computations resolve three well separated scales of the process that emerge dynamically. Third, this scale separation motivates a hierarchy of models on different length scales. In particular, I will discuss a moving boundary approximation for the ionization fronts that generate the conducting channel. The resulting moving boundary problem shows strong similarities with classical viscous fingering. For viscous fingering, it is known that the simplest model forms unphysical cusps within finite time that are suppressed by a regularizing condition on the moving boundary. For ionization fronts, we derive a new condition on the moving boundary of mixed Dirichlet-Neumann type (φ = ε ∂_n φ) that indeed regularizes all structures investigated so far. In particular, we present compact analytical solutions with regularization, both for uniformly translating shapes and for their linear perturbations [3]. These solutions are so simple that they may acquire a paradigmatic role in the future. Within linear perturbation theory, they explicitly show the convective stabilization of a curved front while planar fronts are linearly unstable against perturbations of arbitrary wave length. [1] T.M.P. Briels, E.M. van Veldhuizen, U. Ebert, TU Eindhoven. [2] C. Montijn, J. Wackers, W. Hundsdorfer, U. Ebert, CWI Amsterdam. [3] B. Meulenbroek, U. Ebert, L. Schäfer, Phys. Rev. Lett. 95, 195004 (2005).
NASA Astrophysics Data System (ADS)
Rosestolato, M.; Święch, A.
2017-02-01
We study value functions which are viscosity solutions of certain Kolmogorov equations. Using PDE techniques we prove that they are C^{1+α} regular on special finite-dimensional subspaces. The problem has origins in hedging derivatives of risky assets in mathematical finance.
NASA Astrophysics Data System (ADS)
Lu, Dianchen; Seadawy, Aly R.; Ali, Asghar
2018-06-01
In the current work, we employ novel methods to find exact travelling wave solutions of the Modified Liouville equation and the Symmetric Regularized Long Wave equation, namely the extended simple equation and exp(-Ψ(ξ))-expansion methods. By assigning different values to the parameters, different types of solitary wave solutions are derived from the exact travelling wave solutions, which shows the efficiency and precision of our methods. Some solutions have been represented graphically. The obtained results have several applications in physical science.
NASA Astrophysics Data System (ADS)
Maslakov, M. L.
2018-04-01
This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
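For a convolution-type first-kind equation with periodic data, Tikhonov regularization has a closed form in the Fourier domain. The sketch below uses a single scalar regularization parameter for simplicity, whereas the paper's stabilizing functions carry two parameters:

```python
import numpy as np

def tikhonov_deconvolve(g, h, lam):
    """Tikhonov-regularized deconvolution of g = h * f (periodic
    convolution): minimize ||h*f - g||^2 + lam*||f||^2, whose
    minimizer in the Fourier domain is conj(H)G / (|H|^2 + lam)."""
    H = np.fft.fft(h)
    G = np.fft.fft(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(F))
```

Frequencies where |H| is large are inverted almost exactly, while frequencies the kernel has destroyed are suppressed instead of amplified, which is precisely the stabilization the abstract describes.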
Contribution of the GOCE gradiometer components to regional gravity solutions
NASA Astrophysics Data System (ADS)
Naeimi, Majid; Bouman, Johannes
2017-05-01
The contribution of the GOCE gravity gradients to regional gravity field solutions is investigated in this study. We employ radial basis functions to recover the gravity field on regional scales over the Amazon and the Himalayas as our test regions. In the first step, four individual solutions based on the more accurate gravity gradient components Txx, Tyy, Tzz and Txz are derived. The Tzz component gives a better solution than the other single-component solutions, despite Tzz being less accurate than Txx and Tyy. Furthermore, we determine five more solutions based on several selected combinations of the gravity gradient components, including a combined solution using all four gradient components. The Tzz and Tyy components are shown to be the main contributors in all combined solutions, whereas Txz adds the least value to the regional gravity solutions. We also investigate the contribution of the regularization term. We show that the contribution of the regularization decreases significantly as more gravity gradients are included. For the solution using all gravity gradients, the regularization term contributes about 5 per cent of the total solution. Finally, we demonstrate that in our test areas, regional gravity modelling based on GOCE data provides a more reliable gravity signal in the medium wavelengths compared to pre-GOCE global gravity field models such as EGM2008.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minesaki, Yukitaka
2015-01-01
We propose the discrete-time restricted four-body problem (d-R4BP), which approximates the orbits of the restricted four-body problem (R4BP). The d-R4BP is given as a special case of the discrete-time chain regularization of the general N-body problem published in Minesaki. Moreover, we analytically prove that the d-R4BP yields the correct orbits corresponding to the elliptic relative equilibrium solutions of the R4BP when the three primaries form an equilateral triangle at any time. Such orbits include the orbit of a relative equilibrium solution already discovered by Baltagiannis and Papadakis. Until the proof in this work, there has been no discrete analog that preserves the orbits of elliptic relative equilibrium solutions in the R4BP. For a long time interval, the d-R4BP can precisely compute some stable periodic orbits in the Sun–Jupiter–Trojan asteroid–spacecraft system that cannot necessarily be reproduced by other generic integrators.
Calculation of Gallium-metal-Arsenic phase diagrams
NASA Technical Reports Server (NTRS)
Scofield, J. D.; Davison, J. E.; Ray, A. E.; Smith, S. R.
1991-01-01
Electrical contacts and metallization on GaAs solar cells must survive at high temperatures for several minutes under specific mission scenarios. Which metallizations or alloy systems are able to withstand extreme thermal excursions with minimum degradation of solar cell performance can be predicted from properly calculated temperature-constitution phase diagrams. A method for calculating a ternary diagram and its three constituent binary phase diagrams is briefly outlined, and ternary phase diagrams for three Ga-As-X alloy systems are presented. Free energy functions of the liquid and solid phases are approximated by regular solution theory. Phase diagrams calculated using this method are presented for the Ga-As-Ge and Ga-As-Ag systems.
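Regular solution theory models the molar Gibbs free energy of mixing of a binary phase as an ideal entropy term plus a single interaction parameter W: G_mix = RT(x1 ln x1 + x2 ln x2) + W x1 x2. A minimal sketch of this free energy function:

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def regular_solution_gmix(x1, T, W):
    """Molar Gibbs free energy of mixing (J/mol) for a binary regular
    solution: G_mix = R*T*(x1*ln(x1) + x2*ln(x2)) + W*x1*x2,
    for mole fraction 0 < x1 < 1, temperature T (K), and
    interaction parameter W (J/mol)."""
    x2 = 1.0 - x1
    ideal = R_GAS * T * (x1 * np.log(x1) + x2 * np.log(x2))
    return ideal + W * x1 * x2
```

A negative W deepens the mixing free energy relative to the ideal solution; equating such G functions for the liquid and solid phases is what generates the calculated liquidus and solidus curves.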
Evasion of No-Hair Theorems and Novel Black-Hole Solutions in Gauss-Bonnet Theories
NASA Astrophysics Data System (ADS)
Antoniou, G.; Bakopoulos, A.; Kanti, P.
2018-03-01
We consider a general Einstein-scalar-Gauss-Bonnet theory with a coupling function f (ϕ ) . We demonstrate that black-hole solutions appear as a generic feature of this theory since a regular horizon and an asymptotically flat solution may be easily constructed under mild assumptions for f (ϕ ). We show that the existing no-hair theorems are easily evaded, and a large number of regular black-hole solutions with scalar hair are then presented for a plethora of coupling functions f (ϕ ).
Moncada, Marvin; Astete, Carlos; Sabliov, Cristina; Olson, Douglas; Boeneke, Charles; Aryana, Kayanush J
2015-09-01
Reducing the particle size of salt to approximately 1.5 µm would increase its surface area, leading to an increased dissolution rate in saliva and a more efficient transfer of ions to taste buds, and hence, perhaps, a saltier perception of foods. This has potential for reducing the salt level in surface-salted foods. Our objective was to develop a salt using a nano spray-drying method, to use the developed nano spray-dried salt in surface-salted cheese cracker manufacture, and to evaluate the microbiological and sensory characteristics of the cheese crackers. Sodium chloride solution (3% wt/wt) was sprayed through a nano spray dryer. Particle sizes were determined by dynamic light scattering, and particle shapes were observed by scanning electron microscopy. Approximately 80% of the salt particles produced by the nano spray dryer, when drying a 3% (wt/wt) salt solution, were between 500 and 1,900 nm. Cheese cracker treatments consisted of 3 different salt sizes (regular salt with an average particle size of 1,500 µm; a commercially available Microsized 95 Extra Fine Salt (Cargill Salt, Minneapolis, MN) with an average particle size of 15 µm; and nano spray-dried salt with an average particle size of 1.5 µm, manufactured in our laboratory) and 3 different salt concentrations (1, 1.5, and 2% wt/wt). A balanced incomplete block design was used to conduct consumer analysis of cheese crackers with nano spray-dried salt (1, 1.5, and 2%), Microsized salt (1, 1.5, and 2%), and regular 2% salt (control, as used by industry) using 476 participants at 1 wk and 4 mo. At 4 mo, nano spray-dried salt treatments (1, 1.5, and 2%) had significantly higher preferred-saltiness scores than the control (regular 2%). Also, at 4 mo, nano spray-dried salt (1.5 and 2%) had significantly more just-about-right saltiness scores than the control (regular 2%).
Consumers' purchase intent increased by 25% for the nano spray-dried salt at 1.5% after they were notified about the 25% reduction in sodium content of the cheese cracker. We detected significantly lower yeast counts for nano spray-dried salt treatments (1, 1.5, and 2%) at 4 mo compared with control (regular) salt (1, 1.5, and 2%). We detected no mold growth in any of the treatments at any time. At 4 mo, we found no significant differences in sensory color, aroma, crunchiness, overall liking, or acceptability scores of cheese crackers using 1.5 and 1% nano spray-dried salt compared with the control. Therefore, 25 to 50% less salt would be suitable for cheese crackers if the particle size of regular salt were reduced by 3 log to form nano spray-dried salt. A 3-log reduction in sodium chloride particle size from regular salt to nano spray-dried salt increased saltiness, but a 1-log reduction in salt size from Microsized salt to nano spray-dried salt did not increase saltiness of surface-salted cheese crackers. The use of salt with particle size reduced by nano spray drying is recommended for surface-salted cheese crackers to reduce sodium intake. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Recent advancements in GRACE mascon regularization and uncertainty assessment
NASA Astrophysics Data System (ADS)
Loomis, B. D.; Luthcke, S. B.
2017-12-01
The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist among the centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Chen, Wu; Shen, Yunzhong; Zhang, Xingfu; Hsu, Houze
2016-04-01
The existing unconstrained Gravity Recovery and Climate Experiment (GRACE) monthly solutions, i.e., CSR RL05 from the Center for Space Research (CSR), GFZ RL05a from the GeoForschungsZentrum (GFZ), JPL RL05 from the Jet Propulsion Laboratory (JPL), DMT-1 from the Delft Institute of Earth Observation and Space Systems (DEOS), AIUB from Bern University, and Tongji-GRACE01 as well as Tongji-GRACE02 from Tongji University, are dominated by correlated noise (such as north-south stripe errors) in the high degree coefficients. To suppress the correlated noise of the unconstrained GRACE solutions, one typical option is to use post-processing filters such as decorrelation filtering and Gaussian smoothing, which are quite effective at reducing the noise and convenient to implement. Unlike these post-processing methods, the CNES/GRGS monthly GRACE solutions from the Centre National d'Etudes Spatiales (CNES) were developed by using regularization with the Kaula rule, whose correlated noise is reduced to such a great extent that no decorrelation filtering is required. Previous studies demonstrated that the north-south stripes in the GRACE solutions are due to the poor sensitivity to gravity variation in the east-west direction. In other words, the longitudinal sampling of the GRACE mission is very sparse while the latitudinal sampling is quite dense, indicating that the recoverability of the longitudinal gravity variation is poor or unstable, leading to ill-conditioned monthly GRACE solutions. To stabilize the monthly solutions, we constructed regularization matrices by minimizing the difference between the longitudinal and latitudinal gravity variations and applied them to derive a time series of regularized GRACE monthly solutions named RegTongji RL01 for the period Jan. 2003 to Aug. 2011.
The signal powers and noise level of RegTongji RL01 were analyzed in this paper, showing that: (1) no smoothing or decorrelation filtering is required for RegTongji RL01; (2) the signal powers of RegTongji RL01 are obviously stronger than those of the filtered solutions while the noise levels of the regularized and filtered solutions are consistent, suggesting that RegTongji RL01 has a higher signal-to-noise ratio.
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging, which is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney as the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under the different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
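The Lq-Lp idea pairs an Lq discrepancy with an Lp penalty. A sketch of such a smoothed cost and its analytic gradient follows; the eps smoothing and the dense test operator are our illustrative assumptions, not the paper's forward model or its lm-BFGS driver:

```python
import numpy as np

def lq_lp_cost_grad(x, A, b, q, p, lam, eps=1e-8):
    """Cost and gradient of the smoothed Lq-Lp objective
    (1/q)*||A x - b||_q^q + (lam/p)*||x||_p^p,
    with |t| replaced by sqrt(t^2 + eps) to keep it differentiable
    for exponents such as q = 1.5, p = 1."""
    r = A @ x - b
    ar = np.sqrt(r * r + eps)   # smoothed |residual|
    ax = np.sqrt(x * x + eps)   # smoothed |x|
    cost = np.sum(ar**q) / q + lam * np.sum(ax**p) / p
    grad = A.T @ (ar**(q - 2.0) * r) + lam * (ax**(p - 2.0) * x)
    return cost, grad
```

Setting q = p = 2 recovers standard Tikhonov least squares, while q = 1.5, p = 1 mimics the scheme the paper found best; any quasi-Newton routine can consume this cost/gradient pair.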
Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth
NASA Astrophysics Data System (ADS)
Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana
2017-10-01
In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. In the solution of this problem, an important role is played by classes of functions that were introduced and described in detail by I. M. Sobol for studying multidimensional quadrature formulas; these classes contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of a generalized Haar wavelet series with approximate coefficients is proved for functions from this class. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth allows receiving information of medium and high spatial resolution from spacecraft and conducting hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. To process the images, the apparatus of discrete orthogonal transforms, namely wavelet transforms, is used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular, generalized Haar wavelet transforms. General methods of research: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images are used. Scientific novelty: the task of processing archival satellite images, in particular signal filtering, is investigated from the point of view of an ill-posed problem, and the regularization parameters for discrete orthogonal transformations are determined.
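One analysis/synthesis level of the standard discrete Haar transform, the building block of the generalized Haar series studied above (shown here in its classical, not generalized, form), can be sketched as:

```python
import numpy as np

def haar_decompose(signal):
    """One analysis level of the orthonormal discrete Haar transform:
    pairwise averages (approximation) and differences (detail),
    each scaled by 1/sqrt(2). Signal length must be even."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return a, d

def haar_reconstruct(a, d):
    """Invert one level of the Haar transform exactly."""
    s = np.empty(2 * len(a))
    s[0::2] = (a + d) / np.sqrt(2.0)
    s[1::2] = (a - d) / np.sqrt(2.0)
    return s
```

Because the transform is orthonormal it preserves energy exactly, which is what makes coefficient truncation and regularized summation of the series numerically well behaved.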
Process for High-Rate Fabrication of Alumina Nanotemplates
NASA Technical Reports Server (NTRS)
Myung, Nosang; Fleurial, Jean-Pierre; Yun, Minhee; West, William; Choi, Daniel
2007-01-01
An anodizing process, at an early stage of development at the time of reporting the information for this article, has shown promise as a means of fabricating alumina nanotemplates integrated with silicon wafers. Alumina nanotemplates are basically layers of alumina, typically several microns thick, in which are formed approximately regular hexagonal arrays of holes having typical diameters of the order of 10 to 100 nm. Interest in alumina nanotemplates has grown in recent years because they have been found to be useful as templates in the fabrication of nanoscale magnetic, electronic, optoelectronic, and other devices. The present anodizing process is attractive for the fabrication of alumina nanotemplates integrated with silicon wafers in two respects: (1) the process involves self-ordering of the holes; that is, the holes as formed by the process are spontaneously arranged in approximately regular hexagonal arrays; and (2) the rates of growth (that is, elongation) of the holes are high enough to make the process compatible with other processes used in the mass production of integrated circuits. In preparation for fabrication of alumina nanotemplates in this process, one first uses electron-beam evaporation to deposit thin films of titanium, followed by thin films of aluminum, on silicon wafers. Then the alumina nanotemplates are formed by anodizing the aluminum layers, as described below. In experiments in which the process was partially developed, the titanium films were 200 Å thick and the aluminum films were 5 µm thick. The aluminum films were oxidized to alumina, and the arrays of holes were formed by anodizing the aluminum in aqueous solutions of sulfuric and/or oxalic acid at room temperature (see figure).
The diameters, spacings, and rates of growth of the holes were found to depend, variously, on the composition of the anodizing solution, the applied current, or the applied potential, as follows: In galvanostatically controlled anodizing, regardless of the chemical composition of the solution, relatively high current densities (50 to 100 mA/cm²) resulted in arrays of holes that were more nearly regular than were those formed at lower current densities. The rates of elongation of the holes were found to depend linearly on the applied current density: the observed factor of proportionality was 1.2 (µm/h)/(mA/cm²). For a given fixed current density at room temperature, the hole diameters were found to depend mainly on the chemical compositions of the anodizing solutions. The holes produced in sulfuric acid solutions were smaller than those produced in oxalic acid solutions, and the arrays of holes produced in sulfuric acid were more ordered than those produced in oxalic acid. The breakdown voltage was found to decrease logarithmically with increasing concentration of sulfuric acid. The breakdown voltage was also found to decrease with temperature, accompanied by a decrease in hole diameter. The hole diameter was found to vary linearly with applied potential, with a slope of 2.1 nm/V. This slope differs from the slopes (2.2 and 2.77 nm/V) reported for similar prior measurements on nanotemplates made from bulk aluminum. The differences among these slopes may be attributable to differences among impurities and defects in bulk and electron-beam-evaporated aluminum specimens.
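The linear fits reported above translate into trivial conversion helpers; the coefficients are taken directly from the text, the function names are ours, and the fits apply only over the measured ranges:

```python
def hole_elongation_rate_um_per_h(current_density_mA_per_cm2):
    """Hole elongation rate from the reported linear fit:
    1.2 (um/h) per (mA/cm^2) of applied current density."""
    return 1.2 * current_density_mA_per_cm2

def hole_diameter_nm(applied_potential_V):
    """Hole diameter from the reported slope of 2.1 nm/V
    (potentiostatically controlled anodizing)."""
    return 2.1 * applied_potential_V
```

For example, at the low end of the quoted high-current-density range (50 mA/cm²) the fit predicts an elongation rate of 60 µm/h.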
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to exhibit a remarkable decrease in error when a system is advanced with the same time step and microscopic reaction rates as on the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
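The LU-reuse trick behind substepping can be sketched independently of CRAM itself. The fragment below is an illustration only, not the authors' method: it advances a hypothetical three-nuclide decay chain with a simple (1,1) rational (Cayley) approximation of the matrix exponential, factoring the step matrix once and reusing the factorization across all identical substeps.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, expm

# Hypothetical 3-nuclide decay chain A -> B -> C (illustrative rates only).
lam1, lam2 = 1.0, 0.3
A = np.array([[-lam1,   0.0, 0.0],
              [ lam1, -lam2, 0.0],
              [  0.0,  lam2, 0.0]])

def advance(n0, t, substeps):
    """Advance dn/dt = A n with the (1,1) rational (Cayley) approximation,
    reusing one LU factorization for every identical substep -- the same
    cost saving the abstract describes for CRAM."""
    h = t / substeps
    I = np.eye(A.shape[0])
    lu = lu_factor(I - 0.5 * h * A)              # factor once, on the first substep
    n = n0.copy()
    for _ in range(substeps):
        n = lu_solve(lu, (I + 0.5 * h * A) @ n)  # only cheap solves afterwards
    return n

n0 = np.array([1.0, 0.0, 0.0])
exact = expm(2.0 * A) @ n0
err_1 = np.linalg.norm(advance(n0, 2.0, 1) - exact)
err_8 = np.linalg.norm(advance(n0, 2.0, 8) - exact)
```

Dividing the step into eight identical substeps reduces the end-of-step error while repeating only the triangular solves; the factorization cost is paid once.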
Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation
NASA Technical Reports Server (NTRS)
Liandrat, J.; Tchamitchian, PH.
1990-01-01
The Burgers equation with a small viscosity term, initial and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms, which are recalled before the numerical algorithm is described. The method makes extensive use of the localization properties of the wavelets in the physical and Fourier spaces. Moreover, the authors take advantage of the fact that the involved linear operators have constant coefficients. Finally, the algorithm can be considered as a time-marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it allows one to reduce in a significant way the number of degrees of freedom required for a good computation of the solution. Numerical results and a description of the different elements of the algorithm are provided, together with mathematical comments on the method and comparisons with more classical numerical algorithms.
NASA Astrophysics Data System (ADS)
Jiang, Peng; Peng, Lihui; Xiao, Deyun
2007-06-01
This paper presents a regularization method that uses different window functions as regularizers for electrical capacitance tomography (ECT) image reconstruction. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. The window functions, such as the Hanning window, the cosine window and so on, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours clearer, than the results from Tikhonov regularization. Numerical results demonstrate the feasibility of the image reconstruction algorithm using different window functions as regularization.
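The spectral-filtering idea can be illustrated on a generic ill-posed system. The sketch below is not the paper's ECT-specific construction: it inverts a Hilbert matrix (a stand-in for an ill-conditioned sensitivity matrix) with an assumed Hann-type taper over the leading singular components, and compares against naive inversion.

```python
import numpy as np
from scipy.linalg import hilbert, svd

rng = np.random.default_rng(0)
A = hilbert(12)                       # stand-in for an ill-conditioned sensitivity matrix
x_true = np.ones(12)
b = A @ x_true + 1e-6 * rng.standard_normal(12)   # noisy "measurements"

U, s, Vt = svd(A)

# Naive inversion: the tiny singular values amplify the noise enormously.
x_naive = Vt.T @ ((U.T @ b) / s)

# Window-filtered inversion: taper the first k spectral components with a
# Hann-type window and discard the rest (one of many possible windows).
k = 6
f = np.zeros(12)
f[:k] = 0.5 * (1.0 + np.cos(np.pi * np.arange(k) / k))
x_filt = Vt.T @ (f * (U.T @ b) / s)

err_naive = np.linalg.norm(x_naive - x_true)
err_filt = np.linalg.norm(x_filt - x_true)
```

The window plays the role of the filter factors that Tikhonov regularization would supply implicitly; different windows trade bias against noise suppression differently.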
Simple picture for neutrino flavor transformation in supernovae
NASA Astrophysics Data System (ADS)
Duan, Huaiyu; Fuller, George M.; Qian, Yong-Zhong
2007-10-01
We can understand many recently discovered features of flavor evolution in dense, self-coupled supernova neutrino and antineutrino systems with a simple, physical scheme consisting of two quasistatic solutions. One solution closely resembles the conventional, adiabatic single-neutrino Mikheyev-Smirnov-Wolfenstein (MSW) mechanism, in that neutrinos and antineutrinos remain in mass eigenstates as they evolve in flavor space. The other solution is analogous to the regular precession of a gyroscopic pendulum in flavor space, and has been discussed extensively in recent works. Results of recent numerical studies are best explained with combinations of these solutions in the following general scenario: (1) Near the neutrino sphere, the MSW-like many-body solution obtains. (2) Depending on neutrino vacuum mixing parameters, luminosities, energy spectra, and the matter density profile, collective flavor transformation in the nutation mode develops and drives neutrinos away from the MSW-like evolution and toward regular precession. (3) Neutrino and antineutrino flavors roughly evolve according to the regular precession solution until neutrino densities are low. In the late stage of the precession solution, a stepwise swapping develops in the energy spectra of νe and νμ/ντ. We also discuss some subtle points regarding adiabaticity in flavor transformation in dense-neutrino systems.
Terminal attractors for addressable memory in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1988-01-01
A new type of attractor - the terminal attractor - is introduced for addressable memory in neural networks operating in continuous time. These attractors represent singular solutions of the dynamical system. They intersect (or envelope) the families of regular solutions, while each regular solution approaches the terminal attractor in a finite time period. It is shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the weight matrix.
Thermodynamic Modeling of the YO(1.5)-ZrO2 System
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Liu, Zi-Kui; Kaufman, Larry; Zhang, Fan
2003-01-01
The YO1.5-ZrO2 system consists of five solid solutions, one liquid solution, and one intermediate compound. A thermodynamic description of this system is developed, which allows calculation of the phase diagram and thermodynamic properties. Two different solution models are used: a neutral-species model with YO1.5 and ZrO2 as the components, and a charged-species model with Y(+3), Zr(+4), O(-2), and vacancies as components. For each model, regular and sub-regular solution parameters are derived from selected equilibrium phase and thermodynamic data.
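For a binary system, the regular-solution formalism these parameters feed into reduces to RT ln γ_i = W x_j². The minimal sketch below uses a hypothetical interaction parameter and temperature, chosen only to be of the same order as values quoted in abstracts above; it is not a fit to the YO1.5-ZrO2 data.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def activities(x1, W, T):
    """Activities in a binary regular solution: RT ln(gamma_i) = W * x_j**2,
    so a_i = x_i * exp(W * x_j**2 / (R T)). W is the interaction parameter."""
    x2 = 1.0 - x1
    a1 = x1 * math.exp(W * x2**2 / (R * T))
    a2 = x2 * math.exp(W * x1**2 / (R * T))
    return a1, a2

# Hypothetical values for illustration: a strongly attractive W = -40 kJ/mol
# (comparable in magnitude to parameters quoted earlier) at T = 1600 K.
a1, a2 = activities(0.5, -40e3, 1600.0)
ideal = activities(0.5, 0.0, 1600.0)   # W = 0 recovers Raoult's law: a_i = x_i
```

A negative W lowers the activities below their ideal (mole-fraction) values, which is what depresses the liquidus in the attractive systems described above.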
Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data
NASA Astrophysics Data System (ADS)
Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam
2018-04-01
Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models, e.g., with k-SVD sparse learning or traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
Boundary Regularity for the Porous Medium Equation
NASA Astrophysics Data System (ADS)
Björn, Anders; Björn, Jana; Gianazza, Ugo; Siljander, Juhana
2018-05-01
We study the boundary regularity of solutions to the porous medium equation u_t = Δu^m in the degenerate range m > 1. In particular, we show that in cylinders the Dirichlet problem with positive continuous boundary data on the parabolic boundary has a solution which attains the boundary values, provided that the spatial domain satisfies the elliptic Wiener criterion. This condition is known to be optimal, and it is a consequence of our main theorem, which establishes a barrier characterization of regular boundary points for general (not necessarily cylindrical) domains in R^{n+1}. One of our fundamental tools is a new strict comparison principle between sub- and superparabolic functions, which makes it essential for us to study both nonstrict and strict Perron solutions in order to develop a fruitful boundary regularity theory. Several other comparison principles and pasting lemmas are also obtained. In the process we obtain a rather complete picture of the relation between sub/superparabolic functions and weak sub/supersolutions.
Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.
Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui
2018-02-01
Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain strong sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strong sparsity-promoting solutions, we employ the ℓ1∕2 norm as a regularizer. Recent work on ℓ1∕2-norm regularization theory in compressive sensing shows that its solutions can be sparser than those obtained with the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems, so that closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary avoids trivial solutions while simultaneously capturing its intrinsic properties. Experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn a more accurate dictionary, in terms of dictionary recovery and image processing, than state-of-the-art algorithms.
High-resolution CSR GRACE RL05 mascons
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2016-10-01
The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
Well-posedness of characteristic symmetric hyperbolic systems
NASA Astrophysics Data System (ADS)
Secchi, Paolo
1996-06-01
We consider the initial-boundary-value problem for quasi-linear symmetric hyperbolic systems with characteristic boundary of constant multiplicity. We show well-posedness in Hadamard's sense (i.e., existence, uniqueness and continuous dependence of solutions on the data) of regular solutions in suitable function spaces which take into account the loss of regularity in the normal direction to the characteristic boundary.
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that is generally totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least-squares QR and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
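Semi-convergence is easy to reproduce on a toy ill-posed system. The sketch below runs a plain CGLS iteration (conjugate gradients on the normal equations) on a small Hilbert-matrix problem, not the acoustic holography system, and records the error history so the semi-convergence minimum is visible.

```python
import numpy as np
from scipy.linalg import hilbert

rng = np.random.default_rng(1)
A = hilbert(12)                        # small ill-posed stand-in problem
x_true = np.ones(12)
b = A @ x_true + 1e-6 * rng.standard_normal(12)

def cgls_errors(A, b, x_true, iters):
    """Plain CGLS; records the error against x_true after every iteration."""
    x = np.zeros_like(x_true)
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    errors = []
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        errors.append(np.linalg.norm(x - x_true))
    return errors

errors = cgls_errors(A, b, x_true, 60)
best = int(np.argmin(errors))          # the semi-convergence minimum
```

The error drops for the first few iterations as the well-determined spectral components are captured, then rises as noise-dominated components enter, which is why the iteration count itself acts as the regularization parameter.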
Analytic regularization of uniform cubic B-spline deformation fields.
Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C
2012-01-01
Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61x and 1371x faster than a numerical central-differencing solution.
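The quadratic-form structure of the bending energy can be illustrated in one dimension. This is a sketch only: the paper's contribution is an exact analytic 3D assembly, whereas the fragment below assembles the Gram matrix of basis second derivatives by grid quadrature, then shows that the energy of any coefficient vector is a single quadratic matrix operation.

```python
import numpy as np
from scipy.interpolate import BSpline

# Uniform cubic B-spline on knots -3..11, valid domain [0, 8] (a 1D analog
# of the paper's 3D deformation field; sizes here are illustrative only).
k = 3
t = np.arange(-3.0, 12.0)
n = len(t) - k - 1                      # number of basis functions / coefficients
xs = np.linspace(0.0, 8.0, 2001)
w = np.full(xs.size, xs[1] - xs[0])     # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

# Second derivative of every basis function, sampled on the grid.
B2 = np.array([BSpline(t, np.eye(n)[i], k).derivative(2)(xs) for i in range(n)])

# Assemble the quadratic-form matrix once: G_ij = integral of B_i'' * B_j''.
G = (B2 * w) @ B2.T

c = np.sin(np.arange(n))                # an arbitrary coefficient vector
energy_quadratic = c @ G @ c            # bending energy as a matrix operation
f2 = c @ B2                             # f'' on the grid
energy_direct = np.sum(w * f2 * f2)     # same integral, evaluated pointwise
```

Once G is assembled it can be reused for every candidate deformation during optimization, which is the source of the speedup the abstract reports over re-evaluating finite differences each time.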
Primordial cosmology in mimetic born-infeld gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouhmadi-Lopez, Mariam; Chen, Che -Yu; Chen, Pisin
2017-11-29
Here, the Eddington-inspired-Born-Infeld (EiBI) model is reformulated within the mimetic approach. In the presence of a mimetic field, the model contains non-trivial vacuum solutions which could be free of spacetime singularities because of the Born-Infeld nature of the theory. We study a realistic primordial vacuum universe and prove the existence of regular solutions, such as primordial inflationary solutions of de Sitter type or bouncing solutions. Moreover, the linear instabilities present in the EiBI model are found to be avoidable for some interesting bouncing solutions in which the physical metric as well as the auxiliary metric are regular at the background level.
Cuesta, D; Varela, M; Miró, P; Galdós, P; Abásolo, D; Hornero, R; Aboy, M
2007-07-01
Body temperature is a classical diagnostic tool for a number of diseases. However, it is usually employed as a plain binary classification function (febrile or not febrile), and therefore its diagnostic power has not been fully developed. In this paper, we describe how body temperature regularity can be used for diagnosis. Our proposed methodology is based on obtaining accurate long-term temperature recordings at high sampling frequencies and analyzing the temperature signal using a regularity metric (approximate entropy). In this study, we assessed our methodology using temperature recordings acquired from patients with multiple organ failure admitted to an intensive care unit. Our results indicate there is a correlation between the patient's condition and the regularity of the body temperature. This finding enabled us to design a classifier for two outcomes (survival or death) and test it on a dataset including 36 subjects. The classifier achieved an accuracy of 72%.
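The approximate entropy statistic used here admits a compact implementation. The sketch below follows the standard formulation (self-matches included); the signals are synthetic stand-ins, not ICU temperature recordings.

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1D signal. Lower values indicate
    a more regular (more predictable) signal."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()               # common default tolerance
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])         # embedded vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)       # fraction of vectors within tolerance
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
ts = np.linspace(0, 8 * np.pi, 400)
regular = np.sin(ts)                    # highly regular trace
irregular = rng.standard_normal(400)    # irregular trace
```

On these stand-ins the regular trace scores markedly lower than the irregular one, which is the property the study exploits to correlate temperature regularity with patient condition.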
NASA Astrophysics Data System (ADS)
Fakis, Demetrios; Kalvouridis, Tilemahos
2017-09-01
The regular polygon problem of (N+1) bodies deals with the dynamics of a small body, natural or artificial, in the force field of N big bodies, ν = N-1 of which have equal masses and form an imaginary regular ν-gon, while the Nth body, with a different mass, is located at the center of mass of the system. In this work, instead of considering Newtonian potentials and forces, we assume that the big bodies create quasi-homogeneous potentials, in the sense that we insert into the inverse-square Newtonian law of gravitation an inverse-cube corrective term, aiming to approximate various phenomena due to the shape of the primaries or to the radiation they emit. Based on this new consideration, we apply a general methodology in order to investigate, by means of the zero-velocity surfaces, the regions where 3D motions of the small body are allowed, their evolutions and parametric variations, their topological bifurcations, as well as the existing trapping domains of the particle. We note that this process is a fundamental step in the study of many dynamical systems characterized by a Jacobian-type integral of motion, on the long road to finding solutions of any kind.
Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems
NASA Astrophysics Data System (ADS)
Cianchi, Andrea; Maz'ya, Vladimir G.
2018-05-01
Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L2-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.
Dynamics of temporally localized states in passively mode-locked semiconductor lasers
NASA Astrophysics Data System (ADS)
Schelte, C.; Javaloyes, J.; Gurevich, S. V.
2018-05-01
We study the emergence and the stability of temporally localized structures in the output of a semiconductor laser passively mode locked by a saturable absorber in the long-cavity regime. For large yet realistic values of the linewidth enhancement factor, we disclose the existence of secondary dynamical instabilities where the pulses develop regular and subsequent irregular temporal oscillations. By a detailed bifurcation analysis we show that additional solution branches that consist of multipulse (molecules) solutions exist. We demonstrate that the various solution curves for the single and multipeak pulses can splice and intersect each other via transcritical bifurcations, leading to a complex web of solutions. Our analysis is based on a generic model of mode locking that consists of a time-delayed dynamical system, but also on a much more numerically efficient, yet approximate, partial differential equation. We compare the results of the bifurcation analysis of both models in order to assess up to which point the two approaches are equivalent. We conclude our analysis by the study of the influence of group velocity dispersion, which is only possible in the framework of the partial differential equation model, and we show that it may have a profound impact on the dynamics of the localized states.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamba, Irene M. (ICES, The University of Texas at Austin, 201 E. 24th St., Stop C0200, Austin, TX 78712); Haack, Jeffrey R.
2014-08-01
We present the formulation of a conservative spectral method for the Boltzmann collision operator with anisotropic scattering cross-sections. The method is an extension of the conservative spectral method of Gamba and Tharkabhushanam [17,18], which uses the weak form of the collision operator to represent the collisional term as a weighted convolution in Fourier space. The method is tested by computing the collision operator with a suitably cut-off angular cross section and comparing the results with the solution of the Landau equation. We analytically study the convergence rate of the Fourier transformed Boltzmann collision operator in the grazing collisions limit to the Fourier transformed Landau collision operator under the assumption of some regularity and decay conditions on the solution to the Boltzmann equation. Our results show that the angular singularity which corresponds to the Rutherford scattering cross section is the critical singularity for which a grazing collision limit exists for the Boltzmann operator. Additionally, we numerically study the differences between homogeneous solutions of the Boltzmann equation with the Rutherford scattering cross section and an artificial cross section, which give convergence to solutions of the Landau equation at different asymptotic rates. We numerically show the rate of the approximation as well as the consequences for the rate of entropy decay for homogeneous solutions of the Boltzmann equation and Landau equation.
NASA Astrophysics Data System (ADS)
Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong
2011-12-01
We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to the workplace, going out for leisure activities, and returning home. Under the assumption that the individual has a constant travel speed and a lower limit on the time spent at home and at work, we prove that the daily moving area of an individual is an ellipse, and we obtain an exact solution for the radius of gyration. The analytical solution captures the empirical observations well.
Explicit error bounds for the α-quasi-periodic Helmholtz problem.
Lord, Natacha H; Mulholland, Anthony J
2013-10-01
This paper considers a finite element approach to modeling electromagnetic waves in a periodic diffraction grating. In particular, an a priori error estimate associated with the α-quasi-periodic transformation is derived. This involves writing the solution of the associated Helmholtz problem as a product of e^(iαx) and an unknown function called the α-quasi-periodic solution. To begin with, the well-posedness of the continuous problem is examined using a variational formulation. The problem is then discretized, and a rigorous a priori error estimate, which guarantees the uniqueness of this approximate solution, is derived. In previous studies, the continuity of the Dirichlet-to-Neumann map has simply been assumed, and the dependency of the regularity constant on system parameters, such as the wavenumber, has not been shown. To address this deficiency, an explicit dependence on the wavenumber and the degree of the polynomial basis is obtained in the a priori error estimate. Since the finite element method is well known for its ability to handle arbitrary geometries, numerical results obtained using the α-quasi-periodic transformation are then compared with those from a lattice sum technique.
On epicardial potential reconstruction using regularization schemes with the L1-norm data term.
Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart
2011-01-07
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and they also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing an L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data-term-based regularization schemes (with L1 and L2 penalty terms on the normal-derivative constraint, labelled L1TV and L1L2) were compared with L2-norm data-term schemes (Tikhonov with zero-order and normal-derivative constraints, labelled ZOT and FOT, and the total variation method, labelled L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative errors. However, when larger noise occurs in some electrodes (for example, signal loss during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data-term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
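The robustness of an L1 data term to corrupted channels can be sketched with a basic iteratively reweighted least-squares loop. This is illustrative only, not the authors' EP/BSP formulation: the forward matrix, sizes, and outliers below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 4))       # toy forward model (not a real BSP/EP matrix)
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true
b[:4] += 25.0                          # a few grossly corrupted "electrodes"

# Ordinary L2 (least-squares) fit: dragged off target by the outliers.
x_l2, *_ = np.linalg.lstsq(A, b, rcond=None)

# Iteratively reweighted least squares for the L1 data term:
# minimize ||A x - b||_1 via weights w_i = 1 / max(|r_i|, eps).
x = x_l2.copy()
for _ in range(50):
    r = A @ x - b
    w = 1.0 / np.maximum(np.abs(r), 1e-8)
    Aw = A * w[:, None]
    x = np.linalg.solve(A.T @ Aw, Aw.T @ b)   # weighted normal equations

err_l2 = np.linalg.norm(x_l2 - x_true)
err_l1 = np.linalg.norm(x - x_true)
```

The reweighting progressively discounts the large residuals on the corrupted rows, which mirrors the abstract's finding that L1-data-term schemes stay accurate when individual electrodes fail.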
NASA Astrophysics Data System (ADS)
Khusnutdinova, K. R.; Stepanyants, Y. A.; Tranter, M. R.
2018-02-01
We study solitary wave solutions of the fifth-order Korteweg-de Vries equation which contains, besides the traditional quadratic nonlinearity and third-order dispersion, additional terms including cubic nonlinearity and fifth order linear dispersion, as well as two nonlinear dispersive terms. An exact solitary wave solution to this equation is derived, and the dependence of its amplitude, width, and speed on the parameters of the governing equation is studied. It is shown that the derived solution can represent either an embedded or regular soliton depending on the equation parameters. The nonlinear dispersive terms can drastically influence the existence of solitary waves, their nature (regular or embedded), profile, polarity, and stability with respect to small perturbations. We show, in particular, that in some cases embedded solitons can be stable even with respect to interactions with regular solitons. The results obtained are applicable to surface and internal waves in fluids, as well as to waves in other media (plasma, solid waveguides, elastic media with microstructure, etc.).
Bardeen regular black hole with an electric source
NASA Astrophysics Data System (ADS)
Rodrigues, Manuel E.; Silva, Marcos V. de S.
2018-06-01
If some energy conditions on the stress-energy tensor are violated, it is possible to construct regular black holes in General Relativity and in alternative theories of gravity. This type of solution has horizons but presents no singularities. The first regular black hole was presented by Bardeen and can be obtained from the Einstein equations in the presence of an electromagnetic field. E. Ayon-Beato and A. Garcia reinterpreted the Bardeen metric as a magnetic solution of General Relativity coupled to a nonlinear electrodynamics. In this work, we show that the Bardeen model may also be interpreted as a solution of the Einstein equations in the presence of an electric source, whose electric field does not behave as a Coulomb field. We analyze the asymptotic forms of the Lagrangian for the electric case and also analyze the energy conditions.
Alves, Rui; Vilaprinyo, Ester; Hernández-Bermejo, Benito; Sorribas, Albert
2008-01-01
There is a renewed interest in obtaining a systemic understanding of metabolism, gene expression and signal transduction processes, driven by the recent research focus on Systems Biology. From a biotechnological point of view, such a systemic understanding of how a biological system is designed to work can facilitate the rational manipulation of specific pathways in different cell types to achieve specific goals. Due to the intrinsic complexity of biological systems, mathematical models are a central tool for understanding and predicting the integrative behavior of those systems. In particular, models are essential for the rational development of biotechnological applications and for understanding a system's design from an evolutionary point of view. Mathematical models can be obtained using many different strategies. In each case, their utility will depend upon the properties of the mathematical representation and on the possibility of obtaining meaningful parameters from available data. In practice, there are several issues at stake when one has to decide which mathematical model is more appropriate for the study of a given problem. First, one needs a model that can represent the aspects of the system one wishes to study. Second, one must choose a mathematical representation that allows an accurate analysis of the system with respect to different aspects of interest (for example, robustness of the system, dynamical behavior, optimization of the system with respect to some production goal, parameter value determination, etc.). Third, before choosing between alternative and equally appropriate mathematical representations for the system, one should compare representations with respect to ease of automation for model set-up, simulation, and analysis of results. Fourth, one should also consider how to facilitate model transference and re-usability by other researchers and for distinct purposes.
Finally, one factor that is important for all four aspects is the regularity in the mathematical structure of the equations because it facilitates computational manipulation. This regularity is a mark of kinetic representations based on approximation theory. The use of approximation theory to derive mathematical representations with regular structure for modeling purposes has a long tradition in science. In most applied fields, such as engineering and physics, those approximations are often required to obtain practical solutions to complex problems. In this paper we review some of the more popular mathematical representations that have been derived using approximation theory and are used for modeling in molecular systems biology. We will focus on formalisms that are theoretically supported by the Taylor Theorem. These include the Power-law formalism, the recently proposed (log)linear and Lin-log formalisms as well as some closely related alternatives. We will analyze the similarities and differences between these formalisms, discuss the advantages and limitations of each representation, and provide a tentative "road map" for their potential utilization for different problems.
NASA Astrophysics Data System (ADS)
Voloshinov, V. V.
2018-03-01
In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions that satisfy the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, for the stability analysis of solutions with respect to errors in the initial data, and so on, one needs a justified characterization of such solutions that is independent of the numerical method used to obtain them. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem to the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
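A toy sketch of the δ-optimality idea on a hypothetical two-variable program (not from the paper): the multiplier is recovered from a small least-squares subproblem, and the KKT conditions are then required to hold only up to a tolerance δ.

```python
import numpy as np

# toy smooth program: min f(x) = 0.5*||x||^2  s.t.  g(x) = 1 - x1 - x2 <= 0
# exact KKT point: x* = (0.5, 0.5) with multiplier mu* = 0.5
x = np.array([0.52, 0.49])                 # an approximate solution
grad_f = x.copy()                          # gradient of f at x
grad_g = np.array([-1.0, -1.0])            # gradient of g

# multiplier from the approximating least-squares (quadratic) subproblem:
# minimize ||grad_f + mu*grad_g|| over mu >= 0
mu = max(0.0, -(grad_f @ grad_g) / (grad_g @ grad_g))

g_val = 1.0 - x.sum()
delta = 0.05
stationarity = np.linalg.norm(grad_f + mu*grad_g)

# delta-KKT: approximate stationarity, feasibility and complementarity
ok = (stationarity <= delta) and (g_val <= delta) and (abs(mu*g_val) <= delta)
```

The exact Karush-Kuhn-Tucker conditions are recovered as δ tends to zero.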
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
A Novel Hypercomplex Solution to Kepler's Problem
NASA Astrophysics Data System (ADS)
Condurache, C.; Martinuşi, V.
2007-05-01
Using a Sundman-like regularization, we offer a unified solution to Kepler's problem in terms of hypercomplex numbers. The fundamental role in this paper is played by the Laplace-Runge-Lenz prime integral and by the algebra of hypercomplex numbers. The procedure unifies and generalizes the regularizations offered by Levi-Civita and Kustaanheimo-Stiefel. Closed-form hypercomplex expressions for the law of motion and the velocity are deduced, together with new hypercomplex prime integrals.
Solution of linear systems by a singular perturbation technique
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1976-01-01
An approximate solution is obtained for a singularly perturbed system of initial-valued, time-invariant, linear differential equations with multiple boundary layers. Conditions are stated under which the approximate solution converges uniformly to the exact solution as the perturbation parameter tends to zero. The solution is obtained by the method of matched asymptotic expansions. Use of the results for obtaining approximate solutions of general linear systems is discussed. An example is considered to illustrate the method, and it is shown that the formulas derived give a readily computed uniform approximation.
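The flavor of matched asymptotic expansions can be seen on a scalar model problem with one boundary layer, eps*y' + y = 1 + t with y(0) = 0 (an illustrative example, not the system treated in the report). The leading-order composite expansion, outer solution plus boundary-layer correction, is uniformly O(eps)-accurate:

```python
import math

eps = 0.01

def y_exact(t):
    # exact solution of eps*y' + y = 1 + t, y(0) = 0
    return 1.0 + t - eps - (1.0 - eps)*math.exp(-t/eps)

def y_composite(t):
    # leading-order matched expansion: outer solution (1 + t)
    # plus inner (boundary-layer) correction -exp(-t/eps)
    return 1.0 + t - math.exp(-t/eps)

ts = [i/1000.0 for i in range(1001)]
err = max(abs(y_exact(t) - y_composite(t)) for t in ts)
# uniform O(eps) accuracy on the whole interval, layer included
```

Halving eps halves the uniform error, consistent with first-order accuracy of the composite expansion.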
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
Macke, A; Mishchenko, M I
1996-07-20
We ascertain the usefulness of simple ice particle geometries for modeling the intensity distribution of light scattered by atmospheric ice particles. To this end, similarities and differences in light scattering by axis-equivalent, regular and distorted hexagonal cylindrical, ellipsoidal, and circular cylindrical ice particles are reported. All the results pertain to particles with sizes much larger than a wavelength and are based on a geometrical optics approximation. At a nonabsorbing wavelength of 0.55 µm, ellipsoids (circular cylinders) have a much (slightly) larger asymmetry parameter g than regular hexagonal cylinders. However, our computations show that only random distortion of the crystal shape leads to closer agreement with g values as small as 0.7 derived from some remote-sensing data analyses. This may suggest that scattering by regular particle shapes is not necessarily representative of real atmospheric ice crystals at nonabsorbing wavelengths. On the other hand, if real ice particles happen to be hexagonal, they may be approximated by circular cylinders at absorbing wavelengths.
NASA Astrophysics Data System (ADS)
Antokhin, I. I.
2017-06-01
We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.
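A minimal sketch of the Tikhonov branch of such a method, on a discretized Fredholm equation of the first kind with a Gaussian kernel (all parameters illustrative): the unregularized solve amplifies the data noise, while the penalized normal equations recover a stable approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]

# discretized Fredholm operator of the first kind: Gaussian smoothing kernel
K = h*np.exp(-((t[:, None] - t[None, :])**2)/(2*0.03**2))
x_true = np.exp(-((t - 0.4)**2)/0.01) + 0.5*np.exp(-((t - 0.75)**2)/0.005)
b = K @ x_true + 1e-4*rng.standard_normal(n)     # data with small noise

# Tikhonov: minimize ||Kx - b||^2 + lam*||x||^2 via the normal equations
lam = 1e-5
x_reg = np.linalg.solve(K.T @ K + lam*np.eye(n), K.T @ b)

# unregularized least squares: the ill-posedness amplifies the noise
x_naive = np.linalg.lstsq(K, b, rcond=None)[0]

err_reg = np.linalg.norm(x_reg - x_true)/np.linalg.norm(x_true)
err_naive = np.linalg.norm(x_naive - x_true)/np.linalg.norm(x_true)
```

As the noise in b tends to zero one may let lam tend to zero, and x_reg converges to x_true, mirroring the uniform-convergence property stated in the abstract.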
Perez, R. Navarro; Schunck, N.; Lasseri, R. -D.; ...
2017-07-05
Here, we describe the new version 3.00 of the code hfbtho that solves the nuclear Hartree–Fock (HF) or Hartree–Fock–Bogolyubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the full Gogny force in both particle–hole and particle–particle channels, (ii) the calculation of the nuclear collective inertia at the perturbative cranking approximation, (iii) the calculation of fission fragment charge, mass and deformations based on the determination of the neck, (iv) the regularization of zero-range pairing forces, (v) the calculation of localization functions, and (vi) an MPI interface for large-scale mass table calculations.
Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.
2016-03-15
In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov Regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full widths at half maximum that are off by an average of 27%, and matched full widths at half maximum produce depths that are off by an average of 109%.
Bayesian Inference for Generalized Linear Models for Spiking Neurons
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
NASA Astrophysics Data System (ADS)
Faussurier, G.; Blancard, C.; Combis, P.; Decoster, A.; Videau, L.
2017-10-01
We present a model to calculate the electrical and thermal electronic conductivities in plasmas using the Chester-Thellung-Kubo-Greenwood approach coupled with the Kramers approximation. The divergence in photon energy at low values is eliminated using a regularization scheme with an effective energy-dependent electron-ion collision frequency. In doing so, we interpolate smoothly between the Drude-like and the Spitzer-like regularizations. The model still satisfies the well-known sum rule over the electrical conductivity. This kind of approximation also extends naturally to the average-atom model. Particular attention is paid to the Lorenz number. Its nondegenerate and degenerate limits are given, and the transition towards the Drude-like limit is proved in the Kramers approximation.
Investigation of multiple scattering effects in aerosols
NASA Technical Reports Server (NTRS)
Deepak, A.
1980-01-01
The results are presented of investigations on the various aspects of multiple scattering effects on visible and infrared laser beams traversing dense fog oil aerosols contained in a chamber (4' x 4' x 9'). The report briefly describes: (1) the experimental details and measurements; (2) analytical representation of the aerosol size distribution data by two analytical models (the regularized power law distribution and the inverse modified gamma distribution); (3) retrieval of aerosol size distributions from multispectral optical depth measurements by two methods (the two and three parameter fast table search methods and the nonlinear least squares method); (4) modeling of the effects of aerosol microphysical (coagulation and evaporation) and dynamical processes (gravitational settling) on the temporal behavior of aerosol size distribution, and hence on the extinction of four laser beams with wavelengths 0.44, 0.6328, 1.15, and 3.39 micrometers; and (5) the exact and approximate formulations for four methods for computing the effects of multiple scattering on the transmittance of laser beams in dense aerosols, all of which are based on the solution of the radiative transfer equation under the small angle approximation.
Investigation of multiple scattering effects in aerosols
NASA Astrophysics Data System (ADS)
Deepak, A.
1980-05-01
The results are presented of investigations on the various aspects of multiple scattering effects on visible and infrared laser beams traversing dense fog oil aerosols contained in a chamber (4' x 4' x 9'). The report briefly describes: (1) the experimental details and measurements; (2) analytical representation of the aerosol size distribution data by two analytical models (the regularized power law distribution and the inverse modified gamma distribution); (3) retrieval of aerosol size distributions from multispectral optical depth measurements by two methods (the two and three parameter fast table search methods and the nonlinear least squares method); (4) modeling of the effects of aerosol microphysical (coagulation and evaporation) and dynamical processes (gravitational settling) on the temporal behavior of aerosol size distribution, and hence on the extinction of four laser beams with wavelengths 0.44, 0.6328, 1.15, and 3.39 micrometers; and (5) the exact and approximate formulations for four methods for computing the effects of multiple scattering on the transmittance of laser beams in dense aerosols, all of which are based on the solution of the radiative transfer equation under the small angle approximation.
Optimal boundary regularity for a singular Monge-Ampère equation
NASA Astrophysics Data System (ADS)
Jian, Huaiyu; Li, You
2018-06-01
In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from a few geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce the (a, η) type to describe the convexity. As a result, we show that the more convex the domain is, the better the regularity of the solution. In particular, the regularity is best near angular points.
Dontsov, E V
2016-12-01
This paper develops a closed-form approximate solution for a penny-shaped hydraulic fracture whose behaviour is determined by an interplay of three competing physical processes that are associated with fluid viscosity, fracture toughness and fluid leak-off. The primary assumption that permits one to construct the solution is that the fracture behaviour is mainly determined by the three-process multiscale tip asymptotics and the global fluid volume balance. First, the developed approximation is compared with the existing solutions for all limiting regimes of propagation. Then, a solution map, which indicates applicability regions of the limiting solutions, is constructed. It is also shown that the constructed approximation accurately captures the scaling that is associated with the transition from any one limiting solution to another. The developed approximation is tested against a reference numerical solution, showing that accuracy of the fracture width and radius predictions lie within a fraction of a per cent for a wide range of parameters. As a result, the constructed approximation provides a rapid solution for a penny-shaped hydraulic fracture, which can be used for quick fracture design calculations or as a reference solution to evaluate accuracy of various hydraulic fracture simulators.
NASA Astrophysics Data System (ADS)
Dontsov, E. V.
2016-12-01
This paper develops a closed-form approximate solution for a penny-shaped hydraulic fracture whose behaviour is determined by an interplay of three competing physical processes that are associated with fluid viscosity, fracture toughness and fluid leak-off. The primary assumption that permits one to construct the solution is that the fracture behaviour is mainly determined by the three-process multiscale tip asymptotics and the global fluid volume balance. First, the developed approximation is compared with the existing solutions for all limiting regimes of propagation. Then, a solution map, which indicates applicability regions of the limiting solutions, is constructed. It is also shown that the constructed approximation accurately captures the scaling that is associated with the transition from any one limiting solution to another. The developed approximation is tested against a reference numerical solution, showing that accuracy of the fracture width and radius predictions lie within a fraction of a per cent for a wide range of parameters. As a result, the constructed approximation provides a rapid solution for a penny-shaped hydraulic fracture, which can be used for quick fracture design calculations or as a reference solution to evaluate accuracy of various hydraulic fracture simulators.
Chemical interactions and thermodynamic studies in aluminum alloy/molten salt systems
NASA Astrophysics Data System (ADS)
Narayanan, Ramesh
The recycling of aluminum and aluminum alloys, such as Used Beverage Containers (UBC), is done under a cover of molten salt flux based on NaCl-KCl with fluoride additions. The reactions of aluminum alloys with molten salt fluxes have been investigated. Thermodynamic calculations are performed in the alloy/salt flux systems which allow quantitative predictions of the equilibrium compositions. There is preferential reaction of Mg in Al-Mg alloy with molten salt fluxes, especially those containing fluorides like NaF. An exchange reaction between Al-Mg alloy and molten salt flux has been demonstrated: Mg from the Al-Mg alloy transfers into the salt flux while Na from the salt flux transfers into the metal. Thermodynamic calculations indicated that the amount of Na in the metal increases as the Mg content in the alloy and/or the NaF content in the reacting flux increases. This is an important point because small amounts of Na have a detrimental effect on the mechanical properties of the Al-Mg alloy. The reactions of Al alloys with molten salt fluxes result in the formation of bluish-purple colored "streamers". It was established that the streamer is liquid alkali metal (Na and K in the case of NaCl-KCl-NaF systems) dissipating into the melt. The melts in which such streamers were observed are identified. The metal losses occurring due to reactions have been quantified, both by thermodynamic calculations and experimentally. A computer program has been developed to calculate ternary phase diagrams in molten salt systems from the constituent binary phase diagrams, based on a regular solution model. The extent of deviation of the binary systems from regular solution behavior has been quantified. The systems investigated in which good agreement was found between the calculated and experimental phase diagrams included NaF-KF-LiF, NaCl-NaF-NaI and KNO3-TlNO3-LiNO3.
Furthermore, insight has been provided into the interrelationship between the regular solution parameters and the topology of the phase diagram. The isotherms are flat (i.e., no skewness) when the regular solution parameters are zero. When the regular solution parameters are non-zero, the isotherms are skewed. A regular solution model is not adequate to accurately model the molten salt systems used in recycling, such as NaCl-KCl-LiF and NaCl-KCl-NaF.
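For a single binary join, the regular-solution liquidus calculation described above amounts to solving dH_fus/R * (1/Tm - 1/T) = ln x + W*(1-x)^2/(R*T) for the liquidus temperature T. A sketch, using the calcite melting constants quoted elsewhere in this collection (Tm = 1596 K, dH_fus = 31.5 kJ/mol) and a hypothetical interaction parameter W:

```python
import math

R = 8.314          # J/(mol K)
Tm, dH = 1596.0, 31500.0   # melting point (K) and heat of fusion (J/mol), illustrative

def liquidus_T(x, W):
    """Solve dH/R*(1/Tm - 1/T) = ln(x) + W*(1-x)**2/(R*T) for T by bisection."""
    def F(T):
        return dH/R*(1.0/Tm - 1.0/T) - math.log(x) - W*(1 - x)**2/(R*T)
    a, b = 500.0, Tm - 1e-6   # F(a) < 0 < F(b) brackets the liquidus
    for _ in range(200):
        c = 0.5*(a + b)
        if F(a)*F(c) <= 0:
            b = c
        else:
            a = c
    return 0.5*(a + b)

# at 80 mol% of the crystallizing component:
T_ideal = liquidus_T(0.8, 0.0)        # ideal solution, W = 0
T_reg   = liquidus_T(0.8, -12000.0)   # negative W lowers the activity -> deeper depression
```

A negative interaction parameter stabilizes the melt and deepens the freezing-point depression relative to the ideal case, which is the sense in which the fitted W values shape the calculated phase diagrams.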
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than that of the potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortical surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computation compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). 
Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
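A simplified sketch in the spirit of RES (not the paper's exact update rule): stochastic gradients drive a BFGS curvature estimate whose spectrum is clamped to a band [delta, Delta], mirroring the eigenvalue bounds that the convergence analysis requires.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x_star = np.linalg.solve(A, b)         # minimizer of f(x) = 0.5 x'Ax - b'x

def stoch_grad(x):
    # noisy gradient oracle
    return A @ x - b + 0.01*rng.standard_normal(2)

delta, Delta = 0.1, 100.0              # eigenvalue floor/cap on the curvature estimate
x, B = np.zeros(2), np.eye(2)
g = stoch_grad(x)
for t in range(400):
    x_new = x - (2.0/(t + 10))*np.linalg.solve(B, g)   # diminishing quasi-Newton step
    g_new = stoch_grad(x_new)
    s, y = x_new - x, g_new - g
    if s @ y > 1e-8:                   # update only on good curvature pairs
        Bs = B @ s
        B = B - np.outer(Bs, Bs)/(s @ Bs) + np.outer(y, y)/(s @ y)
        w, V = np.linalg.eigh(B)       # regularization: clamp the spectrum
        B = (V*np.clip(w, delta, Delta)) @ V.T
    x, g = x_new, g_new
```

The eigenvalue floor keeps descent directions well conditioned even when a noisy curvature pair would otherwise corrupt the Hessian approximation.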
Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K
2010-06-01
In this brief we propose multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low-dimensional label space. The classification of patterns is carried out in this low-dimensional label subspace. A test pattern is classified depending on its proximity to class centroids. The effectiveness of the proposed method is experimentally verified and compared with multiclass support vector machines (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to that of multiclass SVM, with comparable testing accuracy, principally on large data sets. Experiments in this brief also serve as a comparison of the performance of VVRKFA with stratified random sampling and sub-sampling.
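A compact sketch of the VVRKFA idea under assumed details (RBF kernel, ridge solve, synthetic Gaussian blobs, all illustrative): a reduced kernel plus bias is regressed onto one-hot labels, and patterns are assigned to the nearest class centroid in the resulting label space.

```python
import numpy as np

rng = np.random.default_rng(2)
# three well-separated Gaussian blobs in 2-D
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ([0, 0], [2, 0], [0, 2])])
y = np.repeat([0, 1, 2], 50)
Y = np.eye(3)[y]                       # one-hot responses in label space

def kern(A, B, gamma=1.0):
    # RBF kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma*d2)

sub = X[::5]                           # reduced kernel: small landmark subset
Kb = np.hstack([kern(X, sub), np.ones((len(X), 1))])   # kernel features + bias column
lam = 1e-2                             # ridge regularization
Theta = np.linalg.solve(Kb.T @ Kb + lam*np.eye(Kb.shape[1]), Kb.T @ Y)

Z = Kb @ Theta                         # projections into the label space
cent = np.array([Z[y == c].mean(0) for c in range(3)])   # class centroids

def predict(Xt):
    Zt = np.hstack([kern(Xt, sub), np.ones((len(Xt), 1))]) @ Theta
    d = ((Zt[:, None, :] - cent[None, :, :])**2).sum(-1)
    return d.argmin(1)                 # nearest centroid in label space

acc = (predict(X) == y).mean()
```

The expensive step is a single ridge solve in the landmark dimension, which is what makes the approach cheap relative to training one SVM per class pair.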
NASA Astrophysics Data System (ADS)
Kaltenbacher, Barbara; Klassen, Andrej
2018-05-01
In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
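The discrepancy-principle idea mentioned above can be sketched on a Tikhonov-regularized linear problem (illustrative kernel and noise level): the regularization parameter is tuned until the residual matches the known noise norm δ.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
K = h*np.exp(-np.abs(t[:, None] - t[None, :])/0.1)   # smoothing kernel operator
x_true = np.sin(2*np.pi*t)
noise = 1e-3*rng.standard_normal(n)
b = K @ x_true + noise
delta = np.linalg.norm(noise)          # assumed-known noise level

def tik(lam):
    # Tikhonov solution for parameter lam
    return np.linalg.solve(K.T @ K + lam*np.eye(n), K.T @ b)

def residual(lam):
    return np.linalg.norm(K @ tik(lam) - b)

# residual(lam) is increasing in lam: bisect (geometrically) for residual = delta
lo, hi = 1e-14, 1.0
for _ in range(200):
    mid = np.sqrt(lo*hi)
    if residual(mid) < delta:
        lo = mid
    else:
        hi = mid
lam = np.sqrt(lo*hi)
x_lam = tik(lam)
```

Choosing the parameter from the residual rather than from the solution norm is what distinguishes this discrepancy-type rule from the Ivanov (quasi-solution) constraint.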
An overview of unconstrained free boundary problems
Figalli, Alessio; Shahgholian, Henrik
2015-01-01
In this paper, we present a survey concerning unconstrained free boundary problems of the type where B1 is the unit ball, Ω is an unknown open set, F1 and F2 are elliptic operators (admitting regular solutions), and the underlying function space is to be specified in each case. Our main objective is to discuss a unifying approach to the optimal regularity of solutions to the above matching problems, and to list several open problems in this direction. PMID:26261367
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
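For the pantograph equation y'(t) = a*y(t) + b*y(q*t) with y(0) = 1, the residual of a polynomial ansatz is linear in the unknown coefficients, so the least-squares step reduces to an ordinary linear solve. A sketch under assumed collocation details (degree, grid and parameter values are illustrative), checked against the classical power-series solution:

```python
import numpy as np

a, b, q = -1.0, 0.5, 0.5               # pantograph: y'(t) = a y(t) + b y(q t), y(0) = 1
m = 10                                  # polynomial degree of the ansatz
ts = np.linspace(0.0, 1.0, 60)          # collocation points

# ansatz y(t) = 1 + sum_k c_k t^k enforces y(0) = 1 and makes the
# residual R(t) = y'(t) - a y(t) - b y(q t) linear in the c_k
M = np.zeros((len(ts), m))
for k in range(1, m + 1):
    M[:, k - 1] = k*ts**(k - 1) - a*ts**k - b*(q**k)*ts**k
rhs = (a + b)*np.ones(len(ts))          # residual contribution of the constant term

c, *_ = np.linalg.lstsq(M, rhs, rcond=None)
y_plsm = 1 + sum(c[k - 1]*ts**k for k in range(1, m + 1))

# reference: classical power series, alpha_{n+1} = alpha_n (a + b q^n)/(n+1)
alpha = [1.0]
for n in range(20):
    alpha.append(alpha[-1]*(a + b*q**n)/(n + 1))
y_ref = sum(al*ts**n for n, al in enumerate(alpha))

max_diff = np.abs(y_plsm - y_ref).max()
```

Because the exact solution is entire with factorially decaying coefficients, a degree-10 least-squares polynomial already matches the series to high accuracy on [0, 1].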
A Unified Approach for Solving Nonlinear Regular Perturbation Problems
ERIC Educational Resources Information Center
Khuri, S. A.
2008-01-01
This article describes a simple alternative unified method of solving nonlinear regular perturbation problems. The procedure is based upon the manipulation of Taylor's approximation for the expansion of the nonlinear term in the perturbed equation. An essential feature of this technique is the relative simplicity used and the associated unified…
Regularities in Spearman's Law of Diminishing Returns.
ERIC Educational Resources Information Center
Jensen, Arthur R.
2003-01-01
Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…
NASA Astrophysics Data System (ADS)
Springborg, Michael; Molayem, Mohammad; Kirtman, Bernard
2017-09-01
A theoretical treatment for the orbital response of an infinite, periodic system to a static, homogeneous, magnetic field is presented. It is assumed that the system of interest has an energy gap separating occupied and unoccupied orbitals and a zero Chern number. In contrast to earlier studies, we do not utilize a perturbation expansion, although we do assume the field is sufficiently weak that the occurrence of Landau levels can be ignored. The theory is developed by analyzing results for large, finite systems and also by comparing with the analogous treatment of an electrostatic field. The resulting many-electron Hamilton operator is forced to be hermitian, but hermiticity is not preserved, in general, for the subsequently derived single-particle operators that determine the electronic orbitals. However, we demonstrate that when focusing on the canonical solutions to the single-particle equations, hermiticity is preserved. The issue of gauge-origin dependence of approximate solutions is addressed. Our approach is compared with several previously proposed treatments, whereby limitations in some of the latter are identified.
Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.
Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue
2017-06-06
Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed, as long as the regular hexagons determined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.
Regularity for Fully Nonlinear Elliptic Equations with Oblique Boundary Conditions
NASA Astrophysics Data System (ADS)
Li, Dongsheng; Zhang, Kai
2018-06-01
In this paper, we obtain a series of regularity results for viscosity solutions of fully nonlinear elliptic equations with oblique derivative boundary conditions. In particular, we derive the pointwise C α, C 1,α and C 2,α regularity. As byproducts, we also prove the A-B-P maximum principle, Harnack inequality, uniqueness and solvability of the equations.
Born approximation in linear-time invariant system
NASA Astrophysics Data System (ADS)
Gumjudpai, Burin
2017-09-01
An alternative way of finding the LTI system's solution with the Born approximation is investigated. We use the Born approximation in the LTI system and in the transformed LTI system in the form of a Helmholtz equation. General solutions are considered as infinite series or Feynman graphs. The slow-roll approximation is explored. By transforming the LTI system into a Helmholtz equation, an approximate general solution can be found for any given form of the force with its initial value.
Regularizing portfolio optimization
NASA Astrophysics Data System (ADS)
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
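The L2-regularized portfolio idea described above can be sketched in a few lines. This is only an illustrative stand-in: it uses a minimum-variance objective with a ridge term under the budget constraint, not the paper's expected-shortfall/support-vector-regression formulation, and the covariance values are made up.

```python
# Sketch: L2-regularized minimum-variance portfolio (illustrative stand-in
# for the paper's expected-shortfall formulation). We solve
#   min_w  w' C w + lam * ||w||^2   subject to  sum(w) = 1,
# whose solution is proportional to (C + lam*I)^{-1} 1, then normalized.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def regularized_portfolio(cov, lam):
    n = len(cov)
    ridge = [[cov[i][j] + (lam if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    raw = solve(ridge, [1.0] * n)      # (C + lam*I)^{-1} 1
    s = sum(raw)
    return [w / s for w in raw]        # enforce the budget constraint

cov = [[0.10, 0.02, 0.01],
       [0.02, 0.08, 0.03],
       [0.01, 0.03, 0.12]]
w0 = regularized_portfolio(cov, 0.0)       # unregularized minimum variance
w_big = regularized_portfolio(cov, 100.0)  # strong diversification pressure
```

As the regularization strength grows, the ridge term dominates and the weights approach the equal-weight portfolio 1/n, which is the "diversification pressure" described in the abstract.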
A note on the regularity of solutions of infinite dimensional Riccati equations
NASA Technical Reports Server (NTRS)
Burns, John A.; King, Belinda B.
1994-01-01
This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
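A minimal one-dimensional sketch of bounded-variation-type denoising follows. It replaces the paper's primal-dual algorithm on the exact (nondifferentiable) BV seminorm with plain gradient descent on a smoothed total-variation penalty; the signal, noise, and parameter values are invented for illustration.

```python
import math

# Sketch: 1D denoising with a smoothed total-variation penalty,
#   min_u  0.5*||u - f||^2 + alpha * sum_i sqrt((u[i+1]-u[i])^2 + eps),
# minimized by plain gradient descent. The paper uses a primal-dual method
# on the exact BV seminorm; this smoothed version only illustrates the
# regularization idea on a blocky signal.

def tv_denoise(f, alpha=0.1, eps=1e-2, steps=800, lr=0.2):
    u = f[:]
    n = len(u)
    for _ in range(steps):
        g = [u[i] - f[i] for i in range(n)]      # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = d / math.sqrt(d * d + eps)       # derivative of smoothed |d|
            g[i] -= alpha * w
            g[i + 1] += alpha * w
        for i in range(n):
            u[i] -= lr * g[i]
    return u

def total_variation(u):
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

# Blocky ("piecewise constant") signal plus deterministic high-frequency noise
truth = [0.0] * 10 + [1.0] * 10
noisy = [t + 0.2 * math.sin(7.0 * i) for i, t in enumerate(truth)]
clean = tv_denoise(noisy)
```

The TV penalty suppresses the oscillations while largely preserving the jump, which is why BV regularization suits blocky images.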
NASA Astrophysics Data System (ADS)
Ohkitani, Koji
2012-09-01
We study the generalized 2D surface quasi-geostrophic (SQG) equation, where the active scalar is given by a fractional power α of the Laplacian applied to the stream function. This includes the 2D SQG and Euler equations as special cases. Using Poincaré's successive approximation to higher α-derivatives of the active scalar, we derive a variational equation for describing perturbations in the generalized SQG equation. In particular, in the limit α → 0, an asymptotic equation is derived on a stretched time variable τ = αt, which unifies equations in the family near α = 0. The successive approximation is also discussed at the other extreme of the 2D Euler limit α = 2-0. Numerical experiments are presented for both limits. We consider whether the solution behaves in a more singular fashion, with more effective nonlinearity, when α is increased. Two competing effects are identified: the regularizing effect of a fractional inverse Laplacian (control by conservation) and cancellation by symmetry (nonlinearity depletion). Near α = 0 (complete depletion), the solution behaves in a more singular fashion as α increases. Near α = 2 (maximal control by conservation), the solution behaves in a more singular fashion as α decreases, suggesting that there may be some α in [0, 2] at which the solution behaves in the most singular manner. We also present some numerical results of the family for α = 0.5, 1, and 1.5. On the original time t, the H1 norm of θ generally grows more rapidly with increasing α. However, on the new time τ, this order is reversed. On the other hand, contour patterns for different α appear to be similar at fixed τ, even though the norms are markedly different in magnitude. Finally, point-vortex systems for the generalized SQG family are discussed to shed light on the above problems of time scale.
Zhang, Qing; Beard, Daniel A; Schlick, Tamar
2003-12-01
Salt-mediated electrostatic interactions play an essential role in biomolecular structures and dynamics. Because macromolecular systems modeled at atomic resolution contain thousands of solute atoms, the electrostatic computations constitute an expensive part of the force and energy calculations. Implicit solvent models are one way to simplify the model and the associated calculations, but they are generally used in combination with standard atomic models for the solute. To approximate electrostatic interactions in models on the polymer level (e.g., supercoiled DNA) that are simulated over long times (e.g., milliseconds) using Brownian dynamics, Beard and Schlick have developed the DiSCO (Discrete Surface Charge Optimization) algorithm. DiSCO represents a macromolecular complex by a few hundred discrete charges on a surface enclosing the system, modeled by the Debye-Hückel (screened Coulombic) approximation to the Poisson-Boltzmann equation, and treats the salt solution as a continuum. DiSCO can represent the nucleosome core particle (>12,000 atoms), for example, by 353 discrete surface charges distributed on the surfaces of a large disk for the nucleosome core particle and a slender cylinder for the histone tail; the charges are optimized with respect to the Poisson-Boltzmann solution for the electric field, yielding an approximately 5.5% residual. Because regular surfaces enclosing macromolecules are not sufficiently general and may be suboptimal for certain systems, we develop a general method to construct irregular models tailored to the geometry of macromolecules. We also compare charge optimization based on both the electric field and electrostatic potential refinement. Results indicate that irregular surfaces can lead to a more accurate approximation (lower residuals), and that refinement in terms of the electric field is more robust.
We also show that surface smoothing for irregular models is important, that the charge optimization (by the TNPACK minimizer) is efficient and does not depend on the initially assigned values, and that the residual is acceptable when the distance to the model surface is close to, or larger than, the Debye length. We illustrate applications of DiSCO's model-building procedure to chromatin folding and supercoiled DNA bound to Hin and Fis proteins. DiSCO is generally applicable to other interesting macromolecular systems for which mesoscale models are appropriate, to yield a resolution between the all-atom representation and the polymer level. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 2063-2074, 2003
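The screened Coulomb form that DiSCO's surface charges are fitted against can be evaluated directly. The sketch below sums Debye-Hückel potentials over a handful of discrete charges; the charge values, positions, and units are arbitrary illustrations, not DiSCO output.

```python
import math

# Sketch: Debye-Hueckel (screened Coulomb) potential from discrete charges,
#   phi(r) = sum_i q_i * exp(-kappa * |r - r_i|) / |r - r_i|,
# the far-field approximation DiSCO fits its surface charges against.
# Charge values, positions, and units here are arbitrary illustrations.

def debye_huckel_potential(point, charges, kappa):
    phi = 0.0
    for (x, y, z, q) in charges:
        d = math.dist(point, (x, y, z))
        phi += q * math.exp(-kappa * d) / d
    return phi

charges = [(0.0, 0.0, 0.0, 1.0), (2.0, 0.0, 0.0, -0.5)]
p = (10.0, 0.0, 0.0)
screened = debye_huckel_potential(p, charges, kappa=1.0)    # salt screening on
unscreened = debye_huckel_potential(p, charges, kappa=0.0)  # plain Coulomb
```

At distances beyond a Debye length (here 1/kappa), the screened potential is far smaller in magnitude than the bare Coulomb sum, which is why the abstract notes the residual is acceptable at such distances.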
Geodesic active fields--a geometric framework for image registration.
Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe
2011-05-01
In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach for registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is coupled with the regularization term (the length functional) via multiplication as well. As a matter of fact, our proposed geometric model is actually the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions: the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework makes several important contributions. First, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e., the local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field, one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.
Regularity of Solutions of the Nonlinear Sigma Model with Gravitino
NASA Astrophysics Data System (ADS)
Jost, Jürgen; Keßler, Enno; Tolksdorf, Jürgen; Wu, Ruijun; Zhu, Miaomiao
2018-02-01
We propose a geometric setup to study analytic aspects of a variant of the supersymmetric two-dimensional nonlinear sigma model. This functional extends the functional of Dirac-harmonic maps by gravitino fields. The system of Euler-Lagrange equations of the two-dimensional nonlinear sigma model with gravitino is calculated explicitly. The gravitino terms pose additional analytic difficulties in showing smoothness of its weak solutions, which are overcome using Rivière's regularity theory and Riesz potential theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balakin, A. B.; Zayats, A. E.; Sushkov, S. V.
2007-04-15
We discuss exact solutions of a three-parameter nonminimal Einstein-Yang-Mills model, which describe wormholes of a new type. These wormholes are supported by an SU(2)-symmetric Yang-Mills field nonminimally coupled to gravity, with the Wu-Yang ansatz used for the gauge field. We distinguish between regular solutions, describing traversable nonminimal Wu-Yang wormholes, and black wormholes possessing one or two event horizons. The relation between the asymptotic mass of the regular traversable Wu-Yang wormhole and its throat radius is analyzed.
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
NASA Astrophysics Data System (ADS)
Kuo, Peng-Hsuan; Zhang, Bo-Cong; Su, Chie-Shaan; Liu, Jun-Jen; Sheu, Ming-Thau
2017-08-01
In this study, cooling sonocrystallization was used to recrystallize an active pharmaceutical ingredient, sulfathiazole, using methanol as the solvent. The effects of three operating parameters (sonication intensity, sonication duration, and solution concentration) on the recrystallization were investigated using a 2k factorial design. The solid-state properties of sulfathiazole, including the mean particle size, crystal habit, and polymorphic form, were analyzed. Analysis of variance showed that the effect of the sonication intensity, the cross-interaction effect of sonication intensity/sonication duration, and the cross-interaction effect of sonication intensity/solution concentration on the recrystallization were significant. The results obtained using the 2k factorial design indicated that a combination of high sonication intensity and long sonication duration is not favorable for sonocrystallization, especially at a high solution concentration. A comparison of the solid-state properties of the original and the recrystallized sulfathiazole revealed that the crystal habit of the recrystallized sulfathiazole was more regular and that its mean particle size could be reduced to approximately 10 μm. Furthermore, the analytical results obtained using PXRD, DSC, and FTIR spectroscopy indicated that the polymorphic purity of sulfathiazole improved from the original Form III/IV mixture to Form III after sonocrystallization.
Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core
NASA Astrophysics Data System (ADS)
Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey
2017-05-01
SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides the global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation on an unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and a reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using a reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to a 25% reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of a solution in the region of interest.
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
Numerical solution of the unsteady Navier-Stokes equation
NASA Technical Reports Server (NTRS)
Osher, Stanley J.; Engquist, Bjoern
1985-01-01
The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws are discussed. These schemes share many desirable properties with total variation diminishing schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. In this paper a uniformly second-order approximation is constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
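The piecewise-linear reconstruction step described above can be sketched with the classic minmod limiter, which keeps the reconstruction from creating new extrema. This is a generic illustration of limited reconstruction from cell averages, not the paper's specific uniformly second-order (UNO-type) scheme; the cell-average data are made up.

```python
# Sketch: piecewise-linear reconstruction of a function from its cell
# averages using a minmod-limited slope. The limiter prevents the
# reconstruction from introducing new extrema -- the basic mechanism
# behind nonoscillatory schemes (the paper's uniformly second-order
# construction refines this idea).

def minmod(a, b):
    if a > 0 and b > 0:
        return min(a, b)
    if a < 0 and b < 0:
        return max(a, b)
    return 0.0          # slopes disagree in sign: flatten to avoid overshoot

def reconstruct_edges(avgs, dx=1.0):
    """Return (left, right) interface values for each interior cell."""
    edges = []
    for i in range(1, len(avgs) - 1):
        slope = minmod(avgs[i] - avgs[i - 1], avgs[i + 1] - avgs[i]) / dx
        edges.append((avgs[i] - 0.5 * dx * slope,
                      avgs[i] + 0.5 * dx * slope))
    return edges

avgs = [0.0, 0.0, 0.3, 1.0, 1.0]   # cell averages of a smeared step
edges = reconstruct_edges(avgs)
```

For this monotone data, every reconstructed interface value stays between the neighboring averages, so the discrete solution acquires no new extrema; cells adjacent to the plateau are flattened entirely.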
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2017-10-01
Over the recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, which are more accurate than the classic Born and Rytov approximations, were proposed in the field of electromagnetic modeling. Those developments can be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluate the numerical merits of those methods and provide an estimation of their numerical complexity. In our numerical realization we use a matrix-free implementation of the corresponding integral operator. We study the accuracy of those approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that this approach improves the estimation of the data gradient compared to the Born approximation. The developed inversion algorithm is based on conjugate-gradient type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.
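The relationship between the Born approximation and the full Lippmann-Schwinger solution can be seen in a scalar caricature: the integral equation u = u0 + GVu becomes u = u0 + g*v*u with numbers in place of operators. The constants below are arbitrary, chosen only so the Neumann series contracts; this illustrates the series structure, not the paper's quasi-analytical scheme.

```python
# Sketch: scalar caricature of the Lippmann-Schwinger equation
#   u = u0 + g*v*u,
# with numbers standing in for the Green's operator (g) and scatterer (v).
# The exact solution is u = u0 / (1 - g*v); the Born approximation keeps
# only the first term of the Neumann series, u ~= u0 * (1 + g*v).
# Values are arbitrary, chosen so |g*v| < 1 (a contraction).

u0, g, v = 1.0, 0.4, 0.5

exact = u0 / (1 - g * v)       # full multiple-scattering solution
born = u0 * (1 + g * v)        # single-scattering (Born) truncation

# Fixed-point (Neumann series) iteration converges to the exact value
u = u0
for _ in range(50):
    u = u0 + g * v * u
```

The Born value undershoots the exact solution by the neglected multiple-scattering terms; more accurate approximations of the kind surveyed above sit between the one-term truncation and the fully converged series.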
Remarks on regular black holes
NASA Astrophysics Data System (ADS)
Nicolini, Piero; Smailagic, Anais; Spallucci, Euro
Recently, it has been claimed by Chinaglia and Zerbini that the curvature singularity is present even in the so-called regular black hole solutions of the Einstein equations. In this brief note, we show that this criticism is devoid of any physical content.
On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
NASA Astrophysics Data System (ADS)
Popov, Nikolay S.
2017-11-01
Solvability of some initial-boundary value problems for linear hyperbolic equations of the fourth order is studied. A condition on the lateral boundary in these problems relates the values of a solution or the conormal derivative of a solution to the values of some integral operator applied to a solution. Nonlocal boundary-value problems for one-dimensional hyperbolic second-order equations with integral conditions on the lateral boundary were considered in the articles by A.I. Kozhanov. Higher-dimensional hyperbolic equations of higher order with integral conditions on the lateral boundary were not studied earlier. The existence and uniqueness theorems of regular solutions are proven. The method of regularization and the method of continuation in a parameter are employed to establish solvability.
Exact solutions of unsteady Korteweg-de Vries and time regularized long wave equations.
Islam, S M Rayhanul; Khan, Kamruzzaman; Akbar, M Ali
2015-01-01
In this paper, we implement the exp(-Φ(ξ))-expansion method to construct exact traveling wave solutions for nonlinear evolution equations (NLEEs). Here we consider two model equations, namely the Korteweg-de Vries (KdV) equation and the time regularized long wave (TRLW) equation. These equations play a significant role in nonlinear sciences. We obtained four types of explicit function solutions, namely hyperbolic, trigonometric, exponential and rational function solutions of the variables in the considered equations. It has been shown that the applied method is quite efficient and practically well suited for the aforementioned problems, as well as for other NLEEs that arise in mathematical physics and engineering. PACS numbers: 02.30.Jr, 02.70.Wz, 05.45.Yv, 94.05.Fq.
NASA Astrophysics Data System (ADS)
Deng, Shuxian; Ge, Xinxin
2017-10-01
Considering the non-Newtonian fluid equation for incompressible porous media, we use the properties of operator semigroups and measure spaces, the squeezing principle, Fourier analysis, and a priori estimates in the measure space to discuss the well-posedness of the solution of the equation, its asymptotic behavior, and its topological properties. Through the diffusion regularization method and a compactness argument, we study the overall decay rate of the solution of the equation in a certain space when the initial data are suitably regular. A decay estimate for the solution of the incompressible seepage equation is obtained, and the asymptotic behavior of the solution is derived using the double regularization model and the Duhamel principle.
Approximate Solutions for Ideal Dam-Break Sediment-Laden Flows on Uniform Slopes
NASA Astrophysics Data System (ADS)
Ni, Yufang; Cao, Zhixian; Borthwick, Alistair; Liu, Qingquan
2018-04-01
Shallow water hydro-sediment-morphodynamic (SHSM) models have been applied increasingly widely in hydraulic engineering and geomorphological studies over the past few decades. Analytical and approximate solutions are usually sought to verify such models and therefore confirm their credibility. Dam-break flows are often invoked because such flows normally feature shock waves and contact discontinuities that warrant refined numerical schemes to resolve. While analytical and approximate solutions to clear-water dam-break flows have been available for some time, such solutions are rare for sediment transport in dam-break flows. Here we aim to derive approximate solutions for ideal dam-break sediment-laden flows resulting from the sudden release of a finite volume of frictionless, incompressible water-sediment mixture on a uniform slope. The approximate solutions are presented for three typical sediment transport scenarios, i.e., pure advection, pure sedimentation, and concurrent entrainment and deposition. Although the cases considered in this paper are idealized, the approximate solutions derived facilitate suitable benchmark tests for evaluating SHSM models, especially at present, when shock waves can be numerically resolved accurately with a suite of finite volume methods, while the accuracy of the numerical solutions of contact discontinuities in sediment transport remains generally poorer.
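The classical benchmark that sloped, sediment-laden approximate solutions generalize is Ritter's solution for an ideal clear-water dam break on a horizontal bed, which is easy to evaluate directly. The sketch below is that textbook baseline, not the paper's solutions.

```python
import math

# Sketch: Ritter's classical solution for an ideal (frictionless,
# clear-water) dam break on a horizontal bed -- the baseline that sloped,
# sediment-laden approximate solutions generalize. Dam at x = 0, initial
# depth h0 in the reservoir (x < 0), dry bed downstream.

def ritter_depth(x, t, h0, grav=9.81):
    c0 = math.sqrt(grav * h0)          # initial wave celerity
    if x <= -c0 * t:
        return h0                      # undisturbed reservoir
    if x >= 2 * c0 * t:
        return 0.0                     # ahead of the wet/dry front
    return (2 * c0 - x / t) ** 2 / (9 * grav)   # rarefaction fan

# A well-known check: the depth at the dam site is constant, 4*h0/9,
# for all t > 0.
h_at_dam = ritter_depth(0.0, 2.0, h0=1.0)      # -> 4/9
```

The constant depth 4*h0/9 at the dam site makes a convenient pointwise test when this solution is used to verify a finite volume shallow-water solver.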
A regularization of the Burgers equation using a filtered convective velocity
NASA Astrophysics Data System (ADS)
Norgard, Greg; Mohseni, Kamran
2008-08-01
This paper examines the properties of a regularization of the Burgers equation in one and multiple dimensions using a filtered convective velocity, which we have dubbed the convectively filtered Burgers (CFB) equation. A physical motivation behind the filtering technique is presented. An existence and uniqueness theorem for multiple dimensions and a general class of filters is proven. Multiple invariants of motion are found for the CFB equation which are shown to be shared with the viscous and inviscid Burgers equations. Traveling wave solutions are found for a general class of filters and are shown to converge to weak solutions of the inviscid Burgers equation with the correct wave speed. Numerical simulations are conducted in 1D and 2D cases where the shock behavior, shock thickness and kinetic energy decay are examined. Energy spectra are also examined and are shown to be related to the smoothness of the solutions. This approach is presented with the hope of being extended to shock regularization of compressible Euler equations.
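The filtered-convective-velocity idea can be sketched with one explicit step of a discrete CFB equation, using a simple 3-point moving average as the filter. The paper analyzes a general class of filters; the box filter, grid, and data below are assumptions chosen for illustration.

```python
import math

# Sketch: one explicit upwind step of a discrete convectively filtered
# Burgers (CFB) equation u_t + ubar * u_x = 0, where ubar is a local box
# filter of u. The paper treats a general class of filters; a 3-point
# periodic moving average stands in here.

def box_filter(u):
    n = len(u)
    return [(u[i - 1] + u[i] + u[(i + 1) % n]) / 3.0 for i in range(n)]

def cfb_step(u, dt, dx):
    ubar = box_filter(u)
    n = len(u)
    # upwind differencing, valid because ubar > 0 for this data
    return [u[i] - dt / dx * ubar[i] * (u[i] - u[i - 1]) for i in range(n)]

n = 128
dx = 1.0 / n
u = [1.5 + 0.5 * math.sin(2 * math.pi * i * dx) for i in range(n)]  # u > 0
dt = 0.25 * dx / 2.0          # Courant number <= 0.25 for max speed 2
u1 = cfb_step(u, dt, dx)
```

With a positive filtered velocity and Courant number below one, each updated value is a convex combination of old neighbors, so the step satisfies a discrete maximum principle: no new overshoots, consistent with the nonoscillatory character of the regularization.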
Convergence of Spectral Discretizations of the Vlasov--Poisson System
Manzini, G.; Funaro, D.; Delzanno, G. L.
2017-09-26
Here we prove the convergence of a spectral discretization of the Vlasov-Poisson system. The velocity term of the Vlasov equation is discretized using either Hermite functions on the infinite domain or Legendre polynomials on a bounded domain. The spatial term of the Vlasov and Poisson equations is discretized using periodic Fourier expansions. Boundary conditions are treated in weak form through a penalty-type term that can be applied also in the Hermite case. As a matter of fact, stability properties of the approximated scheme descend from this added term. The convergence analysis is carried out in detail for the 1D-1V case, but the results can be generalized to multidimensional domains, obtained as Cartesian products, in both space and velocity. The error estimates show spectral convergence under suitable regularity assumptions on the exact solution.
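The spectral convergence the theorem guarantees (error decaying faster than any power of the mode count for smooth solutions) is easy to observe numerically for the periodic Fourier part. The sketch below approximates a smooth periodic test function by its truncated Fourier series; the function and sample points are arbitrary illustrations.

```python
import math

# Sketch: spectral convergence of a truncated Fourier approximation, the
# error behavior the convergence theorem quantifies (here for the periodic
# Fourier-in-space part; the paper also covers Hermite/Legendre in
# velocity). We approximate the analytic periodic function
# f(x) = exp(sin x) and watch the max error collapse as modes are added.

def f(x):
    return math.exp(math.sin(x))

def fourier_error(n_modes, n_quad=256):
    # trapezoidal rule: spectrally accurate for periodic integrands
    xs = [2 * math.pi * k / n_quad for k in range(n_quad)]
    a = [sum(f(x) * math.cos(m * x) for x in xs) * 2 / n_quad
         for m in range(n_modes + 1)]
    b = [sum(f(x) * math.sin(m * x) for x in xs) * 2 / n_quad
         for m in range(n_modes + 1)]

    def approx(x):
        return a[0] / 2 + sum(a[m] * math.cos(m * x) + b[m] * math.sin(m * x)
                              for m in range(1, n_modes + 1))

    test_pts = [0.1 + 6.0 * k / 97 for k in range(97)]
    return max(abs(f(x) - approx(x)) for x in test_pts)

errs = [fourier_error(n) for n in (2, 4, 8)]   # rapidly decreasing
```

Doubling the number of retained modes shrinks the error by orders of magnitude rather than a fixed algebraic factor, which is the practical signature of the spectral convergence established in the analysis.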
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Simonetto, Andrea
This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable on low-cost microcontrollers that accompany power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method makes it possible to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.
General phase regularized reconstruction using phase cycling.
Ong, Frank; Cheng, Joseph Y; Lustig, Michael
2018-07-01
To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Study of X(5568) in a unitary coupled-channel approximation of BK̄ and B_s π
NASA Astrophysics Data System (ADS)
Sun, Bao-Xi; Dong, Fang-Yong; Pang, Jing-Long
2017-07-01
The potential of the B meson and the pseudoscalar meson is constructed up to the next-to-leading order Lagrangian, and then the BK̄ and B_s π interaction is studied in the unitary coupled-channel approximation. A resonant state with a mass of about 5568 MeV and J^P = 0^+ is generated dynamically, which can be associated with the X(5568) state announced by the D0 Collaboration recently. The mass and the decay width of this resonant state depend on the regularization scale in the dimensional regularization scheme, or the maximum momentum in the momentum cutoff regularization scheme. The scattering amplitude of the vector B meson and the pseudoscalar meson is calculated, and an axial-vector state with a mass near 5620 MeV and J^P = 1^+ is produced. Their partners in the charm sector are also discussed.
Optimal Tikhonov Regularization in Finite-Frequency Tomography
NASA Astrophysics Data System (ADS)
Fang, Y.; Yao, Z.; Zhou, Y.
2017-12-01
The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
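In the SVD framework, Tikhonov regularization amounts to damping small singular values with filter factors, and the resolution matrix falls out of the same decomposition. A minimal sketch of this standard textbook form (the function name `tikhonov_svd` and the test system are illustrative assumptions, not the authors' code):

```python
import numpy as np

def tikhonov_svd(G, d, lam):
    """Tikhonov-regularized solution m = argmin ||G m - d||^2 + lam^2 ||m||^2,
    computed via SVD with filter factors f_i = s_i^2 / (s_i^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    f = s**2 / (s**2 + lam**2)          # damped filter factors
    m = Vt.T @ (f / s * (U.T @ d))      # filtered generalized inverse
    # resolution matrix R = V F V^T maps the true model to the recovered model;
    # it approaches the identity as lam -> 0 for a full-rank problem
    R = Vt.T @ (f[:, None] * Vt)
    return m, R
```

As `lam` grows the solution norm shrinks and the resolution matrix departs from the identity, which is exactly the tradeoff the regularization analysis explores.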
Boundary Approximation Methods for Solving Elliptic Problems on Unbounded Domains
NASA Astrophysics Data System (ADS)
Li, Zi-Cai; Mathon, Rudolf
1990-08-01
Boundary approximation methods with partial solutions are presented for solving a complicated problem on an unbounded domain, with both a crack singularity and a corner singularity. Also an analysis of partial solutions near the singular points is provided. These methods are easy to apply, have good stability properties, and lead to highly accurate solutions. Hence, boundary approximation methods with partial solutions are recommended for the treatment of elliptic problems on unbounded domains provided that piecewise solution expansions, in particular, asymptotic solutions near the singularities and infinity, can be found.
21 CFR 606.65 - Supplies and reagents.
Code of Federal Regulations, 2014 CFR
2014-04-01
... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...
21 CFR 606.65 - Supplies and reagents.
Code of Federal Regulations, 2012 CFR
2012-04-01
... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...
21 CFR 606.65 - Supplies and reagents.
Code of Federal Regulations, 2013 CFR
2013-04-01
... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...
21 CFR 606.65 - Supplies and reagents.
Code of Federal Regulations, 2011 CFR
2011-04-01
... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...
Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.
Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob
2011-03-01
We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ 1 and ℓ 2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent, and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find considerable speedup between our algorithm and competing methods.
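The pathwise machinery — cyclical coordinate descent with soft-thresholding updates and warm starts along a decreasing sequence of penalties — can be sketched on the squared-error analogue of the Cox partial likelihood. This is an illustrative sketch under that simplifying assumption, not the authors' algorithm for the Cox model:

```python
import numpy as np

def soft(z, g):
    """Soft-thresholding operator S(z, g) = sign(z) * max(|z| - g, 0)."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def enet_path(X, y, lams, alpha=0.5, n_iter=200):
    """Cyclical coordinate descent for the elastic net (squared-error
    analogue), with warm starts: each penalty reuses the previous solution."""
    n, p = X.shape
    beta = np.zeros(p)
    path = []
    for lam in lams:                      # warm start: beta carries over
        for _ in range(n_iter):
            for j in range(p):
                r = y - X @ beta + X[:, j] * beta[j]   # partial residual
                z = X[:, j] @ r / n
                beta[j] = soft(z, lam * alpha) / (X[:, j] @ X[:, j] / n
                                                  + lam * (1.0 - alpha))
        path.append(beta.copy())
    return path
```

With a large penalty every coefficient is thresholded to zero; as the penalty shrinks toward zero (with `alpha=1`, the lasso end of the elastic net) the solution approaches ordinary least squares.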
NASA Astrophysics Data System (ADS)
Cho, Yumi
2018-05-01
We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems, which approach the regular problems considered as the gradient variable goes to infinity.
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency-domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data-space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method on a number of synthetic inversions and we apply it to real data collected in the Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspaces. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared.
We prove a theorem of almost-always lucky failure in the case of a right-hand side that depends analytically on frequency. The operator's null space is treated by decomposing the solution into a part lying in the null space and a part orthogonal to it.
Generalized matrix summability of a conjugate derived Fourier series.
Mursaleen, M; Alotaibi, Abdullah
2017-01-01
The study of infinite matrices is important in the theory of summability and in approximation. In particular, Toeplitz matrices or regular matrices and almost regular matrices have been very useful in this context. In this paper, we propose to use a more general matrix method to obtain necessary and sufficient conditions to sum the conjugate derived Fourier series.
Vacuum polarization in the field of a multidimensional global monopole
NASA Astrophysics Data System (ADS)
Grats, Yu. V.; Spirin, P. A.
2016-11-01
An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values <φ²>_ren and <T⁰₀>_ren have been derived by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.
Hesford, Andrew J.; Waag, Robert C.
2010-01-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
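The idea of computing Green's-function convolutions on a regular grid with zero-padded FFTs, instead of direct pairwise summation, can be sketched as follows. This is a generic illustration of FFT-based linear convolution with a translation-invariant kernel (the function name and test kernel are assumptions), not the authors' FMM implementation:

```python
import numpy as np

def fft_convolve_green(src, g_kernel):
    """Convolve grid source amplitudes with a translation-invariant
    Green's-function kernel using zero-padded FFTs (linear, not circular)."""
    n0 = src.shape[0] + g_kernel.shape[0] - 1
    n1 = src.shape[1] + g_kernel.shape[1] - 1
    F = np.fft.rfft2(src, (n0, n1)) * np.fft.rfft2(g_kernel, (n0, n1))
    full = np.fft.irfft2(F, (n0, n1))
    # crop to "same" size, centered on the kernel origin
    k0, k1 = g_kernel.shape[0] // 2, g_kernel.shape[1] // 2
    return full[k0:k0 + src.shape[0], k1:k1 + src.shape[1]]
```

For N grid sources and a fixed-size interaction stencil this costs O(N log N) rather than the O(N²) of direct summation, which is the saving exploited for the finest-level FMM interactions.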
Comment on "Construction of regular black holes in general relativity"
NASA Astrophysics Data System (ADS)
Bronnikov, Kirill A.
2017-12-01
We claim that the paper by Zhong-Ying Fan and Xiaobao Wang on nonlinear electrodynamics coupled to general relativity [Phys. Rev. D 94, 124027 (2016)], although correct in general, in some respects repeats previously obtained results without giving proper references. There is also an important point missing in this paper, which is necessary for understanding the physics of the system: in solutions with an electric charge, a regular center requires a non-Maxwell behavior of the Lagrangian function L(f), f = F_μν F^μν, at small f. Therefore, in all electric regular black hole solutions with a Reissner-Nordström asymptotic, the Lagrangian L(f) is different in different parts of space, and the electromagnetic field behaves in a singular way at surfaces where L(f) suffers branching.
Uniformly high-order accurate non-oscillatory schemes, 1
NASA Technical Reports Server (NTRS)
Harten, A.; Osher, S.
1985-01-01
The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws was begun. These schemes share many desirable properties with total variation diminishing schemes (TVD), but TVD schemes have at most first order accuracy, in the sense of truncation error, at extrema of the solution. A uniformly second order approximation was constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
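The nonoscillatory piecewise-linear reconstruction from cell averages can be illustrated with a minmod-limited slope, a standard choice for such reconstructions. This sketch only conveys the key property that interface values create no new extrema; it is not the specific reconstruction of the paper:

```python
import numpy as np

def minmod(a, b):
    """Zero when the two slopes disagree in sign; otherwise the one
    of smaller magnitude — this limiting prevents new extrema."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct(u, dx):
    """Piecewise-linear reconstruction of cell-average data with minmod
    slopes; returns left/right interface values for each interior cell."""
    s = minmod((u[1:-1] - u[:-2]) / dx, (u[2:] - u[1:-1]) / dx)
    left = u[1:-1] - 0.5 * dx * s
    right = u[1:-1] + 0.5 * dx * s
    return left, right
```

Near a discontinuity the limited slope collapses to zero (no overshoot), while in smooth monotone regions the reconstruction recovers the exact linear profile.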
Handwashing with soap or alcoholic solutions? A randomized clinical trial of its effectiveness.
Zaragoza, M; Sallés, M; Gomez, J; Bayas, J M; Trilla, A
1999-06-01
The effectiveness of an alcoholic solution compared with the standard hygienic handwashing procedure during regular work in clinical wards and intensive care units of a large public university hospital in Barcelona was assessed. A prospective, randomized clinical trial with crossover design, paired data, and blind evaluation was done. Eligible health care workers (HCWs) included permanent and temporary HCWs of wards and intensive care units. From each category, a random sample of persons was selected. HCWs were randomly assigned to regular handwashing (liquid soap and water) or handwashing with the alcoholic solution by using a crossover design. The number of colony-forming units on agar plates from hand prints in 3 different samples was counted. A total of 47 HCWs were included. The average reduction in the number of colony-forming units from samples before handwashing to samples after handwashing was 49.6% for soap and water and 88.2% for the alcoholic solution. When both methods were compared, the average number of colony-forming units recovered after the procedure showed a statistically significant difference in favor of the alcoholic solution (P <.001). The alcoholic solution was well tolerated by HCWs. The overall acceptance rate was classified as "good" by 72% of HCWs after 2 weeks' use. Of all HCWs included, 9.3% stated that the use of the alcoholic solution worsened minor pre-existing skin conditions. Although the regular use of hygienic soap and water handwashing procedures is the gold standard, the use of alcoholic solutions is effective and safe and deserves more attention, especially in situations in which the handwashing compliance rate is hampered by architectural problems (lack of sinks) or nursing work overload.
Regular black holes from semi-classical down to Planckian size
NASA Astrophysics Data System (ADS)
Spallucci, Euro; Smailagic, Anais
In this paper, we review various models of curvature-singularity-free black holes (BHs). In the first part of the review, we describe semi-classical solutions of the Einstein equations which, however, contain a "quantum" input through the matter source. We start by reviewing the early model by Bardeen where the metric is regularized by hand through a short-distance cutoff, which is justified in terms of nonlinear electrodynamical effects. This toy model is useful to point out the common features shared by all regular semi-classical black holes. Then, we solve the Einstein equations with a Gaussian source encoding the quantum spread of an elementary particle. We identify the a priori arbitrary Gaussian width with the Compton wavelength of the quantum particle. This Compton-Gauss model leads to an estimate of the terminal density that a gravitationally collapsed object can achieve. We identify this density with the Planck density, and reformulate the Gaussian model assuming this as its peak density. All these models are physically reliable as long as the BH mass is large compared with the Planck mass. In the truly Planckian regime, the semi-classical approximation breaks down. In this case, a fully quantum BH description is needed. In the last part of this paper, we propose a nongeometrical quantum model of Planckian BHs implementing the Holographic Principle and realizing the "classicalization" scenario recently introduced by Dvali and collaborators. The classical relation between the mass and radius of the BH emerges only in the classical limit, far away from the Planck scale.
Stochastic resonance in the majority vote model on regular and small-world lattices
NASA Astrophysics Data System (ADS)
Krawiecki, A.
2017-11-01
The majority vote model with two states on regular and small-world networks is considered under the influence of periodic driving. Monte Carlo simulations show that the time-dependent magnetization, playing the role of the output signal, exhibits maximum periodicity at nonzero values of the internal noise parameter q, which is manifested as the occurrence of the maximum of the spectral power amplification; the location of the maximum depends in a nontrivial way on the amplitude and frequency of the periodic driving as well as on the network topology. This indicates the appearance of stochastic resonance in the system as a function of the intensity of the internal noise. Besides, for low frequencies and for certain narrow ranges of the amplitudes of the periodic driving, double maxima of the spectral power amplification as a function of q occur, i.e., stochastic multiresonance appears. The above-mentioned results quantitatively agree with those obtained from numerical simulations of the mean-field equations for the time-dependent magnetization. In contrast, analytic solutions for the spectral power amplification obtained from the latter equations using the linear response approximation deviate significantly from the numerical results since the effect of the periodic driving on the system is not small even for vanishing amplitude.
Regularity estimates up to the boundary for elliptic systems of difference equations
NASA Technical Reports Server (NTRS)
Strikwerda, J. C.; Wade, B. A.; Bube, K. P.
1986-01-01
Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.
NASA Astrophysics Data System (ADS)
Christenson, J. G.; Austin, R. A.; Phillips, R. J.
2018-05-01
The phonon Boltzmann transport equation is used to analyze model problems in one and two spatial dimensions, under transient and steady-state conditions. New, explicit solutions are obtained by using the P1 and P3 approximations, based on expansions in spherical harmonics, and are compared with solutions from the discrete ordinates method. For steady-state energy transfer, it is shown that analytic expressions derived using the P1 and P3 approximations agree quantitatively with the discrete ordinates method, in some cases for large Knudsen numbers, and always for Knudsen numbers less than unity. However, for time-dependent energy transfer, the PN solutions differ qualitatively from converged solutions obtained by the discrete ordinates method. Although they correctly capture the wave-like behavior of energy transfer at short times, the P1 and P3 approximations rely on one or two wave velocities, respectively, yielding abrupt, step-changes in temperature profiles that are absent when the angular dependence of the phonon velocities is captured more completely. It is shown that, with the gray approximation, the P1 approximation is formally equivalent to the so-called "hyperbolic heat equation." Overall, these results support the use of the PN approximation to find solutions to the phonon Boltzmann transport equation for steady-state conditions. Such solutions can be useful in the design and analysis of devices that involve heat transfer at nanometer length scales, where continuum-scale approaches become inaccurate.
ERIC Educational Resources Information Center
Shumway, Richard J.
1989-01-01
Illustrated is the problem of solving equations and some different strategies students might employ when using available technology. Gives illustrations for: exact solutions, approximate solutions, and approximate solutions which are graphically generated. (RT)
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.
Numerical Differentiation of Noisy, Nonsmooth Data
Chartrand, Rick
2011-01-01
We consider the problem of differentiating a function specified by noisy data. Regularizing the differentiation process avoids the noise amplification of finite-difference methods. We use total-variation regularization, which allows for discontinuous solutions. The resulting simple algorithm accurately differentiates noisy functions, including those which have a discontinuous derivative.
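The variational idea can be sketched as minimizing 0.5‖Au − (f − f₀)‖² + α·TV(u) over candidate derivatives u, where A is a cumulative-sum antiderivative operator. The plain gradient descent and the smoothed TV term below are simplifying assumptions for illustration; the paper's actual algorithm is not reproduced here:

```python
import numpy as np

def tv_derivative(f, dx, alpha=0.05, n_iter=5000, lr=0.02, eps=1e-8):
    """Estimate u ~ f' by gradient descent on
    0.5*||A u - (f - f[0])||^2 + alpha*TV(u), with A = cumulative sum * dx
    and TV smoothed as sum of sqrt((Du)^2 + eps)."""
    u = np.gradient(f, dx)                 # noisy initial guess
    b = f - f[0]
    for _ in range(n_iter):
        r = np.cumsum(u) * dx - b          # data residual A u - b
        grad_data = np.cumsum(r[::-1])[::-1] * dx   # adjoint A^T r
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps)      # smoothed TV subgradient weights
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u -= lr * (grad_data + alpha * grad_tv)
    return u
```

On noisy samples of |x| the naive finite-difference derivative oscillates wildly, while the TV-regularized estimate flattens the noise yet keeps the genuine jump from -1 to +1 at the kink.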
Approximate Solutions for Flow with a Stretching Boundary due to Partial Slip
Filobello-Nino, U.; Vazquez-Leal, H.; Sarmiento-Reyes, A.; Benhammouda, B.; Jimenez-Fernandez, V. M.; Pereyra-Diaz, D.; Perez-Sesma, A.; Cervantes-Perez, J.; Huerta-Chua, J.; Sanchez-Orea, J.; Contreras-Hernandez, A. D.
2014-01-01
The homotopy perturbation method (HPM) is coupled with versions of Laplace-Padé and Padé methods to provide an approximate solution to the nonlinear differential equation that describes the behaviour of a flow with a stretching flat boundary due to partial slip. Comparing results between approximate and numerical solutions, we concluded that our results are capable of providing an accurate solution and are extremely efficient. PMID:27433526
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two-level of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. And the resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H3-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
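The extrapolation idea at the heart of the method — combining two second-order approximations at step sizes h and h/2 so the leading error term cancels — can be sketched generically. This illustrates Richardson extrapolation itself (applied here to a central difference for concreteness), not the authors' multigrid implementation:

```python
import math

def richardson(A, h):
    """One Richardson step for A(h) = exact + C*h**2 + O(h**4):
    the combination (4*A(h/2) - A(h))/3 cancels the h**2 error term."""
    return (4.0 * A(h / 2.0) - A(h)) / 3.0

def central(f, x0):
    """Second-order central-difference approximation to f'(x0),
    returned as a function of the step size h."""
    return lambda h: (f(x0 + h) - f(x0 - h)) / (2.0 * h)
```

For f = exp at x0 = 0 the raw central difference at h = 0.1 carries an O(h²) error near 1.7e-3, while one extrapolation step drops the error to O(h⁴).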
Influence of Initial Correlations on Evolution of a Subsystem in a Heat Bath and Polaron Mobility
NASA Astrophysics Data System (ADS)
Los, Victor F.
2017-08-01
A regular approach to accounting for initial correlations, which allows one to go beyond the unrealistic random phase (initial product state) approximation in deriving the evolution equations, is suggested. Exact homogeneous (time-convolution and time-convolutionless) equations for a relevant part of the two-time equilibrium correlation function for the dynamic variables of a subsystem interacting with a boson field (heat bath) are obtained. No conventional approximation like RPA or Bogoliubov's principle of weakening of initial correlations is used. The obtained equations take into account the initial correlations in the kernel governing their evolution. The solution to these equations is found in the second order of the kernel expansion in the electron-phonon interaction, which demonstrates that generally the initial correlations influence the correlation function's evolution in time. It is explicitly shown that this influence vanishes on a large timescale (actually as t → ∞) and the evolution process enters an irreversible kinetic regime. The developed approach is applied to the Fröhlich polaron, and the low-temperature polaron mobility (which was under long-time debate) is found with a correction due to initial correlations.
Elimination of arbitrary constants in the relativistic theory of quanta
NASA Astrophysics Data System (ADS)
This article shows how the influence of the undetermined constants in the integral theory of collisions can be avoided. A rule is given by which the probability amplitudes (S-matrix) may be calculated in terms of a given local action. The procedure of the integral method differs essentially from the differential method employed by Tomonaga, Schwinger, Feynman and Dyson in that the two sorts of diverging terms occurring in the formal solution of a Schrödinger equation are avoided. These two divergences are: 1) the well known "self-energy" divergences, which have since been corrected by methods of regularization (Rivier, Pauli and Villars); 2) the more serious boundary divergences (Stueckelberg) due to the sharp spatio-temporal limitation of the space-time region of evolution V in which the collisions occur. The convergent parts (anomalous g-factor of the electron and the Lamb-Retherford shift) obtained by Schwinger are, in the present theory, the boundary-independent amplitudes in fourth approximation. Up to this approximation the rule eliminates the arbitrary constants from all conservative processes.
HIGHLY ENRICHED URANIUM BLEND DOWN PROGRAM AT THE SAVANNAH RIVER SITE PRESENT AND FUTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magoulas, V; Charles Goergen, C; Ronald Oprea, R
2008-06-05
The Department of Energy (DOE) and Tennessee Valley Authority (TVA) entered into an Interagency Agreement to transfer approximately 40 metric tons of highly enriched uranium (HEU) to TVA for conversion to fuel for the Browns Ferry Nuclear Power Plant. Savannah River Site (SRS) inventories included a significant amount of this material, which resulted from processing spent fuel and surplus materials. The HEU is blended with natural uranium (NU) to low enriched uranium (LEU) with a 4.95% 235U isotopic content and shipped as solution to the TVA vendor. The HEU Blend Down Project provided the upgrades needed to achieve the product throughput and purity required and provided loading facilities. The first blending to low enriched uranium (LEU) took place in March 2003 with the initial shipment to the TVA vendor in July 2003. The SRS shipments have continued on a regular schedule without any major issues for the past 5 years and are due to complete in September 2008. The HEU Blend program is now looking to continue its success by dispositioning an additional approximately 21 MTU of HEU material as part of the SRS Enriched Uranium Disposition Project.
NASA Technical Reports Server (NTRS)
Le Vine, D. M.; Meneghini, R.
1978-01-01
A solution is presented for the electromagnetic fields radiated by an arbitrarily oriented current filament over a conducting ground plane in the case where the current propagates along the filament at the speed of light, and this solution is interpreted in terms of radiation from lightning return strokes. The solution is exact in the fullest sense; no mathematical approximations are made, and the governing differential equations and boundary conditions are satisfied. The solution has the additional attribute of being specified in closed form in terms of elementary functions. This solution is discussed from the point of view of deducing lightning current wave forms from measurements of the electromagnetic fields and understanding the effects of channel tortuosity on the radiated fields. In addition, it is compared with two approximate solutions, the traditional moment approximation and the Fraunhofer approximation, and a set of criteria describing their applicability are presented and interpreted.
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Cooper, D. E.; Cohen, D.
1985-01-01
The effects of a uniform temperature change on the stresses and deformations of composite tubes are investigated. The accuracy of an approximate solution based on the principle of complementary virtual work is determined. Interest centers on tube response away from the ends, so a planar elasticity approach is used. For the approximate solution, a piecewise linear variation of stresses with the radial coordinate is assumed. The results from the approximate solution are compared with the elasticity solution. The stress predictions agree well, particularly the peak interlaminar stresses. Surprisingly, the axial deformations also agree well, despite the fact that the deformations predicted by the approximate solution do not satisfy the interface displacement continuity conditions required by the elasticity solution. The study shows that the axial thermal expansion coefficient of tubes with a specific number of axial and circumferential layers depends on the stacking sequence. This is in contrast to classical lamination theory, which predicts the expansion to be independent of the stacking arrangement. As expected, the sign and magnitude of the peak interlaminar stresses depend on the stacking sequence.
40 CFR 63.2872 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... NESHAP General Provisions. (c) In this section as follows: Accounting month means a time interval defined... consistent and regular basis. An accounting month will consist of approximately 4 to 5 calendar weeks and each accounting month will be of approximately equal duration. An accounting month may not correspond...
Terminal attractors in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
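The finite-time convergence that separates a terminal attractor from a regular one can be seen in the textbook one-dimensional example dx/dt = -x^(1/3), whose right-hand side violates the Lipschitz condition at x = 0. The sketch below uses plain numerical integration, not Zak's neural-network construction, and contrasts it with the exponentially relaxing dx/dt = -x:

```python
import numpy as np

def time_to_converge(f, x0, tol=1e-6, dt=1e-4, t_max=5.0):
    """Forward-Euler integration; returns the time at which |x| first
    drops below tol (or t_max if it never does)."""
    x, t = x0, 0.0
    while t < t_max and abs(x) > tol:
        x += dt * f(x)
        t += dt
    return t

# Terminal attractor: dx/dt = -x**(1/3) is non-Lipschitz at x = 0 and
# reaches the fixed point in finite time t* = (3/2) * x0**(2/3) = 1.5.
t_term = time_to_converge(lambda x: -np.sign(x) * abs(x) ** (1 / 3), 1.0)
# Regular attractor: dx/dt = -x only approaches 0 asymptotically.
t_reg = time_to_converge(lambda x: -x, 1.0)
assert t_term < t_reg  # finite-time vs. asymptotic convergence
```

With x0 = 1, the terminal trajectory stops near t ≈ 1.5, while the exponential one is still above the tolerance when the integration window ends.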
Electrophysiology of neurones of the inferior mesenteric ganglion of the cat.
Julé, Y; Szurszewski, J H
1983-01-01
Intracellular recordings were obtained from cells in vitro in the inferior mesenteric ganglia of the cat. Neurones could be classified into three types: non-spontaneous, irregular discharging and regular discharging neurones. Non-spontaneous neurones had a stable resting membrane potential and responded with action potentials to indirect preganglionic nerve stimulation and to intracellular injection of depolarizing current. Irregular discharging neurones were characterized by a discharge of excitatory post-synaptic potentials (e.p.s.p.s.) which sometimes gave rise to action potentials. This activity was abolished by hexamethonium bromide, chlorisondamine and d-tubocurarine chloride. Tetrodotoxin and a low Ca2+ -high Mg2+ solution also blocked on-going activity in irregular discharging neurones. Regular discharging neurones were characterized by a rhythmic discharge of action potentials. Each action potential was preceded by a gradual depolarization of the intracellularly recorded membrane potential. Intracellular injection of hyperpolarizing current abolished the regular discharge of action potential. No synaptic potentials were observed during hyperpolarization of the membrane potential. Nicotinic, muscarinic and adrenergic receptor blocking drugs did not modify the discharge of action potentials in regular discharging neurones. A low Ca2+ -high Mg2+ solution also had no effect on the regular discharge of action potentials. Interpolation of an action potential between spontaneous action potentials in regular discharging neurones reset the rhythm of discharge. It is suggested that regular discharging neurones were endogenously active and that these neurones provided synaptic input to irregular discharging neurones. PMID:6140310
Thick de Sitter brane solutions in higher dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dzhunushaliev, Vladimir; Department of Physics and Microelectronic Engineering, Kyrgyz-Russian Slavic University, Bishkek, Kievskaya Str. 44, 720021, Kyrgyz Republic; Folomeev, Vladimir
2009-01-15
We present thick de Sitter brane solutions which are supported by two interacting phantom scalar fields in five-, six-, and seven-dimensional spacetime. It is shown that in all cases regular solutions exist, with anti-de Sitter asymptotics (5D problem) or a flat asymptotic far from the brane (6D and 7D cases). We also discuss the stability of our solutions.
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
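The refine-or-enrich decision at the heart of such hp-strategies can be caricatured in a few lines. Everything below (the halving-based error model, the boolean smoothness flag) is a hypothetical stand-in for the paper's element-residual estimator, intended only to show the control flow:

```python
# Sketch of one hp-adaptive pass: elements with large error indicators are
# either p-enriched (smooth local solution) or h-refined (non-smooth).
# The error-reduction factors below are illustrative assumptions, not the
# a priori estimates of the paper.

def adapt(elements, target_error):
    """elements: list of dicts with keys 'h', 'p', 'error', 'smooth'."""
    budget = target_error / len(elements)   # equidistribute the target
    new = []
    for e in elements:
        if e["error"] <= budget:
            new.append(e)                   # accurate enough: keep as is
        elif e["smooth"]:
            # smooth local solution: raising p converges rapidly
            new.append({**e, "p": e["p"] + 1, "error": e["error"] / 2 ** e["p"]})
        else:
            # non-smooth (e.g. near a discontinuity): split the element
            half = {**e, "h": e["h"] / 2, "error": e["error"] / 2 ** (e["p"] + 1)}
            new.extend([dict(half), dict(half)])
    return new

mesh = [{"h": 1.0, "p": 1, "error": 0.5, "smooth": True},
        {"h": 1.0, "p": 1, "error": 0.4, "smooth": False}]
mesh = adapt(mesh, target_error=0.1)
assert sum(e["error"] for e in mesh) < 0.9  # total error estimate reduced
```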
Effects of regular and whitening dentifrices on remineralization of bovine enamel in vitro.
Kielbassa, Andrej M; Tschoppe, Peter; Hellwig, Elmar; Wrbas, Karl-Thomas
2009-02-01
To compare in vitro the remineralizing effects of different regular dentifrices and whitening dentifrices (containing pyrophosphates) on predemineralized enamel. Specimens from 84 bovine incisors were embedded in epoxy resin, partly covered with nail varnish, and demineralized in a lactic acid solution (37 degrees C, pH 5.0, 8 days). Parts of the demineralized areas were covered with nail varnish, and specimens were randomly assigned to 6 groups. Subsequently, specimens were exposed to a remineralizing solution (37 degrees C, pH 7.0, 60 days) and brushed 3 times a day (1:3 slurry with remineralizing solution) with 1 of 3 regular dentifrices designed for anticaries (group 1, amine fluoride; group 2, sodium fluoride) or periodontal (group 3, amine/stannous fluoride) purposes or a whitening dentifrice containing pyrophosphates (group 4, sodium fluoride). An experimental dentifrice (group 5, without pyrophosphates/fluorides) and a whitening dentifrice (group 6, monofluorophosphate) served as controls. Mineral loss and lesion depths were evaluated from contact microradiographs, and intergroup comparisons were performed using the closed-test procedure (alpha = .05). Compared to baseline, specimens brushed with the dentifrices containing stannous/amine fluorides revealed significant mineral gains and lesion depth reductions (P < .05). Concerning the reacquired mineral, the whitening dentifrice performed worse than the regular dentifrices (P > .05), while mineral gain as well as lesion depth reduction was negligible in the control groups. Dentifrices containing pyrophosphates perform worse than regular dentifrices but do not necessarily affect remineralization. Unless remineralizing efficacy is proven, whitening dentifrices should be recommended only after deliberate consideration in caries-prone patients.
Assessment of polarization effect on aerosol retrievals from MODIS
NASA Astrophysics Data System (ADS)
Korkin, S.; Lyapustin, A.
2010-12-01
Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by code MAIAC [1] with and without taking polarization into account. The MAIAC retrievals are based on look-up tables (LUTs). For this work, MAIAC was run using two different LUTs, the first one generated using the scalar code SHARM [2], and the second one generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam, where the z-axis lies along the solar beam direction. In this case, the MSH solution for the anisotropic part is nearly symmetric in azimuth, and is computed analytically. In the scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for an analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin A., Wang Y., Laszlo I., Kahn R., Korkin S., Remer L., Levy R., and Reid J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin A., Muldashev T., Wang Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205 - 247 (2010). 3. Budak, V.P., Korkin S.V. 
On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak V.P., Sarmin S.E. Solution of radiative transfer equation by the method of spherical harmonics in the small angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit S., Saunderson J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak V.P, Klyuykov D.A., Korkin S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147 - 204 (2010).
High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries
NASA Astrophysics Data System (ADS)
Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.
The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles ∝ {(d - 3)}^{-1} that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.
Black-hole solutions with scalar hair in Einstein-scalar-Gauss-Bonnet theories
NASA Astrophysics Data System (ADS)
Antoniou, G.; Bakopoulos, A.; Kanti, P.
2018-04-01
In the context of the Einstein-scalar-Gauss-Bonnet theory, with a general coupling function between the scalar field and the quadratic Gauss-Bonnet term, we investigate the existence of regular black-hole solutions with scalar hair. Based on a previous theoretical analysis, which studied the evasion of the old and novel no-hair theorems, we consider a variety of forms for the coupling function (exponential, even and odd polynomial, inverse polynomial, and logarithmic) that, in conjunction with the profile of the scalar field, satisfy a basic constraint. Our numerical analysis then always leads to families of regular, asymptotically flat black-hole solutions with nontrivial scalar hair. The solution for the scalar field and the profile of the corresponding energy-momentum tensor, depending on the value of the coupling constant, may exhibit a nonmonotonic behavior, an unusual feature that highlights the limitations of the existing no-hair theorems. We also determine and study in detail the scalar charge, horizon area, and entropy of our solutions.
Black hole solution in the framework of arctan-electrodynamics
NASA Astrophysics Data System (ADS)
Kruglov, S. I.
An arctan-electrodynamics coupled with the gravitational field is investigated. We obtain a regular black hole solution that at r →∞ gives corrections to the Reissner-Nordström solution; the corresponding corrections to Coulomb's law are found. We evaluate the mass of the black hole, which is a function of the dimensional parameter β introduced in the model. A magnetically charged black hole is also investigated, and we obtain its magnetic mass and the metric function at r →∞. The regular black hole solution at r → 0 has a de Sitter core. We show that there is no singularity of the Ricci scalar for electrically and magnetically charged black holes. Restrictions on the electric and magnetic fields are found that follow from the requirement of the absence of a superluminal sound speed and the requirement of classical stability.
Optimal feedback control infinite dimensional parabolic evolution systems: Approximation techniques
NASA Technical Reports Server (NTRS)
Banks, H. T.; Wang, C.
1989-01-01
A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.
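In each finite-dimensional approximating problem, the optimal feedback gain comes from a standard Riccati computation. A minimal discrete-time sketch, with a hypothetical double-integrator plant rather than the parabolic systems treated in the paper:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, n_iter=500):
    """Iterate the discrete-time Riccati equation to (approximate)
    convergence and return the optimal state-feedback gain K."""
    P = Q.copy()
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))

x = np.array([1.0, 0.0])
for _ in range(300):
    x = (A - B @ K) @ x                  # closed loop with u = -K x
assert np.linalg.norm(x) < 1e-2          # state driven toward the origin
```

The point of the paper's framework is that gains computed this way for a sequence of Galerkin approximations converge to the infinite-dimensional feedback law; that convergence analysis is not reproduced here.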
Infinite horizon problems on stratifiable state-constraints sets
NASA Astrophysics Data System (ADS)
Hermosilla, C.; Zidani, H.
2015-02-01
This paper deals with a state-constrained control problem. It is well known that, unless some compatibility condition between constraints and dynamics holds, the Value Function does not have enough regularity, or can fail to be the unique constrained viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. Here, we consider the case of a set of constraints having a stratified structure. In this circumstance, the interior of this set may be empty or disconnected, and the admissible trajectories may have no option but to stay on the boundary, without possible approximation from the interior of the constraints. In such situations, the classical pointing qualification hypothesis is not relevant. The discontinuous Value Function is then characterized by means of a system of HJB equations on each stratum that composes the state constraints. This result is obtained under a local controllability assumption which is required only on the strata where some chattering phenomena could occur.
Advances in Modal Analysis Using a Robust and Multiscale Method
NASA Astrophysics Data System (ADS)
Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.
2010-12-01
This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.
NASA Astrophysics Data System (ADS)
Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng
2018-03-01
In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.
A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems
Ying, Wenjun; Henriquez, Craig S.
2013-01-01
This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with a standard finite difference method (FDM) or finite element method (FEM), where the right-hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
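The fast structured-grid solves that the KFBI method leans on can be illustrated with a periodic FFT-based Poisson solver. This is a sketch of the building block only; the boundary-integral machinery and the right-hand-side corrections at irregular nodes are not shown:

```python
import numpy as np

def poisson_fft(f, L=2 * np.pi):
    """Spectral solve of -Laplacian(u) = f on a periodic N x N grid,
    with the zero-mean convention to fix the constant mode."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi       # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                   # avoid division by zero
    u_hat = np.fft.fft2(f) / k2
    u_hat[0, 0] = 0.0                                # zero-mean solution
    return np.real(np.fft.ifft2(u_hat))

# Verify against a manufactured solution u = sin(x) cos(2y), so -Lap(u) = 5u.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(2 * Y)
u = poisson_fft(5.0 * u_exact)
assert np.max(np.abs(u - u_exact)) < 1e-10           # spectrally exact here
```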
Regularized two-step brain activity reconstruction from spatiotemporal EEG data
NASA Astrophysics Data System (ADS)
Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry
2004-10-01
We aim to use EEG source localization in the framework of a Brain-Computer Interface project. We propose here a new reconstruction procedure targeting source (or, equivalently, mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in the whole brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but nonlinear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming to detail the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum-energy and directional-consistency constraints.
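The coarse first step is a sparse-approximation problem: pick the few columns of a lead-field-like matrix that best explain the measurements. A minimal greedy sketch in the spirit of orthogonal matching pursuit, with a random stand-in matrix rather than a real EEG lead field:

```python
import numpy as np

def omp(A, y, n_active):
    """Greedy sparse approximation: repeatedly select the column of A most
    correlated with the residual, then refit on the selected support."""
    residual, support = y.copy(), []
    for _ in range(n_active):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 100))       # 64 sensors, 100 candidate regions
x_true = np.zeros(100)
x_true[[10, 60]] = [1.5, -2.0]           # two active regions
support = omp(A, A @ x_true, n_active=2)
assert support == [10, 60]               # active regions recovered
```

The second, fine-discretization step of the paper (the iterative stochastic estimator) is not sketched here.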
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic regularization parameter selection scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
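For orientation, a bare-bones TV denoising step with a fixed, hand-picked regularization parameter; the paper's point is precisely to select that parameter automatically via GCV, which this sketch does not do. The TV term is smoothed with a small eps so that plain gradient descent applies:

```python
import numpy as np

def tv_denoise(f, lam, n_iter=300, tau=0.05, eps=1e-3):
    """Minimize 0.5*||u - f||^2 + lam * TV_eps(u) by gradient descent,
    where TV_eps is the eps-smoothed isotropic total variation."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u              # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag                  # normalized gradient
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u = u - tau * ((u - f) - lam * div)          # gradient step
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                              # piecewise-constant image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
restored = tv_denoise(noisy, lam=0.1)
assert np.mean((restored - clean) ** 2) < np.mean((noisy - clean) ** 2)
```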
Twisting singular solutions of Bethe's equations
NASA Astrophysics Data System (ADS)
Nepomechie, Rafael I.; Wang, Chunguang
2014-12-01
The Bethe equations for the periodic XXX and XXZ spin chains admit singular solutions, for which the corresponding eigenvalues and eigenvectors are ill-defined. We use a twist regularization to derive conditions for such singular solutions to be physical, in which case they correspond to genuine eigenvalues and eigenvectors of the Hamiltonian.
An approximation theory for the identification of linear thermoelastic systems
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Su, Chien-Hua Frank
1990-01-01
An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.
A Note on Weak Solutions of Conservation Laws and Energy/Entropy Conservation
NASA Astrophysics Data System (ADS)
Gwiazda, Piotr; Michálek, Martin; Świerczewska-Gwiazda, Agnieszka
2018-03-01
A common feature of systems of conservation laws of continuum physics is that they are endowed with natural companion laws which are in such cases most often related to the second law of thermodynamics. This observation easily generalizes to any symmetrizable system of conservation laws: they are endowed with nontrivial companion conservation laws, which are immediately satisfied by classical solutions. Not surprisingly, weak solutions may fail to satisfy companion laws, which are then often relaxed from equality to inequality and take over the role of physical admissibility conditions for weak solutions. We want to answer the question: what is the critical regularity of weak solutions to a general system of conservation laws that ensures an associated companion law is satisfied as an equality? An archetypal example of such a result was derived for the incompressible Euler system in the context of Onsager's conjecture in the early nineties. This general result can serve as a simple criterion for numerous systems of mathematical physics, prescribing the regularity of solutions needed for an appropriate companion law to be satisfied.
The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Chiun-Chang, E-mail: chlee@mail.nhcue.edu.tw
2014-05-15
The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we establish the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. The classical solutions then follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that global non-electroneutrality implies pointwise non-electroneutrality.
NASA Astrophysics Data System (ADS)
Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.
2018-01-01
We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to order O(α_s^2). The problem of the scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.
Heudorf, Ursel; Grünewald, Miriam; Otto, Ulla
2016-01-01
The Commission for Hospital Hygiene and Infection Prevention (KRINKO) updated the recommendations for the prevention of catheter-associated urinary tract infections in 2015. This article describes the implementation of these recommendations in Frankfurt's hospitals in autumn 2015. In two non-ICU wards of each of Frankfurt's 17 hospitals, inspections were performed using a checklist based on the new KRINKO recommendations. In one large hospital, a total of 5 wards were inspected. The inspections covered structure and process quality (operating instructions, training, indication, the placement and maintenance of catheters) and the demonstration of the preparation for insertion of a catheter using an empty bed and an imaginary patient, or insertion in a model. Operating instructions were available in all hospital wards; approximately half of the wards regularly performed training sessions. The indications were largely in line with the recommendations of the KRINKO. Alternatives to urinary tract catheters were available and were used more often than the urinary tract catheters themselves (15.9% vs. 13.5%). In accordance with the recommendations, catheters were placed without antibiotic prophylaxis or the instillation of antiseptic or antimicrobial substances or catheter flushing solutions. The demonstration of catheter placement was conscientiously performed. Need for improvement was seen in the daily documentation and the regular verification of the continuing indication for a urinary catheter, as well as in the omission of regular catheter changes. Overall, the recommendations of the KRINKO on the prevention of catheter-associated urinary tract infections were adequately implemented. However, it cannot be ruled out that in situations with time pressure and staff shortage, the handling of urinary tract catheters may be of lower quality than that observed during the inspections, when catheter insertion was done by two nurses.
Against this background, a sufficient number of qualified staff and regular ward rounds by the hygiene staff appear advisable.
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that explains observable quantities (e.g., concentrations or deposition values) as the product of the source-receptor sensitivity (SRS) matrix, obtained from an atmospheric transport model, and the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of the resulting optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacing the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
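The conventional baseline the abstract refers to can be sketched as a Tikhonov-regularized least squares problem. The following is a minimal toy illustration (the SRS matrix, dimensions, and noise level are all hypothetical stand-ins, not the ETEX setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: the SRS matrix M (n_obs x n_src) maps an unknown
# source-term vector x to observations y = M @ x + noise.
n_obs, n_src = 40, 15
M = rng.random((n_obs, n_src))
x_true = np.abs(rng.normal(size=n_src))
y = M @ x_true + 0.01 * rng.normal(size=n_obs)

def tikhonov_solve(M, y, lam):
    """Regularized least squares: argmin_x ||M x - y||^2 + lam * ||x||^2."""
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ y)

# lam plays the role of the manually tuned uncertainty parameter that the
# variational Bayes approach instead estimates from the data.
x_hat = tikhonov_solve(M, y, lam=1e-3)
```

In the Bayesian reformulation described in the abstract, this fixed `lam` becomes a hyperparameter with its own posterior, updated iteratively alongside the source term.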
NASA Technical Reports Server (NTRS)
Yaron, I.
1974-01-01
Steady state heat or mass transfer in concentrated ensembles of drops, bubbles or solid spheres in uniform, slow viscous motion, is investigated. Convective effects at small Peclet numbers are taken into account by expanding the nondimensional temperature or concentration in powers of the Peclet number. Uniformly valid solutions are obtained, which reflect the effects of dispersed phase content and rate of internal circulation within the fluid particles. The dependence of the range of Peclet and Reynolds numbers, for which regular expansions are valid, on particle concentration is discussed.
Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials
NASA Astrophysics Data System (ADS)
Finster, Felix; Smoller, Joel
2010-09-01
A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
Atmospheric guidance law for planar skip trajectories
NASA Technical Reports Server (NTRS)
Mease, K. D.; Mccreary, F. A.
1985-01-01
The applicability of an approximate, closed-form, analytical solution to the equations of motion, as a basis for a deterministic guidance law for controlling the in-plane motion during a skip trajectory, is investigated. The derivation of the solution by the method of matched asymptotic expansions is discussed. Specific issues that arise in the application of the solution to skip trajectories are addressed. Based on the solution, an explicit formula for the approximate energy loss due to an atmospheric pass is derived. A guidance strategy is proposed that illustrates the use of the approximate solution. A numerical example shows encouraging performance.
NASA Technical Reports Server (NTRS)
Ito, K.
1984-01-01
The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation, the uniform exponential stability of the solution semigroup is preserved under approximation. This is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.
The convergence rate of approximate solutions for nonlinear scalar conservation laws
NASA Technical Reports Server (NTRS)
Nessyahu, Haim; Tadmor, Eitan
1991-01-01
The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L(sup 2)-stability requirement. It is assumed that the approximate solutions are Lip(sup +)-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip'-(semi)norm. It is proved for Lip(sup +)-stable approximate solutions that their Lip'-convergence rate to the entropy solution is of the same order as their Lip'-consistency. The Lip'-convergence rate is then converted into stronger L(sup p) convergence rate estimates.
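As a schematic restatement (constants suppressed; the precise form is in the paper), the one-sided Lipschitz condition behind Lip(sup +)-stability reads:

```latex
% Lip+ stability: for all x > y and t > 0 the approximate solutions satisfy
\frac{u^{\varepsilon}(x,t) - u^{\varepsilon}(y,t)}{x - y} \;\le\; C(t),
% i.e., in differential form, \partial_x u^{\varepsilon} \le C(t),
% in agreement with Oleinik's E-condition for the entropy solution of
% u_t + f(u)_x = 0 with convex flux f.
```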
Black holes in vector-tensor theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heisenberg, Lavinia; Kase, Ryotaro; Tsujikawa, Shinji
We study static and spherically symmetric black hole (BH) solutions in second-order generalized Proca theories with nonminimal vector field derivative couplings to the Ricci scalar, the Einstein tensor, and the double dual Riemann tensor. We find concrete Lagrangians which give rise to exact BH solutions by imposing two conditions: two identical metric components and a constant norm of the vector field. These exact solutions are described by either Reissner-Nordström (RN), stealth Schwarzschild, or extremal RN solutions with a non-trivial longitudinal mode of the vector field. We then numerically construct BH solutions without imposing these conditions. For cubic and quartic Lagrangians with power-law couplings, which encompass vector Galileons as specific cases, we show the existence of BH solutions in which the two non-trivial metric components differ. The quintic-order power-law couplings do not give rise to non-trivial BH solutions regular throughout the horizon exterior. The sixth-order and intrinsic vector-mode couplings can lead to BH solutions with a secondary hair. For all the solutions, the vector field is regular at least at the future or past horizon. The deviation from General Relativity induced by the Proca hair can potentially be tested by future measurements of gravitational waves in the nonlinear regime of gravity.
Development of daily "swath" mascon solutions from GRACE
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas
2016-04-01
Over the past 14 years, the Gravity Recovery and Climate Experiment (GRACE) mission has provided invaluable data, the only of their kind, measuring the total water column in the Earth system. The GRACE project provides monthly average solutions, and the Center for Space Research (CSR) offers experimental quick-look and regularized solutions that implement a sliding-window approach with variable daily weights. The need for special handling of these solutions in data assimilation, and the possibility of capturing the total water storage (TWS) signal at sub-monthly time scales, motivated this study. This study discusses the progress of the development of true daily high-resolution "swath" mascon TWS estimates from GRACE using Tikhonov regularization. These solutions include estimates of daily TWS for the mascon elements that were "observed" by the GRACE satellites on a given day. This paper discusses the computation techniques and the signal, error, and uncertainty characterization of these daily solutions. We discuss comparisons with the official GRACE RL05 solutions and with the CSR mascon solution to characterize the impact on science results, especially at sub-monthly time scales. The evaluation emphasizes temporal signal characteristics and is validated against in-situ data sets and multiple models.
Hip-hop solutions of the 2N-body problem
NASA Astrophysics Data System (ADS)
Barrabés, Esther; Cors, Josep Maria; Pinyol, Conxita; Soler, Jaume
2006-05-01
Hip-hop solutions of the 2N-body problem with equal masses are shown to exist using an analytic continuation argument. These solutions are close to planar regular 2N-gon relative equilibria with small vertical oscillations. For fixed N, an infinity of these solutions are three-dimensional choreographies, with all the bodies moving along the same closed curve in the inertial frame.
Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Adamian, A.
1988-01-01
An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
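The separation into a regulator Riccati equation and an estimator Riccati equation, as described above, can be illustrated on a small finite-dimensional system. This is a generic sketch (the oscillator matrices and weights are assumptions, not the paper's flexible-structure model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative system: a lightly damped oscillator x' = A x + B u, y = C x.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2); R = np.array([[1.0]])   # LQR state/control weights (assumed)
W = np.eye(2); V = np.array([[0.1]])   # process/measurement noise covariances (assumed)

# Regulator Riccati equation -> state-feedback gain K
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Estimator (dual) Riccati equation -> Kalman filter gain L
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

# Separation principle: the LQG closed loop is stable because both
# A - B K (regulator) and A - L C (estimator) are stable.
```

In the approximation scheme of the abstract, a sequence of such matrix pairs (one per finite element or modal truncation order) converges to the gains of the infinite-dimensional problem.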
NASA Technical Reports Server (NTRS)
Adamczyk, J. L.
1974-01-01
An approximate solution is reported for the unsteady aerodynamic response of an infinite swept wing encountering a vertical oblique gust in a compressible stream. The approximate expressions are of closed form and do not require excessive computer storage or computation time, and further, they are in good agreement with the results of exact theory. This analysis is used to predict the unsteady aerodynamic response of a helicopter rotor blade encountering the trailing vortex from a previous blade. Significant effects of three dimensionality and compressibility are evident in the results obtained. In addition, an approximate solution for the unsteady aerodynamic forces associated with the pitching or plunging motion of a two dimensional airfoil in a subsonic stream is presented. The mathematical form of this solution approaches the incompressible solution as the Mach number vanishes, the linear transonic solution as the Mach number approaches one, and the solution predicted by piston theory as the reduced frequency becomes large.
NASA Astrophysics Data System (ADS)
Petržala, Jaromír
2018-07-01
The knowledge of the emission function of a city is crucial for the simulation of sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper deals with a theoretical feasibility study of various approaches to solving the given inverse problem. In particular, it means testing the fitness of various stabilizing functionals within Tikhonov's regularization. Further, the L-curve and generalized cross validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model for the calculation of the sky spectral radiance in the form of a functional of the emission spectral radiance. Subsequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and perturbed by random errors. The results demonstrate that the second-order Tikhonov regularization method, together with choice of the regularization parameter by the L-curve maximum-curvature criterion, provides solutions in good agreement with the assumed model emission functions.
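The L-curve maximum-curvature criterion mentioned above can be sketched generically: sweep the regularization parameter, record log residual norm versus log solution norm, and pick the point of maximum discrete curvature. The forward operator below is a hypothetical smoothing kernel, not the sky-radiance model of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ill-posed system (stand-in for the radiance forward model).
n = 30
A = np.array([[np.exp(-0.5 * (i - j) ** 2 / 4.0) for j in range(n)]
              for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-3 * rng.normal(size=n)

lams = np.logspace(-8, 0, 40)
rho, eta = [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    rho.append(np.log(np.linalg.norm(A @ x - b)))   # log residual norm
    eta.append(np.log(np.linalg.norm(x)))           # log solution norm

# Discrete curvature of the L-curve (rho, eta); maximum curvature marks the
# "corner" balancing data fit against solution size.
rho, eta = np.array(rho), np.array(eta)
d1r, d1e = np.gradient(rho), np.gradient(eta)
d2r, d2e = np.gradient(d1r), np.gradient(d1e)
kappa = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
lam_opt = lams[np.nanargmax(kappa)]
```

The paper's second-order Tikhonov variant replaces the identity in the normal equations with a discrete second-derivative operator; the corner-selection logic is unchanged.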
Dynamics from a mathematical model of a two-state gas laser
NASA Astrophysics Data System (ADS)
Kleanthous, Antigoni; Hua, Tianshu; Manai, Alexandre; Yawar, Kamran; Van Gorder, Robert A.
2018-05-01
Motivated by recent work in the area, we consider the behavior of solutions to a nonlinear PDE model of a two-state gas laser. We first review the derivation of the two-state gas laser model, before deriving a non-dimensional model given in terms of coupled nonlinear partial differential equations. We then classify the steady states of this system, in order to determine the possible long-time asymptotic solutions to this model, as well as corresponding stability results, showing that the only uniform steady state (the zero motion state) is unstable, while a linear profile in space is stable. We then provide numerical simulations for the full unsteady model. We show for a wide variety of initial conditions that the solutions tend toward the stable linear steady state profiles. We also consider traveling wave solutions, and determine the unique wave speed (in terms of the other model parameters) which allows wave-like solutions to exist. Despite some similarities between the model and the inviscid Burgers equation, the solutions we obtain are much more regular than the solutions to the inviscid Burgers equation, with no evidence of shock formation or loss of regularity.
NASA Astrophysics Data System (ADS)
Mädler, Thomas
2013-05-01
Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.
An approach for the regularization of a power flow solution around the maximum loading point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kataoka, Y.
1992-08-01
In the conventional power flow solution, the boundary conditions are directly specified by the active and reactive power at each node, so that the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method for obtaining wide-range regularity by modifying the conventional power flow solution method, thereby eliminating the singular point or shifting it to the region with voltage lower than that of the maximum loading point. The continuous tracing of V-P curves, including the maximum loading point, is thereby realized. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.
A regularity condition and temporal asymptotics for chemotaxis-fluid equations
NASA Astrophysics Data System (ADS)
Chae, Myeongju; Kang, Kyungkeun; Lee, Jihoon; Lee, Ki-Ahm
2018-02-01
We consider two-dimensional chemotaxis equations coupled to the Navier-Stokes equations. We present a new regularity criterion that is localized in a neighborhood of each point. Secondly, we establish temporal decays of the regular solutions under the assumption that the initial mass of the biological cell density is sufficiently small. Both results are improvements of previously known results given in Chae et al (2013 Discrete Continuous Dyn. Syst. A 33 2271-97) and Chae et al (2014 Commun. PDE 39 1205-35).
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, a Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with reproducing kernel structures adapted to the metric of that solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that combine kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES, and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
Lipschitz regularity results for nonlinear strictly elliptic equations and applications
NASA Astrophysics Data System (ADS)
Ley, Olivier; Nguyen, Vinh Duc
2017-10-01
Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic, for instance). For equations with superquadratic growth power in the gradient, one usually uses weak Bernstein-type arguments, which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth power of the Hamiltonian with respect to the gradient variable, using ideas coming from the Ishii-Lions method. We use these bounds to solve an ergodic problem and to study the regularity and the large time behavior of the solution of the evolution equation.
The rotation axis for stationary and axisymmetric space-times
NASA Astrophysics Data System (ADS)
van den Bergh, N.; Wils, P.
1985-03-01
A set of 'extended' regularity conditions is discussed which have to be satisfied on the rotation axis if the latter is assumed to be also an axis of symmetry. For a wide class of energy-momentum tensors these conditions can only hold at the origin of the Weyl canonical coordinate. For static and cylindrically symmetric space-times the conditions can be derived from the regularity of the Riemann tetrad coefficients on the axis. For stationary space-times, however, the extended conditions do not necessarily hold, even when 'elementary flatness' is satisfied and when there are no curvature singularities on the axis. The result by Davies and Caplan (1971) for cylindrically symmetric stationary Einstein-Maxwell fields is generalized by proving that only Minkowski space-time and a particular magnetostatic solution possess a regular axis of rotation. Further, several sets of solutions for neutral and charged, rigidly and differentially rotating dust are discussed.
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor-product kernel and lower-bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multiparameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
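The SVD filtering step mentioned above can be sketched generically: project the data onto the leading singular subspace of the kernel, which compresses the problem and discards noise-dominated components, then regularize in the compressed space. The kernel, truncation threshold, and regularization parameter below are illustrative assumptions, not the I2DUPEN settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D stand-in for the NMR kernel (the real problem uses a 2D
# tensor-product kernel): a smooth matrix with fast-decaying singular values.
n = 25
K = np.array([[np.exp(-((i - j) / 3.0) ** 2) for j in range(n)]
              for i in range(n)])
f_true = np.sin(np.linspace(0, np.pi, n))
s_data = K @ f_true + 1e-3 * rng.normal(size=n)

# SVD filter: keep only singular components above a relative threshold.
U, sv, Vt = np.linalg.svd(K)
k = int(np.sum(sv > 1e-6 * sv[0]))   # truncation level (heuristic)
data_k = U[:, :k].T @ s_data         # compressed data

# Tikhonov solve in the compressed space:
# minimize ||diag(sv_k) c - data_k||^2 + lam ||c||^2, then map back.
lam = 1e-4
c = sv[:k] * data_k / (sv[:k] ** 2 + lam)
f_hat = Vt[:k].T @ c
```

The compression both stabilizes the inversion and shrinks the least squares system, which is the point of applying the filters before the (multiparameter) regularization.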
NASA Astrophysics Data System (ADS)
Padhi, Amit; Mallick, Subhashis
2014-03-01
Inversion of band- and offset-limited single-component (P-wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation, but add to the complexity of the inversion algorithm because they require simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however nonlinear. They have non-unique solutions, known as Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of Pareto-optimal solutions, which in turn may be biased by the choice of weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology, using a non-dominated sorting genetic algorithm, for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data, both when the subsurface is considered isotropic and when it is transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters.
Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure of extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not just limited to seismic inversion but it could be used to invert different data types not only requiring multiple objectives but also multiple physics to describe them.
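The core primitive of a non-dominated sorting genetic algorithm is extracting the set of non-dominated (Pareto-optimal) candidates at each generation. A minimal sketch, with hypothetical misfit values standing in for the per-component seismic objectives:

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated points in F (rows = candidate models,
    columns = objectives, all to be minimized). A point is dominated if
    some other point is no worse in every objective and strictly better
    in at least one."""
    F = np.asarray(F, dtype=float)
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Two misfit objectives, e.g. one per data component (values hypothetical):
F = [[1.0, 5.0], [2.0, 2.0], [5.0, 1.0], [4.0, 4.0]]
print(pareto_front(F))  # [0, 1, 2]: the fourth model is dominated by the second
```

In the full algorithm this sort is applied repeatedly (peeling off successive fronts) to rank the population before selection and crossover.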
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we obtain the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on Lie symmetry, the extended tanh method, and the homotopy perturbation method. In the first part, we obtain the symmetries of the (2 + 1)-dimensional KP equation based on the Wu-differential characteristic set algorithm and reduce the equation. In the second part, we construct abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions, and rational functions, respectively. It should be noted that when the parameters take special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain approximate analytic solutions based on four kinds of initial conditions.
An approximate analytical solution for interlaminar stresses in angle-ply laminates
NASA Technical Reports Server (NTRS)
Rose, Cheryl A.; Herakovich, Carl T.
1991-01-01
An improved approximate analytical solution for interlaminar stresses in finite width, symmetric, angle-ply laminated coupons subjected to axial loading is presented. The solution is based upon statically admissible stress fields which take into consideration local property mismatch effects and global equilibrium requirements. Unknown constants in the admissible stress states are determined through minimization of the complementary energy. Typical results are presented for through-the-thickness and interlaminar stress distributions for angle-ply laminates. It is shown that the results represent an improved approximate analytical solution for interlaminar stresses.
Direct application of Padé approximant for solving nonlinear differential equations.
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario
2014-01-01
This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, as a tool to obtain a power series solution to post-treat with the Padé approximant.
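The basic post-treatment being avoided here, turning a truncated power series into a rational approximant, can be illustrated with SciPy's `pade` helper. The exponential series below is a generic stand-in for a series produced by any semi-analytical method:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to order 4 (ascending powers).
an = [1.0 / math.factorial(k) for k in range(5)]

# [2/2] Pade approximant: exp(x) ~ p(x) / q(x), with p and q degree-2
# polynomials matching the series through order 4.
p, q = pade(an, 2)

x = 0.5
approx = p(x) / q(x)
err = abs(approx - math.exp(x))  # smaller than the order-4 Taylor error here
```

The rational form typically extends the region of useful accuracy well beyond the radius where the raw truncated series degrades, which is why Padé post-treatment (or the paper's direct procedure) pays off.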
NASA Astrophysics Data System (ADS)
Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo
2006-03-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayissi, Raoul Domingo, E-mail: raoulayissi@yahoo.fr; Noutchegueme, Norbert, E-mail: nnoutch@yahoo.fr
Global solutions regular for the Einstein-Boltzmann equation on a magnetized Bianchi type-I cosmological model with the cosmological constant are investigated. We suppose that the metric is locally rotationally symmetric. The Einstein-Boltzmann equation has been already considered by some authors. But, in general Bancel and Choquet-Bruhat [Ann. Henri Poincaré XVIII(3), 263 (1973); Commun. Math. Phys. 33, 83 (1973)], they proved only the local existence, and in the case of the nonrelativistic Boltzmann equation. Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academymore » of Science, 2000), Vol. 52] obtained a global existence result, for the relativistic Boltzmann equation coupled with the Einstein equations and using the Yosida operator, but confusing unfortunately with the nonrelativistic case. Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)] and Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)], have obtained a global solution in time, but still using the Yosida operator and considering only the uncharged case. Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)] also proved a global existence of solutions to the Maxwell-Boltzmann system using the characteristic method. In this paper, we obtain using a method totally different from those used in the works of Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)], Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)], Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)], and Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 
52] the global in time existence and uniqueness of a regular solution to the Einstein-Maxwell-Boltzmann system with the cosmological constant. We define and we use the weighted Sobolev separable spaces for the Boltzmann equation; some special spaces for the Einstein equations, then we clearly display all the proofs leading to the global existence theorems.« less
NASA Astrophysics Data System (ADS)
Ayissi, Raoul Domingo; Noutchegueme, Norbert
2015-01-01
Regular global solutions of the Einstein-Boltzmann equation on a magnetized Bianchi type-I cosmological model with cosmological constant are investigated. We suppose that the metric is locally rotationally symmetric. The Einstein-Boltzmann equation has already been considered by several authors. Bancel and Choquet-Bruhat [Ann. Henri Poincaré XVIII(3), 263 (1973); Commun. Math. Phys. 33, 83 (1973)] proved only local existence, and only for the nonrelativistic Boltzmann equation. Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 52] obtained a global existence result for the relativistic Boltzmann equation coupled with the Einstein equations using the Yosida operator, but unfortunately conflated it with the nonrelativistic case. Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)] and Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)] obtained a solution global in time, but still using the Yosida operator and considering only the uncharged case. Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)] also proved global existence of solutions to the Maxwell-Boltzmann system using the method of characteristics. In this paper, using a method entirely different from those of the works just cited, we obtain the global-in-time existence and uniqueness of a regular solution to the Einstein-Maxwell-Boltzmann system with cosmological constant. We define and use weighted separable Sobolev spaces for the Boltzmann equation and special spaces for the Einstein equations, and we display in full all the proofs leading to the global existence theorems.
Sparse Poisson noisy image deblurring.
Carlavan, Mikael; Blanc-Féraud, Laure
2012-04-01
Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, a very popular technique for 3-D imaging of living biological specimens that gives images with very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and we focus on techniques that introduce an explicit prior on the solution. One difficulty of these techniques is setting the value of the parameter that weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter under Poisson noise; it is therefore often set manually to give the best visual results. We present two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators that takes advantage of confocal images. Building on these estimators, we then propose to express the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is expressed directly using the anti-log-likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors such as total variation and wavelet transforms. Among these wavelet transforms, we focus especially on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
Time-dependent exact solutions of the nonlinear Kompaneets equation
NASA Astrophysics Data System (ADS)
Ibragimov, N. H.
2010-12-01
Time-dependent exact solutions of the Kompaneets photon diffusion equation are obtained for several approximations of this equation. One of the approximations describes the case when the induced scattering is dominant. In this case, the Kompaneets equation has an additional symmetry which is used for constructing some exact solutions as group invariant solutions.
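For reference, the Kompaneets photon diffusion equation discussed above has the standard dimensionless form (with x the dimensionless photon energy and n(x, τ) the occupation number, τ a scaled Compton time):

```latex
\frac{\partial n}{\partial \tau}
  = \frac{1}{x^{2}}\,\frac{\partial}{\partial x}
    \left[ x^{4}\left( \frac{\partial n}{\partial x} + n + n^{2} \right) \right].
```

The approximation in which induced scattering is dominant keeps the n² term and drops ∂n/∂x + n inside the bracket; it is this reduced equation that acquires the additional symmetry used to construct group-invariant solutions.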
Polarimetric image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Valenzuela, John R.
In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. 
Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
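As a minimal illustration of the measurement model behind such Stokes estimators (a sketch, not the dissertation's reconstruction code; the polarizer angles and Stokes vector below are hypothetical), the linear Stokes parameters can be estimated from polarizer-intensity measurements by least squares:

```python
import numpy as np

# A linear polarizer at angle theta passes
# I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta)).
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
A = 0.5 * np.stack([np.ones_like(angles),
                    np.cos(2 * angles),
                    np.sin(2 * angles)], axis=1)

s_true = np.array([2.0, 0.5, -0.3])  # (S0, S1, S2), hypothetical scene
rng = np.random.default_rng(1)
meas = A @ s_true + 0.01 * rng.standard_normal(4)  # noisy intensities

# "Stokes estimator": solve for the Stokes vector directly by least squares.
s_hat, *_ = np.linalg.lstsq(A, meas, rcond=None)
print(s_hat)
```

In the full imaging problem each pixel carries such a system coupled with the blur operator, which is where the choice between estimating intensities first or Stokes parameters directly becomes consequential.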
Vacuum polarization in the field of a multidimensional global monopole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grats, Yu. V., E-mail: grats@phys.msu.ru; Spirin, P. A.
2016-11-15
An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values ⟨ϕ²⟩_ren and ⟨T_00⟩_ren have been obtained by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.
Strength conditions for the elastic structures with a stress error
NASA Astrophysics Data System (ADS)
Matveev, A. D.
2017-10-01
As is known, constraints (strength conditions) are established for the safety factor of elastic structures and structural components of a particular class, e.g. aviation structures: the safety factor values of such structures must lie within a given range. It should be noted that these constraints are set for safety factors corresponding to analytical (exact) solutions of the elasticity problems formulated for the structures. Developing analytical solutions for most structures, especially those of irregular shape, involves great difficulty. Approximate approaches to elasticity problems, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells, are widely used for a great number of structures. Technical theories based on hypotheses give rise to approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static strength calculations with a small specified range for the safety factor, the application of technical (strength-of-materials) solutions is therefore difficult. However, there exist numerical methods for developing approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem; these conditions take the stress error estimate into account. It is shown that, in order to fulfill the specified strength conditions for the safety factor of a given structure corresponding to an exact solution, it suffices to fulfill the adjusted strength conditions for the safety factor corresponding to an approximate solution. The stress error estimate underlying the adjusted strength conditions is determined from the specified strength conditions. Adjusted strength conditions expressed in terms of allowable stresses are also suggested; they make it possible to determine the set of approximate solutions that satisfy the specified strength conditions. Examples are given of specified strength conditions satisfied using technical (strength-of-materials) solutions, as well as examples of strength conditions satisfied using approximate solutions with a small error.
Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A
2018-02-01
Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.
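The idea of separate quadratic roughness penalties for the real and imaginary parts can be sketched in one dimension. This toy uses an identity system matrix and a first-difference roughness operator, so each part has its own closed-form solution; it is only an illustration of the penalty structure, not the paper's fMRI reconstruction or its FFT-based impulse-response calculation.

```python
import numpy as np

n = 64
# First-difference roughness operator D and penalty matrix R = D^T D.
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
R = D.T @ D

def denoise(y, beta_re, beta_im):
    # Quadratic penalized LS with *separate* roughness penalties on the
    # real and imaginary parts; each part solves (I + beta R) x = y_part.
    solve = lambda part, beta: np.linalg.solve(np.eye(n) + beta * R, part)
    return solve(y.real, beta_re) + 1j * solve(y.imag, beta_im)

rng = np.random.default_rng(2)
y = (np.exp(1j * np.linspace(0, np.pi, n))
     + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
x = denoise(y, beta_re=1.0, beta_im=10.0)  # smooth the imaginary part harder
print(np.abs(x).mean())
```

With a nontrivial system matrix the two parts no longer decouple this cleanly, which is why the paper needs a dedicated fast calculation for the local impulse responses.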
Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II
2016-09-01
of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for...plays a central role in many applications, including image processing, computer vision and statistics etc. [13, 17, 20, 24]. The EMD is a metric defined
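In one dimension the EMD has a closed form (the L1 distance between the cumulative distributions), which SciPy exposes directly; a quick sanity check of the metric the paper reformulates:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two (unnormalized) histograms supported on the points 0..3.
support = np.arange(4)
u = [1.0, 0.0, 0.0, 0.0]  # all mass at 0
v = [0.0, 0.0, 0.0, 1.0]  # all mass at 3

# For 1-D distributions the EMD equals the L1 distance between the CDFs;
# here one unit of mass must travel distance 3.
d = wasserstein_distance(support, support, u, v)
print(d)
```

In higher dimensions no such closed form exists, which is what motivates reformulations of the transport problem as an L1-type minimization.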
NASA Astrophysics Data System (ADS)
Nekrasova, N. A.; Kurbatova, S. V.; Zemtsova, M. N.
2016-12-01
Regularities of the sorption of 1,2,3,4-tetrahydroquinoline derivatives on octadecylsilyl silica gel and porous graphitic carbon from aqueous acetonitrile solutions were investigated. The effect of the molecular structure and physicochemical parameters of the sorbates on their retention characteristics under conditions of reversed-phase HPLC is analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bildhauer, Michael, E-mail: bibi@math.uni-sb.de; Fuchs, Martin, E-mail: fuchs@math.uni-sb.de
2012-12-15
We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.
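The contrast drawn above between TV (linear growth, degenerate ellipticity) and its smoothed variants can be illustrated with a Huber-type regularizer, a standard surrogate that is quadratic near zero and linear at infinity. This is a generic sketch, not one of the specific models analyzed in the paper:

```python
import numpy as np

def huber(t, eps=0.1):
    # Smoothed absolute value: quadratic for |t| <= eps (elliptic near 0),
    # linear beyond (same growth at infinity as TV).
    a = np.abs(t)
    return np.where(a <= eps, t ** 2 / (2 * eps), a - eps / 2)

def regularizer(img, phi):
    # Apply the integrand phi to the discrete gradient and sum.
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    return float(phi(gx).sum() + phi(gy).sum())

img = np.tile(np.linspace(0, 1, 16), (16, 1))
tv_val = regularizer(img, np.abs)    # exact TV integrand |t|
huber_val = regularizer(img, huber)  # linear-growth variant, elliptic at 0
print(tv_val, huber_val)
```

The Huber integrand lies below |t| everywhere, so it penalizes small gradients less harshly while retaining linear growth; the weak ellipticity at zero is what improves the analytic properties of minimizers.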
NASA Technical Reports Server (NTRS)
Burkhart, G. R.; Chen, J.
1989-01-01
The integrodifferential equation describing the linear tearing instability in the bi-Maxwellian neutral sheet is solved without approximating the particle orbits or the eigenfunction psi. Results of this calculation are presented. Comparison between the exact solution and the three-region approximation motivates the piecewise-straight-line approximation, a simplification that allows faster solution of the integrodifferential equation, yet retains the important features of the exact solution.
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of the time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of the Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given, and the results indicate that the FVIM is highly accurate, more efficient, and more convenient for solving time-fractional advection-dispersion equations. PMID:24578662
Long-term and seasonal Caspian Sea level change from satellite gravity and altimeter measurements
NASA Astrophysics Data System (ADS)
Chen, J. L.; Wilson, C. R.; Tapley, B. D.; Save, H.; Cretaux, Jean-Francois
2017-03-01
We examine recent Caspian Sea level change by using both satellite radar altimetry and satellite gravity data. The altimetry record for 2002-2015 shows a declining level at a rate that is approximately 20 times greater than the rate of global sea level rise. Seasonal fluctuations are also much larger than in the world oceans. With a clearly defined geographic region and dominant signal magnitude, variations in the sea level and associated mass changes provide an excellent way to compare various approaches for processing satellite gravity data. An altimeter time series derived from several successive satellite missions is compared with mass measurements inferred from Gravity Recovery and Climate Experiment (GRACE) data in the form of both spherical harmonic (SH) and mass concentration (mascon) solutions. After correcting for spatial leakage in GRACE SH estimates by constrained forward modeling and accounting for steric and terrestrial water processes, GRACE and altimeter observations are in complete agreement at seasonal and longer time scales, including linear trends. This demonstrates that removal of spatial leakage error in GRACE SH estimates is both possible and critical to improving their accuracy and spatial resolution. Excellent agreement between GRACE and altimeter estimates also provides confirmation of steric Caspian Sea level change estimates. GRACE mascon estimates (both the Jet Propulsion Laboratory (JPL) coastline resolution improvement version 2 solution and the Center for Space Research (CSR) regularized) are also affected by leakage error. After leakage corrections, both JPL and CSR mascon solutions also agree well with altimeter observations. However, accurate quantification of leakage bias in GRACE mascon solutions is a more challenging problem.
Analytical theory of mesoscopic Bose-Einstein condensation in an ideal gas
NASA Astrophysics Data System (ADS)
Kocharovsky, Vitaly V.; Kocharovsky, Vladimir V.
2010-03-01
We find the universal structure and scaling of the Bose-Einstein condensation (BEC) statistics and thermodynamics (Gibbs free energy, average energy, heat capacity) for a mesoscopic canonical-ensemble ideal gas in a trap with an arbitrary number of atoms, any volume, and any temperature, including the whole critical region. We identify a universal constraint-cutoff mechanism that makes BEC fluctuations strongly non-Gaussian and is responsible for all unusual critical phenomena of the BEC phase transition in the ideal gas. The main result is an analytical solution to the problem of critical phenomena. It is derived by, first, calculating analytically the universal probability distribution of the noncondensate occupation, or a Landau function, and then using it for the analytical calculation of the universal functions for the particular physical quantities via the exact formulas which express the constraint-cutoff mechanism. We find asymptotics of that analytical solution as well as its simple analytical approximations which describe the universal structure of the critical region in terms of the parabolic cylinder or confluent hypergeometric functions. The obtained results for the order parameter, all higher-order moments of BEC fluctuations, and thermodynamic quantities perfectly match the known asymptotics outside the critical region for both low and high temperature limits. We suggest two- and three-level trap models of BEC and find their exact solutions in terms of the cutoff negative binomial distribution (which tends to the cutoff gamma distribution in the continuous limit) and the confluent hypergeometric distribution, respectively. Also, we present an exactly solvable cutoff Gaussian model of BEC in a degenerate interacting gas. All these exact solutions confirm the universality and constraint-cutoff origin of the strongly non-Gaussian BEC statistics. 
We introduce a regular refinement scheme for the condensate statistics approximations on the basis of the infrared universality of higher-order cumulants and the method of superposition and show how to model BEC statistics in the actual traps. In particular, we find that the three-level trap model with matching the first four or five cumulants is enough to yield remarkably accurate results for all interesting quantities in the whole critical region. We derive an exact multinomial expansion for the noncondensate occupation probability distribution and find its high-temperature asymptotics (Poisson distribution) and corrections to it. Finally, we demonstrate that the critical exponents and a few known terms of the Taylor expansion of the universal functions, which were calculated previously from fitting the finite-size simulations within the phenomenological renormalization-group theory, can be easily obtained from the presented full analytical solutions for the mesoscopic BEC as certain approximations in the close vicinity of the critical point.
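The constraint-cutoff mechanism - truncating an unconstrained occupation distribution at the total atom number N and renormalizing - can be sketched for the negative binomial case mentioned above. The parameter values here are hypothetical and chosen only so that the cutoff visibly matters:

```python
import numpy as np
from scipy.stats import nbinom

N = 50          # total number of atoms: the hard cutoff on occupation
r, p = 5, 0.15  # hypothetical negative-binomial parameters
k = np.arange(N + 1)

# Constraint-cutoff: keep only probabilities for noncondensate occupation
# n <= N and renormalize, as the canonical-ensemble constraint demands.
pmf = nbinom.pmf(k, r, p)
pmf_cut = pmf / pmf.sum()

mean_uncut = nbinom.mean(r, p)
mean_cut = float((k * pmf_cut).sum())
print(mean_uncut, mean_cut)
```

Cutting off the upper tail necessarily lowers the mean occupation and skews the distribution, which is the elementary origin of the strongly non-Gaussian fluctuations near the transition.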
A simple homogeneous model for regular and irregular metallic wire media samples
NASA Astrophysics Data System (ADS)
Kosulnikov, S. Y.; Mirmoosa, M. S.; Simovski, C. R.
2018-02-01
To simplify the solution of electromagnetic problems with wire media samples, it is reasonable to treat them as samples of a homogeneous material without spatial dispersion. Accounting for spatial dispersion implies additional boundary conditions and makes the solution of boundary problems difficult, especially if the sample is not an infinitely extended layer. Moreover, for a novel type of wire media - arrays of randomly tilted wires - a spatially dispersive model has not been developed. Here, we introduce a simple heuristic model of wire media samples shaped as bricks. Our model covers wire media of both regularly and irregularly stretched wires.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manzini, Gianmarco
2012-07-13
We develop and analyze a new family of virtual element methods on unstructured polygonal meshes for the diffusion problem in primal form, which use arbitrarily regular discrete spaces V_h ⊂ C^α, α ∈ ℕ. The degrees of freedom are (a) solution and derivative values of various degree at suitable nodes and (b) solution moments inside polygons. The convergence of the method is proven theoretically and an optimal error estimate is derived. The connection with the Mimetic Finite Difference method is also discussed. Numerical experiments confirm the convergence rate that is expected from the theory.
The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau Equation II. Contraction Methods
NASA Astrophysics Data System (ADS)
Ginibre, J.; Velo, G.
We continue the study of the initial value problem for the complex Ginzburg-Landau equation
Impact of managed moorland burning on DOC concentrations in soil solutions and stream waters
NASA Astrophysics Data System (ADS)
Palmer, Sheila; Wearing, Catherine; Johnson, Kerrylyn; Holden, Joseph; Brown, Lee
2013-04-01
In the UK uplands, prescribed burning of moorland vegetation is a common practice to maintain suitable habitats for game birds. Many of these landscapes are in catchments covered by significant deposits of blanket peat (typically one metre or more in depth). There is growing interest in the effect of land management on the stability of these peatland carbon stores, and their contribution to dissolved and particulate organic carbon in surface waters (DOC and POC, respectively) and subsequent effects on stream biogeochemistry and ecology. Yet there are surprisingly few published catchment-scale studies on the effect of moorland burning on DOC and POC. As part of the EMBER project, stream chemistry data were collected approximately monthly in ten upland blanket peat catchments in the UK, five of which acted as controls and were not subject to burning. The other five catchments were subject to a history of prescribed burning, typically in small patches (300-900 m2) in rotations of 8-25 years. Soil solution DOC was also monitored at four depths at two intensively studied sites (one regularly burned and one control). At the two intensive sites, soil solution DOC was considerably higher at the burned site, particularly in surface solutions where concentrations in excess of 100 mg/L were recorded on several occasions (median 37 mg/L over 18 months). The high soil solution DOC concentrations at the burned site occurred in the most recently burned plots (less than 2 years prior to start of sampling) and the lowest DOC concentrations were observed in plots burned 15-25 years previously. On average, median stream DOC and POC concentrations were approximately 43% and 35% higher respectively in burned catchments relative to control catchments. All streams exhibited peak DOC in late summer/early autumn with higher peak DOC concentrations in burned catchments (20-66 mg/L) compared to control catchments (18-54 mg/L). 
During winter months, DOC concentrations were low in control catchments (typically less than 15 mg/L) but were highly variable in burned catchments (9-40 mg/L), implying some instability of peat carbon stores and/or fluctuation in source. The results offer strong evidence for an impact of burning on the delivery of DOC to streams, possibly through increased surface run-off from bare or partially vegetated patches.
NASA Astrophysics Data System (ADS)
Enciso, Alberto; Poyato, David; Soler, Juan
2018-05-01
Strong Beltrami fields, that is, vector fields in three dimensions whose curl is the product of the field itself by a constant factor, have long played a key role in fluid mechanics and magnetohydrodynamics. In particular, they are the kind of stationary solutions of the Euler equations where one has been able to show the existence of vortex structures (vortex tubes and vortex lines) of arbitrarily complicated topology. On the contrary, there are very few results about the existence of generalized Beltrami fields, that is, divergence-free fields whose curl is the field times a non-constant function. In fact, generalized Beltrami fields (which are also stationary solutions to the Euler equations) have been recently shown to be rare, in the sense that for "most" proportionality factors there are no nontrivial Beltrami fields of high enough regularity (e.g., of class {C^{6,α}}), not even locally. Our objective in this work is to show that, nevertheless, there are "many" Beltrami fields with non-constant factor, even realizing arbitrarily complicated vortex structures. This fact is relevant in the study of turbulent configurations. The core results are an "almost global" stability theorem for strong Beltrami fields, which ensures that a global strong Beltrami field with suitable decay at infinity can be perturbed to get "many" Beltrami fields with non-constant factor of arbitrarily high regularity and defined in the exterior of an arbitrarily small ball, and a "local" stability theorem for generalized Beltrami fields, which is an analogous perturbative result which is valid for any kind of Beltrami field (not just with a constant factor) but only applies to small enough domains. The proof relies on an iterative scheme of Grad-Rubin type. 
For this purpose, we study the Neumann problem for the inhomogeneous Beltrami equation in exterior domains via a boundary integral equation method and we obtain Hölder estimates, a sharp decay at infinity and some compactness properties for these sequences of approximate solutions. Some of the parts of the proof are of independent interest.
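In symbols, the two classes of fields contrasted above are:

```latex
\operatorname{curl} v = \lambda\, v
  \quad (\lambda \neq 0 \ \text{constant: strong Beltrami field}),
\qquad
\operatorname{curl} v = f(x)\, v, \ \ \operatorname{div} v = 0
  \quad (f \ \text{non-constant: generalized Beltrami field}).
```

In the strong case the divergence-free condition is automatic: taking the divergence of curl v = λv gives 0 = λ div v, so div v = 0 whenever λ ≠ 0; for a non-constant factor f it must be imposed separately, and it forces v · ∇f = 0, the compatibility condition that makes generalized Beltrami fields so rare.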
A Survey of Endodontic Practices among Dentists in Burkina Faso.
Kaboré, Wendpoulomdé Ad; Chevalier, Valérie; Gnagne-Koffi, Yolande; Ouédraogo, Carole Dw; Ndiaye, Diouma; Faye, Babacar
2017-08-01
Dental surgeons must be aware of the most appropriate endodontic treatments and how to properly conduct them. The aim of this study was to evaluate the knowledge of dental surgeons in Burkina Faso in terms of endodontic treatment procedures. This descriptive, cross-sectional study was performed through a questionnaire during the regular annual conference of the National Board of Dental Surgeons of Burkina Faso, held on February 27 and 28, 2015 in Ouagadougou. A total of 33 practitioners (52.4% of the dental surgeons of Burkina Faso) took part in the study. The majority of them (90.9%) used sodium hypochlorite as their preferred irrigation solution. Nearly half of the dental surgeons (48.5%) did not know how to use a permeabilization file, and most did not make use of nickel-titanium (NiTi) mechanized instruments (78.8%) or rubber dams (93.9%). Approximately two-thirds of participants did not perform file-in-place radiography (66.7%) or control radiography of the canal obturation (63.6%). The adjusted single-cone technique was the most commonly used (87.9%). This study highlights that the majority of dental surgeons in Burkina Faso are not using the currently recommended endodontic procedures to perform obturations. Dental surgeons in Burkina Faso must commit to regularly upgrading their knowledge and techniques. Key words: Burkina Faso, Cross-sectional study, Dental surgeons, Endodontic treatments, Protocol adherence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Y., E-mail: yuezhao@sjtu.edu.cn
2017-02-15
Epitaxial growth of oxide thin films has attracted much interest because of their broad applications in various fields. In this study, we investigated the microstructure of textured Gd{sub 2}Zr{sub 2}O{sub 7} films grown on (001)〈100〉 oriented NiW alloy substrates by a chemical solution deposition (CSD) method. The aging effect of the precursor solution on defect formation was thoroughly investigated. A slight difference was observed between the as-obtained and aged precursor solutions with respect to the phase purity and global texture of films prepared using these solutions. However, the surface morphologies are different, i.e., some regular-shaped regions (mainly hexagonal or dodecagonal) were observed on the film prepared using the as-obtained precursor, whereas the film prepared using the aged precursor exhibits a homogeneous structure. Electron backscatter diffraction and scanning electron microscopy analyses showed that the Gd{sub 2}Zr{sub 2}O{sub 7} grains present within the regular-shaped regions are polycrystalline, whereas those present in the surrounding are epitaxial. Some polycrystalline regions ranging from several micrometers to several tens of micrometers grew across the NiW grain boundaries underneath. To understand this phenomenon, the properties of the precursors and the corresponding xerogel were studied by Fourier transform infrared spectroscopy and coupled thermogravimetry/differential thermal analysis. The results showed that both solutions mainly contain small Gd−Zr−O clusters obtained by the reaction of zirconium acetylacetonate with propionic acid during the precursor synthesis. The regular-shaped regions were probably formed by large Gd−Zr−O frameworks with a metastable structure in the solution with limited aging time. This study demonstrates the importance of precise control of the chemical reaction path to enhance the stability and homogeneity of the precursors of the CSD route. 
- Highlights: •We investigate microstructure of Gd{sub 2}Zr{sub 2}O{sub 7} films grown by a chemical solution route. •The aging effect of precursor solution on formation of surface defect was thoroughly studied. •Gd−Zr−O clusters are present in the precursor solutions.
LP-stability for the strong solutions of the Navier-Stokes equations in the whole space
NASA Astrophysics Data System (ADS)
Beirão da Veiga, H.; Secchi, P.
1985-10-01
We consider the motion of a viscous fluid filling the whole space R3, governed by the classical Navier-Stokes equations (1). The existence of global (in time) regular solutions for that system of nonlinear partial differential equations is still an open problem. From both the mathematical and the physical points of view, an interesting property is the stability (or not) of the (eventual) global regular solutions. Here, we assume that v1(t,x) is a solution with initial data a1(x). For small perturbations of a1, we want the solution v1(t,x) to be only slightly perturbed as well. Due to viscosity, it is even expected that the perturbed solution v2(t,x) approaches the unperturbed one as time goes to +infinity. This is precisely the result proved in this paper. To measure the distance between v1(t,x) and v2(t,x) at each time t, suitable norms (LP-norms) are introduced. For fluids filling a bounded vessel, exponential decay of the above distance is expected. Such a strong result is not to be expected for fluids filling the entire space.
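The system referred to as (1) is the classical incompressible Navier-Stokes system, which in standard notation (with ν > 0 the kinematic viscosity, p the pressure, and a1 the initial data) reads:

```latex
\partial_t v + (v \cdot \nabla) v - \nu \Delta v + \nabla p = 0,
\qquad \operatorname{div} v = 0,
\qquad v(0, x) = a_1(x), \quad x \in \mathbb{R}^3 .
```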
Dynamical black holes in low-energy string theory
NASA Astrophysics Data System (ADS)
Aniceto, Pedro; Rocha, Jorge V.
2017-05-01
We investigate time-dependent spherically symmetric solutions of the four-dimensional Einstein-Maxwell-axion-dilaton system, with the dilaton coupling that occurs in low-energy effective heterotic string theory. A class of dilaton-electrovacuum radiating solutions with a trivial axion, previously found by Güven and Yörük, is re-derived in a simpler manner and its causal structure is clarified. It is shown that such dynamical spacetimes featuring apparent horizons do not possess a regular light-like past null infinity or future null infinity, depending on whether they are radiating or accreting. These solutions are then extended in two ways. First we consider a Vaidya-like generalisation, which introduces a null dust source. Such spacetimes are used to test the status of cosmic censorship in the context of low-energy string theory. We prove that — within this family of solutions — regular black holes cannot evolve into naked singularities by accreting null dust, unless standard energy conditions are violated. Secondly, we employ S-duality to derive new time-dependent dyon solutions with a nontrivial axion turned on. Although they share the same causal structure as their Einstein-Maxwell-dilaton counterparts, these solutions possess both electric and magnetic charges.
Evaluation of global equal-area mass grid solutions from GRACE
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron
2015-04-01
The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.
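The abstract does not give the CSR implementation details, but the core of a Tikhonov-regularized inversion can be sketched as a damped least-squares solve. The matrix `G`, data `d`, and damping `alpha` below are illustrative stand-ins, not the GRACE range-rate operators:

```python
import numpy as np

def tikhonov_solve(G, d, alpha):
    """Solve min ||G m - d||^2 + alpha^2 ||m||^2 via the regularized normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha**2 * np.eye(n), G.T @ d)

# Ill-conditioned toy problem: two near-collinear columns make the
# unregularized normal equations numerically unstable, which is the
# situation the regularization term is meant to stabilize.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
G = np.column_stack([np.ones(50), t, t + 1e-8 * rng.standard_normal(50)])
m_true = np.array([1.0, 2.0, -1.0])
d = G @ m_true + 1e-3 * rng.standard_normal(50)

m_reg = tikhonov_solve(G, d, alpha=1e-2)
```

The damping trades a small amount of fit error for a stable, bounded-norm solution; choosing `alpha` is a separate problem (see, e.g., the discrepancy principle or the L-curve).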
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (or, say, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
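The paper's singularity-free projection matrix is not reproduced in the abstract; the sketch below instead shows the standard tangent-space projector P = I − Jᵀ(JJᵀ)⁻¹J and the projected gradient flow it drives. This projector becomes singular exactly when the rows of the constraint Jacobian J are linearly dependent, which is the failure mode the proposed method avoids. The constraint and objective are illustrative:

```python
import numpy as np

def tangent_projection(J):
    """Standard projector onto the null space of the constraint Jacobian J.
    This matrix is singular when the rows of J are linearly dependent --
    precisely the regularity violation the paper's modified projection
    matrix is designed to handle."""
    return np.eye(J.shape[1]) - J.T @ np.linalg.solve(J @ J.T, J)

# Toy problem: minimize f(x) = x0^2 + x1^2 subject to h(x) = x0 + x1 - 1 = 0.
J = np.array([[1.0, 1.0]])          # constant constraint Jacobian
P = tangent_projection(J)

x = np.array([1.0, 0.0])            # feasible starting point
for _ in range(200):                # projected gradient flow, forward Euler
    grad = 2.0 * x
    x = x - 0.05 * (P @ grad)       # P keeps the step in the constraint surface

# Converges to the constrained minimizer (0.5, 0.5).
```

Because `P @ grad` has zero component along the constraint normal, every Euler step preserves x0 + x1 = 1 exactly for this linear constraint.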
Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions
NASA Astrophysics Data System (ADS)
Leiderman, Karin; Olson, Sarah D.
2016-02-01
The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.
Neural network for nonsmooth pseudoconvex optimization with general convex constraints.
Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping
2018-05-01
In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.
Measuring, Enabling and Comparing Modularity, Regularity and Hierarchy in Evolutionary Design
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2005-01-01
For computer-automated design systems to scale to complex designs they must be able to produce designs that exhibit the characteristics of modularity, regularity and hierarchy - characteristics that are found both in man-made and natural designs. Here we claim that these characteristics are enabled by implementing the attributes of combination, control-flow and abstraction in the representation. To support this claim we use an evolutionary algorithm to evolve solutions to different sizes of a table design problem using five different representations, each with different combinations of modularity, regularity and hierarchy enabled, and show that the best performance happens when all three of these attributes are enabled. We also define metrics for modularity, regularity and hierarchy in design encodings and demonstrate that high fitness values are achieved with high values of modularity, regularity and hierarchy, and that there is a positive correlation between increases in fitness and increases in modularity, regularity and hierarchy.
NASA Astrophysics Data System (ADS)
Perez, R. Navarro; Schunck, N.; Lasseri, R.-D.; Zhang, C.; Sarich, J.
2017-11-01
We describe the new version 3.00 of the code HFBTHO that solves the nuclear Hartree-Fock (HF) or Hartree-Fock-Bogolyubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the full Gogny force in both particle-hole and particle-particle channels, (ii) the calculation of the nuclear collective inertia at the perturbative cranking approximation, (iii) the calculation of fission fragment charge, mass and deformations based on the determination of the neck, (iv) the regularization of zero-range pairing forces, (v) the calculation of localization functions, (vi) an MPI interface for large-scale mass table calculations. Program Files doi:http://dx.doi.org/10.17632/c5g2f92by3.1 Licensing provisions: GPL v3 Programming language: FORTRAN-95 Journal reference of previous version: M.V. Stoitsov, N. Schunck, M. Kortelainen, N. Michel, H. Nam, E. Olsen, J. Sarich, and S. Wild, Comput. Phys. Commun. 184 (2013). Does the new version supersede the previous one: Yes Summary of revisions: 1. the Gogny force in both particle-hole and particle-particle channels was implemented; 2. the nuclear collective inertia at the perturbative cranking approximation was implemented; 3. fission fragment charge, mass and deformations were implemented based on the determination of the position of the neck between nascent fragments; 4. the regularization method of zero-range pairing forces was implemented; 5. the localization functions of the HFB solution were implemented; 6. an MPI interface for large-scale mass table calculations was implemented. Nature of problem: HFBTHO is a physics computer code that is used to model the structure of the nucleus.
It is an implementation of the energy density functional (EDF) approach to atomic nuclei, where the energy of the nucleus is obtained by integration over space of some phenomenological energy density, which is itself a functional of the neutron and proton intrinsic densities. In the present version of HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated at the Hartree-Fock-Bogolyubov (HFB) approximation. Constraints on the nuclear shape allow probing the potential energy surface of the nucleus as needed, e.g., for the description of shape isomers or fission. The implementation of a local scale transformation of the single-particle basis in which the HFB solutions are expanded provides a tool to properly compute the structure of weakly-bound nuclei. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single-particle basis to expand quasiparticle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogolyubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions or the finite-range Gogny force until a self-consistent solution is found. A previous version of the program was presented in M.V. Stoitsov, N. Schunck, M. Kortelainen, N. Michel, H. Nam, E. Olsen, J. Sarich, and S. Wild, Comput. Phys. Commun. 184 (2013) 1592-1604, with much of the formalism presented in the original paper M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63.
Additional comments: The user must have access to (i) the LAPACK subroutines DSYEEVR, DSYEVD, DSYTRF and DSYTRI, and their dependencies, which compute eigenvalues and eigenfunctions of real symmetric matrices, (ii) the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and (iii) the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/.
Application of the Parabolic Approximation to Predict Acoustical Propagation in the Ocean.
ERIC Educational Resources Information Center
McDaniel, Suzanne T.
1979-01-01
A simplified derivation of the parabolic approximation to the acoustical wave equation is presented. Exact solutions to this approximate equation are compared with solutions to the wave equation to demonstrate the applicability of this method to the study of underwater sound propagation. (Author/BB)
Approximate analytic expression for the Skyrmions crystal
NASA Astrophysics Data System (ADS)
Grandi, Nicolás; Sturla, Mauricio
2018-01-01
We find approximate solutions for the two-dimensional nonlinear Σ-model with Dzyaloshinskii-Moriya term, representing magnetic Skyrmions. They are built in an analytic form, by pasting together different approximate solutions found in different regions of space. We verify that our construction reproduces the phenomenology known from numerical solutions and Monte Carlo simulations, giving rise to a Skyrmion lattice at an intermediate range of magnetic field, flanked by spiral and spin-polarized phases for low and high magnetic fields, respectively.
A class of nonideal solutions. 1: Definition and properties
NASA Technical Reports Server (NTRS)
Zeleznik, F. J.
1983-01-01
A class of nonideal solutions is defined by constructing a function to represent the composition dependence of thermodynamic properties for members of the class, and some properties of these solutions are studied. The constructed function has several useful features: (1) its parameters occur linearly; (2) it contains a logarithmic singularity in the dilute solution region and contains ideal solutions and regular solutions as special cases; and (3) it is applicable to N-ary systems and reduces to M-ary systems (M ≤ N) in a form-invariant manner.
NASA Astrophysics Data System (ADS)
Schöpfer, Martin; Lehner, Florian; Grasemann, Bernhard; Kaserer, Klemens; Hinsch, Ralph
2017-04-01
John G. Ramsay's sketch of structures developed in a layer progressively folded and deformed by tangential longitudinal strain (Figure 7-65 in Folding and Fracturing of Rocks) and the associated strain pattern analysis have been reproduced in many monographs on Structural Geology and are referred to in numerous publications. Although the origin of outer-arc extension fractures is well-understood and documented in many natural examples, geomechanical factors controlling their (finite or saturation) spacing are hitherto unexplored. This study investigates the formation of bending-induced fractures during constant-curvature forced folding using Distinct Element Method (DEM) numerical modelling. The DEM model comprises a central brittle layer embedded within weaker (low modulus) elastic layers; the layer interfaces are frictionless (free slip). Folding of this three-layer system is enforced by a velocity boundary condition at the model base, while a constant overburden pressure is maintained at the model top. The models illustrate several key stages of fracture array development: (i) Prior to the onset of fracture, the neutral surface is located midway between the layer boundaries; (ii) A first set of regularly spaced fractures develops once the tensile stress in the outer-arc equals the tensile strength of the layer. Since the layer boundaries are frictionless, these bending-induced fractures propagate through the entire layer; (iii) After the appearance of the first fracture set, the rate of fracture formation decreases rapidly and so-called infill fractures develop approximately midway between two existing fractures (sequential infilling); (iv) Eventually no new fractures form, irrespective of any further increase in fold curvature (fracture saturation). Analysis of the interfacial normal stress distributions suggests that at saturation the fracture-bound blocks are subjected to a loading condition similar to three-point bending. 
Using classical beam theory, an analytical solution is derived for the critical fracture spacing, i.e., the spacing below which the maximum tensile stress cannot reach the layer strength. The model results are consistent with this approximate analytical solution and illustrate that the spacing of bending-induced fractures is proportional to layer thickness and to a square-root function of the ratio of layer tensile strength to confining pressure. Although highly idealised, the models and analysis presented in this study offer an explanation for fracture saturation during folding and point towards certain key factors that may control fracture spacing in natural systems.
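The scaling stated in the abstract can be written as a one-line function. The dimensionless prefactor is a placeholder here (the paper derives it from the three-point-bending load state of the fracture-bound blocks, and its value is not given in the abstract):

```python
import math

def critical_fracture_spacing(thickness, tensile_strength, confining_pressure, prefactor=1.0):
    """Illustrative form of the scaling reported in the abstract: saturation
    spacing proportional to layer thickness and to the square root of
    (tensile strength / confining pressure). The prefactor is an assumed
    placeholder, not the value derived in the paper."""
    return prefactor * thickness * math.sqrt(tensile_strength / confining_pressure)

# Doubling layer thickness doubles the saturation spacing;
# quadrupling the confining pressure halves it.
s1 = critical_fracture_spacing(1.0, 10e6, 5e6)
s2 = critical_fracture_spacing(2.0, 10e6, 5e6)
s3 = critical_fracture_spacing(1.0, 10e6, 20e6)
```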
Tompkins, Adrian M; McCreesh, Nicky
2016-03-31
One year of mobile phone location data from Senegal is analysed to determine the characteristics of journeys that result in an overnight stay, and are thus relevant for malaria transmission. Defining the home location of each person as the place of most frequent calls, it is found that approximately 60% of people who spend nights away from home have regular destinations that are repeatedly visited, although only 10% have 3 or more regular destinations. The number of journeys involving overnight stays peaks at a distance of 50 km, although roughly half of such journeys exceed 100 km. Most visits only involve a stay of one or two nights away from home, with just 4% exceeding one week. A new agent-based migration model is introduced, based on a gravity model adapted to represent overnight journeys. Each agent makes journeys involving overnight stays to either regular or random locations, with journey and destination probabilities taken from the mobile phone dataset. Preliminary simulations show that the agent-based model can approximately reproduce the patterns of migration involving overnight stays.
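A gravity model for destination choice, as adapted in the agent-based migration model above, can be sketched by weighting candidate destinations by population over squared distance. The place names, populations, and distances below are placeholders; a real implementation would calibrate the journey and destination probabilities against the mobile phone dataset, as the abstract describes:

```python
import random

random.seed(1)

# Hypothetical destinations: (name, population, distance from home in km).
# These numbers are illustrative, not the Senegal data.
places = [("A", 50_000, 30.0), ("B", 200_000, 50.0), ("C", 80_000, 120.0)]

def gravity_weights(places):
    """Unnormalized gravity weights: population / distance^2."""
    return [pop / d**2 for _, pop, d in places]

def sample_destinations(places, n_trips):
    """Sample overnight-trip destinations in proportion to gravity weights."""
    w = gravity_weights(places)
    names = [name for name, _, _ in places]
    return [random.choices(names, weights=w, k=1)[0] for _ in range(n_trips)]

trips = sample_destinations(places, 1000)
```

In the full agent-based model each agent would additionally split trips between a small set of regular destinations and random ones, with the 60%/10% shares quoted above taken from the data.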
2016-11-22
To exploit the structure of the graph, the ℓ1 norm is replaced by the nonconvex Capped-ℓ1 norm, yielding the Generalized Capped-ℓ1 regularized logistic regression. Such nonconvex penalties provide better approximations of the ℓ0 norm, theoretically and computationally, than the ℓ1 norm, as exploited for example in compressive sensing (Xiao et al., 2011). [Reference fragment: X. M. Yuan, Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization, Mathematics of Computation, 82(281):301-...]
Control of the transition between regular and mach reflection of shock waves
NASA Astrophysics Data System (ADS)
Alekseev, A. K.
2012-06-01
A control problem was considered that makes it possible to switch the flow between stationary Mach and regular reflection of shock waves within the dual-solution domain. The sensitivity of the flow was computed by solving adjoint equations. A control disturbance was sought by applying gradient optimization methods. According to the computational results, the transition from regular to Mach reflection can be executed by raising the temperature. The transition from Mach to regular reflection can be achieved by lowering the temperature at moderate Mach numbers and is impossible at large Mach numbers. The reliability of the numerical results was confirmed by verifying them with the help of a posteriori analysis.
An effective solution to the nonlinear, nonstationary Navier-Stokes equations for two dimensions
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.
1975-01-01
A sequence of approximate solutions for the nonlinear, nonstationary Navier-Stokes equations for a two-dimensional domain, from which explicit error estimates and rates of convergence are obtained, is described. This sequence of approximate solutions is based primarily on the Newton-Kantorovich method.
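The Newton-Kantorovich construction underlying the abstract can be illustrated on a finite-dimensional root-finding problem: linearize about the current iterate, solve the linear system for a correction, and repeat, with quadratic convergence under the classical Kantorovich conditions. The toy algebraic system below is purely illustrative and is not the Navier-Stokes discretization of the paper:

```python
import numpy as np

def newton_kantorovich(F, J, x0, tol=1e-12, max_iter=50):
    """Newton-Kantorovich iteration for F(x) = 0: at each step, solve the
    linearized problem J(x) dx = -F(x). Under the Kantorovich conditions the
    iterates converge quadratically, giving the kind of explicit error
    estimates and convergence rates the abstract refers to."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy nonlinear system: x^2 + y^2 = 4, x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = newton_kantorovich(F, J, [2.0, 0.5])
```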
A study on Marangoni convection by the variational iteration method
NASA Astrophysics Data System (ADS)
Karaoǧlu, Onur; Oturanç, Galip
2012-09-01
In this paper, we consider the use of the variational iteration method and Padé approximants for finding approximate solutions for a Marangoni convection induced flow over a free surface due to an imposed temperature gradient. The solutions are compared with the numerical (fourth-order Runge-Kutta) solutions.
In dealing with problems related to land-based nuclear waste management, a number of analytical and approximate solutions were developed to quantify radionuclide transport through fractures contained in the porous formation. It has been reported that by treating the radioactive de...
Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator
NASA Astrophysics Data System (ADS)
Wu, Baisheng; Liu, Weijia; Lim, C. W.
2017-07-01
A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor step and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way, and it exhibits a significantly faster convergence rate.
Stability Properties of the Regular Set for the Navier-Stokes Equation
NASA Astrophysics Data System (ADS)
D'Ancona, Piero; Lucà, Renato
2018-06-01
We investigate the size of the regular set for small perturbations of some classes of strong large solutions to the Navier-Stokes equation. We consider perturbations of the data that are small in suitable weighted L2 spaces but can be arbitrarily large in any translation invariant Banach space. We give similar results in the small data setting.
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
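The bias-variance decomposition of the image MSE described above can be sketched on a surrogate linear problem. The real study uses a diffusion-model-based reconstruction; here a ridge-regularized linear solver stands in for the regularized algorithm, and the operator and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Surrogate linear reconstruction problem standing in for the NIR inverse
# problem: A maps "image" to boundary measurements.
n_meas, n_pix = 40, 20
A = rng.standard_normal((n_meas, n_pix))
x_true = rng.standard_normal(n_pix)
y_clean = A @ x_true
sigma = 0.1

def reconstruct(y, lam):
    """Tikhonov/ridge reconstruction with regularization parameter lam."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)

def image_bias_variance(lam, n_repeats=100):
    """Bias^2 and variance over repeated reconstructions with fresh noise,
    mirroring the 100-repetition protocol described in the abstract."""
    recs = np.array([reconstruct(y_clean + sigma * rng.standard_normal(n_meas), lam)
                     for _ in range(n_repeats)])
    bias2 = np.mean((recs.mean(axis=0) - x_true) ** 2)
    var = np.mean(recs.var(axis=0))
    return bias2, var

bias2_hi, var_hi = image_bias_variance(lam=10.0)   # heavy regularization
bias2_lo, var_lo = image_bias_variance(lam=1e-3)   # light regularization
```

As in the study, heavy regularization trades variance for bias, and the minimum of MSE = bias² + variance generally falls at an intermediate parameter value.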
Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms
NASA Astrophysics Data System (ADS)
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-04-01
Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. l1 norms on the data and regularization terms in EIT image reconstruction address both the problem of reconstruction with sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.
More on approximations of Poisson probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C
1980-05-01
Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty approximation and Makabe-Morimura approximation are extremely poor compared with this approximation.
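The flavor of these comparisons can be reproduced with the standard library alone: an exact Poisson CDF (computed recursively, avoiding large factorials), a plain normal approximation with continuity correction, and one common square-root-type transformation. The specific transformations compared in the paper (Kao's, Wilson-Hilferty, Makabe-Morimura) are not reproduced here:

```python
import math

def poisson_cdf(k, lam):
    """Exact Poisson CDF P(X <= k) by direct summation; terms are built
    recursively, so no large factorials are ever formed."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_cdf_normal(k, lam):
    """Plain normal approximation with continuity correction."""
    return norm_cdf((k + 0.5 - lam) / math.sqrt(lam))

def poisson_cdf_sqrt(k, lam):
    """One common square-root (variance-stabilizing) transformation,
    representative of the square-root-type approximations compared above."""
    return norm_cdf(2.0 * (math.sqrt(k + 1.0) - math.sqrt(lam)))

lam, k = 25.0, 30
exact = poisson_cdf(k, lam)
```

For moderate means both approximations land within a few hundredths of the exact CDF; the paper's point is that the choice of transformation controls how quickly that error shrinks.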
NASA Astrophysics Data System (ADS)
Alshaery, Aisha; Ebaid, Abdelhalim
2017-11-01
Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we will solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrated a rapid convergence of the obtained approximate solutions which are displayed in tables and graphs. Also, it has been shown in this paper that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature up to seven decimal places at some values of the time parameter and nine decimal places at other values. Moreover, the absolute error approaches zero using the nine term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with the well known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool to solve Kepler's equation for elliptical orbits.
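Kepler's equation M = E − e·sin E and the radial distance it yields can be sketched with a simple reference solver. Newton's iteration is used below in place of the Adomian decomposition method of the paper, purely to show the equation being solved; the orbital elements are textbook values for Earth, not the paper's parameters (which give the quoted aphelion of 152.505 million km):

```python
import math

def solve_kepler(M, e, tol=1e-14, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton's iteration (a simple reference solver, not the ADM of the
    paper, which constructs an analytical series approximation instead)."""
    E = M if e < 0.8 else math.pi      # standard starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Textbook values for Earth's orbit (assumed, not from the paper):
e_earth, a_earth = 0.0167, 149.6e6     # eccentricity; semi-major axis in km
E = solve_kepler(1.0, e_earth)         # eccentric anomaly at mean anomaly M = 1
r = a_earth * (1.0 - e_earth * math.cos(E))   # radial distance in km

r_min = a_earth * (1.0 - e_earth)      # perihelion, ~147.1 million km
r_max = a_earth * (1.0 + e_earth)      # aphelion, ~152.1 million km
```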
NASA Astrophysics Data System (ADS)
Prot, Olivier; Santolík, Ondřej; Trotignon, Jean-Gabriel; de Féraudy, Hervé
2006-06-01
An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data that are used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and without any prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is that it returns the WDF exhibiting the largest entropy and avoids the use of a priori models, which sometimes seem to be more accurate but without any justification.
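Morozov's discrepancy principle, used above to fix the regularization parameter, chooses the parameter so that the data residual matches the expected noise level. The sketch below illustrates the principle on a Tikhonov-regularized toy inversion (a discrete integration operator), since the entropy functional of the WDF inversion is not given in the abstract; all quantities are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ill-posed problem: recover f from g = K f + noise, where K is a
# discrete integration (smoothing) operator. Tikhonov regularization is
# used here only to illustrate the discrepancy principle, not the ERA.
n = 40
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
K = h * np.tril(np.ones((n, n)))           # discrete integration operator
f_true = np.sin(2.0 * np.pi * t)
sigma = 1e-2
g = K @ f_true + sigma * rng.standard_normal(n)
delta = sigma * np.sqrt(n)                 # expected noise norm

def residual(lam):
    f = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)
    return np.linalg.norm(K @ f - g)

# Morozov: pick lam so that ||K f_lam - g|| ~ delta. The residual is
# monotone increasing in lam, so bisection on log-lambda works.
lo, hi = 1e-12, 1e2
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if residual(mid) < delta:
        lo = mid        # residual too small: under-regularized, increase lam
    else:
        hi = mid
lam_star = np.sqrt(lo * hi)
```

A smaller residual than delta means the solution is fitting noise; a larger one means signal is being thrown away. The abstract notes that the GCV and L-curve criteria, tried as alternatives, failed to select a usable parameter here.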
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of this solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859
On conforming mixed finite element methods for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D; Nicolaides, R. A.; Peterson, J. S.
1982-01-01
The application of conforming mixed finite element methods to obtain approximate solutions of the linearized Navier-Stokes equations is examined. Attention is given to the convergence rates of various finite element approximations of the pressure and the velocity field. The optimality of the convergence rates is addressed in terms of comparisons of the approximation convergence to a smooth solution in relation to the best approximation available for the finite element space used. Consideration is also devoted to techniques for the efficient use of a Gaussian elimination algorithm to obtain a solution to a system of linear algebraic equations derived by finite element discretizations of linear partial differential equations.
Ashbaugh, H S; Garde, S; Hummer, G; Kaler, E W; Paulaitis, M E
1999-01-01
Conformational free energies of butane, pentane, and hexane in water are calculated from molecular simulations with explicit waters and from a simple molecular theory in which the local hydration structure is estimated based on a proximity approximation. This proximity approximation uses only the two nearest carbon atoms on the alkane to predict the local water density at a given point in space. Conformational free energies of hydration are subsequently calculated using a free energy perturbation method. Quantitative agreement is found between the free energies obtained from simulations and theory. Moreover, free energy calculations using this proximity approximation are approximately four orders of magnitude faster than those based on explicit water simulations. Our results demonstrate the accuracy and utility of the proximity approximation for predicting water structure as the basis for a quantitative description of n-alkane conformational equilibria in water. In addition, the proximity approximation provides a molecular foundation for extending predictions of water structure and hydration thermodynamic properties of simple hydrophobic solutes to larger clusters or assemblies of hydrophobic solutes. PMID:10423414
Approximate analytic solutions to 3D unconfined groundwater flow within regional 2D models
NASA Astrophysics Data System (ADS)
Luther, K.; Haitjema, H. M.
2000-04-01
We present methods for finding approximate analytic solutions to three-dimensional (3D) unconfined steady state groundwater flow near partially penetrating and horizontal wells, and for combining those solutions with regional two-dimensional (2D) models. The 3D solutions use distributed singularities (analytic elements) to enforce boundary conditions on the phreatic surface and seepage faces at vertical wells, and to maintain fixed-head boundary conditions, obtained from the 2D model, at the perimeter of the 3D model. The approximate 3D solutions are analytic (continuous and differentiable) everywhere, including on the phreatic surface itself. While continuity of flow is satisfied exactly in the infinite 3D flow domain, water balance errors can occur across the phreatic surface.
Double power series method for approximating cosmological perturbations
NASA Astrophysics Data System (ADS)
Wren, Andrew J.; Malik, Karim A.
2017-04-01
We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.
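The flavor of such approximations can be illustrated with the classical leading-order WKB ansatz, which the FSN method extends: for x″ + (ω(t)/ε)² x = 0 with slowly varying ω, the approximate solution is ω^(-1/2) cos(∫ω dt/ε). The oscillator, frequency profile, and tolerance below are hypothetical stand-ins, only loosely analogous to the subhorizon cosmological system.

```python
import numpy as np

eps = 0.01
omega = lambda t: 1.0 + 0.05 * t       # slowly varying frequency (toy choice)
phase = lambda t: t + 0.025 * t ** 2   # analytic antiderivative of omega

def wkb(t):
    # Leading-order WKB: slowly varying amplitude omega^(-1/2), rapid phase /eps
    return omega(t) ** -0.5 * np.cos(phase(t) / eps)

# Reference solution: integrate x'' = -(omega/eps)^2 x with classical RK4
def rhs(t, y):
    x, v = y
    return np.array([v, -(omega(t) / eps) ** 2 * x])

dt, t = 1e-4, 0.0
y = np.array([1.0, -0.025])            # initial conditions matching the WKB ansatz
errs = []
while t < 1.0 - 1e-12:
    k1 = rhs(t, y); k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2); k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    errs.append(abs(y[0] - wkb(t)))
max_err = max(errs)                    # expected to be O(eps)
```

The observed error is of order ε, consistent with the leading-order WKB accuracy that the FSN double power series systematically improves upon.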
NASA Astrophysics Data System (ADS)
Bian, Dongfen; Liu, Jitao
2017-12-01
This paper is concerned with the initial-boundary value problem for the 2D magnetohydrodynamics-Boussinesq system with temperature-dependent viscosity, thermal diffusivity and electrical conductivity. First, we establish the existence of global weak solutions under minimal assumptions on the initial data. Then, by imposing a higher regularity assumption on the initial data, we obtain a unique global strong solution. Moreover, exponential decay rates are obtained for the weak solutions and the strong solution, respectively.
A 25% tannic acid solution as a root canal irrigant cleanser: a scanning electron microscope study.
Bitter, N C
1989-03-01
A scanning electron microscope was used to evaluate the cleansing properties of a 25% tannic acid solution on the dentinal surface in the pulp chamber of endodontically prepared teeth. This was compared with removal of the amorphous smear layer of the canal using hydrogen peroxide and sodium hypochlorite solutions as irrigants. The tannic acid solution removed the smear layer more effectively than the regular cleansing agent.
Padé approximant for normal stress differences in large-amplitude oscillatory shear flow
NASA Astrophysics Data System (ADS)
Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.
2018-04-01
Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
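A generic way to build an [m/n] Padé approximant from truncated series coefficients (a sketch, not the authors' code for the corotational Jeffreys fluid) is to solve a small linear system for the denominator coefficients and then read off the numerator by convolution:

```python
import numpy as np

def pade(c, m, n):
    """Convert truncated Taylor coefficients c[0..m+n] into the [m/n]
    Pade approximant p(x)/q(x), normalized so that q(0) = 1."""
    c = np.asarray(c, dtype=float)
    # Denominator b[1..n] from: sum_j b_j c_{k-j} = -c_k for k = m+1..m+n
    C = np.array([[c[k - j] if k - j >= 0 else 0.0
                   for j in range(1, n + 1)]
                  for k in range(m + 1, m + n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, -c[m + 1:m + n + 1])))
    # Numerator a[k] = sum_{j <= min(k,n)} b_j c_{k-j}
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return a, b   # ascending coefficients of p and q

# Example: [2/2] approximant of exp(x) from its first five Taylor coefficients
c = [1, 1, 1 / 2, 1 / 6, 1 / 24]
a, b = pade(c, 2, 2)
x = 1.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
```

For exp(x) this reproduces the known [2/2] approximant (1 + x/2 + x²/12)/(1 − x/2 + x²/12), which is far more accurate at x = 1 than the truncated series it was built from; the same mechanism underlies the [3,4] approximant for the normal stress differences.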
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform-Homotopy Perturbation Method, and some of its modifications, in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from variational problems. As case studies we solve four ordinary differential equations, and we show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. In the sequel, we see that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
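The square residual error used above as an accuracy measure can be sketched on a toy problem (assumed here for illustration; not one of the article's four case studies): substitute the approximate solution into the differential equation and integrate the squared residual over the domain.

```python
import numpy as np

def square_residual_error(residual, a=0.0, b=1.0, npts=2001):
    """Trapezoidal-rule estimate of the square residual error
    SRE = integral of residual(x)^2 over [a, b]."""
    x = np.linspace(a, b, npts)
    f = residual(x) ** 2
    dx = x[1] - x[0]
    return float(np.sum(f[1:] + f[:-1]) * 0.5 * dx)

# Toy problem (hypothetical): y' - y = 0, y(0) = 1, exact solution e^x.
# Residuals of two truncated-Taylor approximations of increasing order:
r1 = lambda x: 1.0 - (1.0 + x)                     # y ~ 1 + x        -> residual -x
r2 = lambda x: (1.0 + x) - (1.0 + x + x ** 2 / 2)  # y ~ 1 + x + x²/2 -> residual -x²/2

sre1 = square_residual_error(r1)   # analytically 1/3
sre2 = square_residual_error(r2)   # analytically 1/20
```

As expected, the higher-order approximation yields a much smaller square residual error, which is exactly how the interval quoted in the abstract certifies accuracy.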
Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere
NASA Technical Reports Server (NTRS)
Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.
1975-01-01
The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.
NASA Astrophysics Data System (ADS)
Singh, Harendra
2018-04-01
The key purpose of this article is to introduce an efficient computational method for the approximate solution of homogeneous as well as non-homogeneous nonlinear Lane-Emden type equations. Using the proposed computational method, the given nonlinear equation is converted into a set of nonlinear algebraic equations whose solution gives the approximate solution to the Lane-Emden type equation. Various nonlinear cases of Lane-Emden type equations, such as the standard Lane-Emden equation, the isothermal gas spheres equation and the white-dwarf equation, are discussed. Results are compared with some well-known numerical methods, and it is observed that our results are more accurate.
Mascons, GRACE, and Time-variable Gravity
NASA Technical Reports Server (NTRS)
Lemoine, F.; Lutchke, S.; Rowlands, D.; Klosko, S.; Chinn, D.; Boy, J. P.
2006-01-01
The GRACE mission has been in orbit for three years and now regularly produces monthly snapshots of the Earth's gravity field. The convenient standard approach has been to perform global solutions in spherical harmonics. Alternative local representations of mass variations using mascons show great promise and offer advantages in terms of computational efficiency, minimization of problems due to aliasing, and increased temporal resolution. In this paper, we discuss the results of processing the GRACE KBRR data from March 2003 through August 2005 to produce solutions for GRACE mass variations over mid-latitude and equatorial regions, such as South America, India and the United States, and over the polar regions (Antarctica and Greenland), with a focus on the methodology. We describe in particular mascon solutions developed on regular 4 degree x 4 degree grids, and those tailored specifically to drainage basins over these regions.
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
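A minimal sketch of Tikhonov-regularized minimum-norm estimation, assuming a random toy lead-field matrix in place of a real MEG forward model: the regularization parameter trades data fit against solution norm, stabilizing the badly underdetermined problem.

```python
import numpy as np

def tikhonov_min_norm(L, b, lam):
    """Tikhonov-regularized minimum-norm solution of L x = b:
    x = argmin ||L x - b||^2 + lam ||x||^2 = L^T (L L^T + lam I)^{-1} b
    (the dual form is cheap when there are far fewer sensors than voxels)."""
    m = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), b)

rng = np.random.default_rng(0)
L = rng.standard_normal((10, 50))     # 10 "sensors", 50 "source voxels" (toy sizes)
x_true = np.zeros(50)
x_true[[3, 17]] = 1.0                 # two active sources
b = L @ x_true                        # noiseless toy measurements

x0 = tikhonov_min_norm(L, b, 1e-12)   # essentially the unregularized minimum-norm fit
x1 = tikhonov_min_norm(L, b, 10.0)    # heavier regularization, smaller-norm solution
```

With lam near zero the data are fit almost exactly by the minimum-norm solution; increasing lam shrinks the solution norm at the cost of data fit, which is precisely the stabilizing trade-off described above.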
A deformation of Sasakian structure in the presence of torsion and supergravity solutions
NASA Astrophysics Data System (ADS)
Houri, Tsuyoshi; Takeuchi, Hiroshi; Yasui, Yukinori
2013-07-01
A deformation of Sasakian structure in the presence of totally skew-symmetric torsion is discussed on odd-dimensional manifolds whose metric cones are Kähler with torsion. It is shown that such a geometry inherits similar properties to those of Sasakian geometry. As an example, we present an explicit expression of local metrics. It is also demonstrated that our example of the metrics admits the existence of hidden symmetry described by non-trivial odd-rank generalized closed conformal Killing-Yano tensors. Furthermore, using these metrics as an ansatz, we construct exact solutions in five-dimensional minimal gauged/ungauged supergravity and 11-dimensional supergravity. Finally, the global structures of the solutions are discussed. We obtain regular metrics on compact manifolds in five dimensions, which give natural generalizations of Sasaki-Einstein manifolds Y^{p,q} and L^{a,b,c}. We also briefly discuss regular metrics on non-compact manifolds in 11 dimensions.
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to the data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies optimal sparsity-level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity-level, improving upon existing sparsity based approaches.
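For context, a bare-bones Orthogonal Matching Pursuit, the classical greedy scheme that SOMP stabilizes, might look like the following sketch (toy random data, not regional gravity signals): at each step it picks the dictionary column most correlated with the residual, then refits by least squares on the selected support.

```python
import numpy as np

def omp(A, b, max_atoms=5, tol=1e-8):
    """Basic Orthogonal Matching Pursuit: greedily select the column of A
    most correlated with the current residual, then least-squares refit
    on the selected support until the residual is small."""
    n = A.shape[1]
    support, r = [], b.copy()
    x = np.zeros(n)
    for _ in range(max_atoms):
        if np.linalg.norm(r) < tol:
            break
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        r = b - A @ x
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15))     # toy overcomplete-style dictionary
x_true = np.zeros(15)
x_true[[2, 9]] = [1.0, -2.0]          # 2-sparse ground truth
b = A @ x_true                        # noiseless toy observations
x_hat = omp(A, b)
```

In the noiseless toy setting the greedy selection recovers the sparse signal; the stabilization discussed in the paper addresses exactly the ill-posed regimes where this plain greedy scheme becomes sensitive to data perturbations.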
Wormhole solutions with a complex ghost scalar field and their instability
NASA Astrophysics Data System (ADS)
Dzhunushaliev, Vladimir; Folomeev, Vladimir; Kleihaus, Burkhard; Kunz, Jutta
2018-01-01
We study compact configurations with a nontrivial wormholelike spacetime topology supported by a complex ghost scalar field with a quartic self-interaction. For this case, we obtain regular asymptotically flat equilibrium solutions possessing reflection symmetry. We then show their instability with respect to linear radial perturbations.
Closure to new results for an approximate method for calculating two-dimensional furrow infiltration
USDA-ARS?s Scientific Manuscript database
In a discussion paper, Ebrahimian and Noury (2015) raised several concerns about an approximate solution to the two-dimensional Richards equation presented by Bautista et al (2014). The solution is based on a procedure originally proposed by Warrick et al. (2007). Such a solution is of practical i...
NASA Astrophysics Data System (ADS)
Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong
2018-05-01
In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. Then, we use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media where all the noises from experiment are present.
Dirac-Born-Infeld actions and tachyon monopoles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calo, Vincenzo; Tallarita, Gianni; Thomas, Steven
2010-04-15
We investigate magnetic monopole solutions of the non-Abelian Dirac-Born-Infeld (DBI) action describing two coincident non-BPS D9-branes in flat space. Just as in the case of kink and vortex solitonic tachyon solutions of the full DBI non-BPS actions, as previously analyzed by Sen, these monopole configurations are singular in the first instance and require regularization. We discuss a suitable non-Abelian ansatz that describes a pointlike magnetic monopole and show it solves the equations of motion to leading order in the regularization parameter. Fluctuations are studied and shown to describe a codimension three BPS D6-brane, and a formula is derived for its tension.
On the Solutions of a 2+1-Dimensional Model for Epitaxial Growth with Axial Symmetry
NASA Astrophysics Data System (ADS)
Lu, Xin Yang
2018-04-01
In this paper, we study the evolution equation derived by Xu and Xiang (SIAM J Appl Math 69(5):1393-1414, 2009) to describe heteroepitaxial growth in 2+1 dimensions with elastic forces on vicinal surfaces, in the radial case and with uniform mobility. This equation is strongly nonlinear and contains two elliptic integrals defined via the Cauchy principal value. We first derive a formally equivalent parabolic evolution equation (i.e., fully equivalent when sufficient regularity is assumed), and the main aim is to prove existence, uniqueness and regularity of strong solutions. We extensively use techniques from the theory of evolution equations governed by maximal monotone operators in Banach spaces.
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Some Investigations Relating to the Elastostatics of a Tapered Tube
1978-03-01
regularity of the solution on the Z axis. Indeed the assumption of such regularity is stated explicitly by Heins (p. 789) and the problems solved (e.g. a... ) ... under these assumptions, becomes [equation illegible in scan] where the integrand is evaluated at (+i, 0). This is a form of the integral representation of the solution. Now let us look at the assumptions on Q. First of all, in order to be sure that our operations are legitimate...
Analysis of borehole expansion and gallery tests in anisotropic rock masses
Amadei, B.; Savage, W.Z.
1991-01-01
Closed-form solutions are used to show how rock anisotropy affects the variation of the modulus of deformation around the walls of a hole in which expansion tests are conducted. These tests include dilatometer and NX-jack tests in boreholes and gallery tests in tunnels. The effects of rock anisotropy on the modulus of deformation are shown for transversely isotropic and regularly jointed rock masses with planes of transverse isotropy or joint planes parallel or normal to the hole longitudinal axis for plane strain or plane stress condition. The closed-form solutions can also be used when determining the elastic properties of anisotropic rock masses (intact or regularly jointed) in situ. © 1991.
Höfle, Stefan; Bernhard, Christoph; Bruns, Michael; Kübel, Christian; Scherer, Torsten; Lemmer, Uli; Colsmann, Alexander
2015-04-22
Tandem organic light emitting diodes (OLEDs) utilizing fluorescent polymers in both sub-OLEDs and a regular device architecture were fabricated from solution, and their structure and performance characterized. The charge carrier generation layer comprised a zinc oxide layer, modified by a polyethylenimine interface dipole, for electron injection and either MoO3, WO3, or VOx for hole injection into the adjacent sub-OLEDs. ToF-SIMS investigations and STEM-EDX mapping verified the distinct functional layers throughout the layer stack. At a given device current density, the current efficiencies of both sub-OLEDs add up to a maximum of 25 cd/A, indicating a properly working tandem OLED.
Theoretical and experimental study on multimode optical fiber grating
NASA Astrophysics Data System (ADS)
Yunming, Wang; Jingcao, Dai; Mingde, Zhang; Xiaohan, Sun
2005-06-01
The characteristics of multimode optical fiber Bragg gratings (MMFBG) are studied theoretically and experimentally. For the first time, an analysis of MMFBG based on a novel quasi-three-dimensional (Q-3D) finite-difference time-domain beam propagation method (Q-FDTD-BPM) is described: separating the angular component of the vector field solution from the cylindrical coordinates yields several discrete two-dimensional (2D) equations, which simplify the 3D equations. These equations are then developed using an alternating-direction implicit method and a generalized Douglas scheme, which achieves higher accuracy than the regular FD scheme. The 2D solutions for the field intensities, weighted by power coefficients for the different angular mode orders, are summed to obtain the 3D field distributions in the MMFBG. The presented method is demonstrated to be a suitable simulation tool for analyzing MMFBG. In addition, using hydrogen-loading and phase-mask techniques, a series of Bragg gratings were written into a multimode silica optical fiber that had been loaded with hydrogen for a month, and their spectra were measured; the measurements agree well with the simulated results. Group delay/differential group delay spectra were obtained using an Agilent 81910A Photonic All-Parameter Analyzer.
Sparse Learning with Stochastic Composite Optimization.
Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei
2017-06-01
In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/(λT)), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to limitations in online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(√(log(1/δ)/T)), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel, powerful sparse online-to-batch conversion to the general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experimental results show that our methods can outperform the existing methods in their ability to learn sparse solutions, and at the same time we can improve the high-probability bound to approximately O(log(log(T)/δ)/(λT)).
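The sparsity-inducing ingredient common to such composite schemes is the ℓ1 proximal (soft-thresholding) step. The deterministic ISTA sketch below is a simple stand-in, not the paper's stochastic algorithms; it shows how the proximal step produces exact zeros in the solution.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 -- the step that actually zeroes
    small coefficients and makes iterates sparse."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, steps=500):
    """Proximal-gradient (ISTA) iteration for min 0.5||Ax - b||^2 + lam||x||_1.
    A deliberately simple deterministic stand-in for the stochastic
    composite schemes discussed above."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L (L = Lipschitz const.)
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - eta * A.T @ (A @ x - b), eta * lam)
    return x

# Toy problem with A = I: the minimizer is the soft-thresholded data.
A = np.eye(3)
b = np.array([3.0, 0.5, -2.0])
x = ista(A, b, lam=1.0)
```

The component with magnitude below the threshold is driven exactly to zero, unlike plain averaged stochastic iterates, which typically leave many small nonzero entries; the paper's online-to-batch conversion is designed to recover precisely this sparsity.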
NASA Astrophysics Data System (ADS)
Dong, Bo-Qing; Jia, Yan; Li, Jingna; Wu, Jiahong
2018-05-01
This paper focuses on a system of the 2D magnetohydrodynamic (MHD) equations with the kinematic dissipation given by the fractional operator (-Δ)^α and the magnetic diffusion by partial Laplacian. We are able to show that this system with any α > 0 always possesses a unique global smooth solution when the initial data is sufficiently smooth. In addition, we make a detailed study on the large-time behavior of these smooth solutions and obtain optimal large-time decay rates. Since the magnetic diffusion is only partial here, some classical tools such as the maximal regularity property for the 2D heat operator can no longer be applied. A key observation on the structure of the MHD equations allows us to get around the difficulties due to the lack of full Laplacian magnetic diffusion. The results presented here are the sharpest on the global regularity problem for the 2D MHD equations with only partial magnetic diffusion.
Simplified multiple scattering model for radiative transfer in turbid water
NASA Technical Reports Server (NTRS)
Ghovanlou, A. H.; Gupta, G. N.
1978-01-01
Quantitative analytical procedures for relating selected water quality parameters to the characteristics of backscattered signals measured by remote sensors require the solution of the radiative transport equation in turbid media. An approximate closed-form solution of this equation is presented, and based on this solution the remote sensing of sediments is discussed. The results are compared with other standard closed-form solutions such as the quasi-single scattering approximation.
NASA Astrophysics Data System (ADS)
Cummings, Patrick
We consider the approximation of solutions of two complicated physical systems via the nonlinear Schrodinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have of solutions of the nonlinear Schrodinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we can get a justification result on the O(1/ε²) time scale.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi
1987-01-01
The linear quadratic optimal control problem on the infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution π to the associated algebraic Riccati equation (ARE). A Ritz-type approximation is used to obtain a sequence π^N of finite-dimensional approximations of the solution to the ARE. A sufficient condition ensuring that π^N converges strongly to π is obtained. Under this condition, a formula is derived which can be used to obtain a rate of convergence of π^N to π. The results are demonstrated for the Galerkin approximation applied to parabolic systems and for the averaging approximation applied to hereditary differential systems.
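In finite dimensions the approximating problems reduce to a standard matrix ARE; the sketch below (a toy double-integrator system, not the paper's parabolic or hereditary examples) shows the Riccati solution and the resulting stabilizing feedback.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Finite-dimensional stand-in for an approximating problem pi^N:
# minimize the integral of (x'Qx + u'Ru) subject to x' = Ax + Bu.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (toy example)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Solve A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -Kx

# ARE residual (should vanish) and closed-loop matrix
res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
A_cl = A - B @ K
```

The Riccati residual is zero to machine precision and the closed-loop eigenvalues have negative real parts, mirroring in matrix form the feedback synthesis described for the infinite-dimensional problem.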
Hall, Wayne
2015-01-01
To examine changes in the evidence on the adverse health effects of cannabis since 1993. A comparison of the evidence in 1993 with the evidence and interpretation of the same health outcomes in 2013. Research in the past 20 years has shown that driving while cannabis-impaired approximately doubles car crash risk and that around one in 10 regular cannabis users develop dependence. Regular cannabis use in adolescence approximately doubles the risks of early school-leaving and of cognitive impairment and psychoses in adulthood. Regular cannabis use in adolescence is also associated strongly with the use of other illicit drugs. These associations persist after controlling for plausible confounding variables in longitudinal studies. This suggests that cannabis use is a contributory cause of these outcomes but some researchers still argue that these relationships are explained by shared causes or risk factors. Cannabis smoking probably increases cardiovascular disease risk in middle-aged adults but its effects on respiratory function and respiratory cancer remain unclear, because most cannabis smokers have smoked or still smoke tobacco. The epidemiological literature in the past 20 years shows that cannabis use increases the risk of accidents and can produce dependence, and that there are consistent associations between regular cannabis use and poor psychosocial outcomes and mental health in adulthood. © 2014 Society for the Study of Addiction.
Effects of high-frequency damping on iterative convergence of implicit viscous solver
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko
2017-11-01
This paper discusses effects of high-frequency damping on iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. In both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and 4/3 are suitable values for robust and efficient computations, and α = 4/3 is recommended for the diffusion equation, which achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, which provides robust and efficient convergence for a wide range of α.
NASA Astrophysics Data System (ADS)
Adavi, Zohre; Mashhadi-Hossainali, Masoud
2015-04-01
Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, caused by atmospheric phenomena above the surface of the earth, depends on both space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. By analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals, inversion techniques are used for modeling the water vapor in this approach. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used for computing a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques; it benefits from the advantages of both the iterative and direct techniques, and it is independent of initial values. Based on this property, and using an appropriate resolution for the model, the number of model elements that are not constrained by GPS measurements is first minimized; water vapor density is then estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
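The LSQR-with-damping building block of such hybrid schemes can be sketched on a toy linear system (hypothetical random data, not a GNSS tomography design matrix); SciPy's `lsqr` exposes Tikhonov-style damping directly through its `damp` argument.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
# Toy tomography-like system: each row is a weighted sum of "voxel" values
# (hypothetical stand-in for slant-delay observation equations).
A = rng.random((40, 25))
x_true = rng.random(25)
b = A @ x_true + 0.01 * rng.standard_normal(40)   # noisy toy observations

x_plain = lsqr(A, b)[0]            # plain iterative least squares
x_reg = lsqr(A, b, damp=1.0)[0]    # damped: min ||Ax - b||^2 + damp^2 ||x||^2
```

The damped solution has a smaller norm at the cost of a slightly larger data residual, which is the stabilizing trade-off a hybrid LSQR/Tikhonov scheme tunes when the tomographic system is ill-posed.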
NASA Astrophysics Data System (ADS)
Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.
Inferring the solar rotation from observed frequency splittings is an ill-posed problem in the sense of Hadamard, and the traditional approach to overcoming this difficulty is to regularize the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such high-gradient regions by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
Particle-like solutions of the Einstein-Dirac-Maxwell equations
NASA Astrophysics Data System (ADS)
Finster, Felix; Smoller, Joel; Yau, Shing-Tung
1999-08-01
We consider the coupled Einstein-Dirac-Maxwell equations for a static, spherically symmetric system of two fermions in a singlet spinor state. Soliton-like solutions are constructed numerically. The stability and the properties of the ground state solutions are discussed for different values of the electromagnetic coupling constant. We find solutions even when the electromagnetic coupling is so strong that the total interaction is repulsive in the Newtonian limit. Our solutions are regular and well-behaved; this shows that the combined electromagnetic and gravitational self-interaction of the Dirac particles is finite.
NASA Astrophysics Data System (ADS)
Provencher, Stephen W.
1982-09-01
CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter, and of error estimates based on the covariance matrix of the constrained regularized solution, are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
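CONTIN's F-test selection is not reproduced here, but the underlying idea of balancing residual against regularization can be sketched with a simple discrepancy-principle choice on a Laplace-inversion toy problem (all values below are illustrative, not CONTIN's algorithm):

```python
import numpy as np

# Toy ill-posed problem: recover a distribution x(s) from noisy data
# b(t) = ∫ exp(-t*s) x(s) ds, the kernel behind photon correlation work.
rng = np.random.default_rng(1)
n = 40
t = np.linspace(0.05, 3.0, n)
s = np.linspace(0.1, 5.0, n)
A = np.exp(-np.outer(t, s)) * (s[1] - s[0])   # discretized Laplace kernel
x_true = np.exp(-(s - 2.0) ** 2 / 0.1)
sigma = 1e-3
b = A @ x_true + sigma * rng.standard_normal(n)

L = np.eye(n)   # identity regularizor; CONTIN favors derivative penalties
def solve(alpha):
    return np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ b)

# Discrepancy principle: take the largest alpha whose residual matches
# the (slightly inflated) expected noise level.
tol = 1.1 * sigma * np.sqrt(n)
for alpha in np.logspace(0, -12, 60):
    if np.linalg.norm(A @ solve(alpha) - b) <= tol:
        break
print(alpha)
```

Larger alpha over-smooths (residual above the noise level); smaller alpha fits noise, so the first alpha to reach the noise level is kept.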
Asymptotic traveling wave solution for a credit rating migration problem
NASA Astrophysics Data System (ADS)
Liang, Jin; Wu, Yuan; Hu, Bei
2016-07-01
In this paper, an asymptotic traveling wave solution of a free boundary model for pricing a corporate bond with credit rating migration risk is studied. This is the first study to associate an asymptotic traveling wave solution with the credit rating migration problem. The pricing problem with credit rating migration risk is modeled by a free boundary problem. The existence, uniqueness and regularity of the solution are obtained. Under some conditions, we prove that the solution of our credit rating problem converges to a traveling wave solution, which has an explicit form. Furthermore, numerical examples are presented.
Evolutionary Games of Multiplayer Cooperation on Graphs
Arranz, Jordi; Traulsen, Arne
2016-01-01
There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has usually been tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops, such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images, as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for the total variation method, and TGV stands for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
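A minimal 1D sketch of TV-style regularization (smoothed TV minimized by gradient descent on a synthetic step signal; not the paper's FEM/EIT reconstruction) shows why TV favors piecewise-constant, staircase-like solutions:

```python
import numpy as np

# Denoise a piecewise-constant signal by minimizing
#   0.5*||x - noisy||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps)
# (a smoothed total-variation penalty) with plain gradient descent.
rng = np.random.default_rng(2)
n = 100
truth = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # one sharp jump
noisy = truth + 0.1 * rng.standard_normal(n)

lam, eps, step = 0.3, 1e-2, 0.05
x = noisy.copy()
for _ in range(500):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)        # derivative of smoothed |d|
    tv_grad = np.zeros(n)
    tv_grad[:-1] -= w                   # d(d_i)/d(x_i)   = -1
    tv_grad[1:] += w                    # d(d_i)/d(x_{i+1}) = +1
    x -= step * ((x - noisy) + lam * tv_grad)

print(np.abs(x - truth).mean(), np.abs(noisy - truth).mean())
```

The TV term penalizes total jump height, not the number of jumps, so flat regions with one sharp edge survive while small oscillations are flattened, which is exactly the staircase behavior discussed above.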
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
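The Ritz procedure described above can be illustrated on a textbook variational problem (chosen here for illustration): minimize J[u] = ∫₀¹ (u′²/2 − u) dx over the one-parameter trial family u = c·x(1−x), which satisfies u(0) = u(1) = 0, and read the approximate solution off the stationarity condition in c.

```python
import sympy as sp

# Ritz method: substitute the trial function into the variational
# functional and make J stationary with respect to the parameter c.
x, c = sp.symbols('x c')
f = 1                                # source term of the toy problem
u = c * x * (1 - x)                  # trial function, u(0) = u(1) = 0
J = sp.integrate(sp.diff(u, x) ** 2 / 2 - f * u, (x, 0, 1))
c_opt = sp.solve(sp.diff(J, c), c)[0]
print(c_opt)
```

Here the trial family happens to contain the exact solution u = x(1−x)/2, so the Ritz answer c = 1/2 is exact; with richer trial families one adds more parameters and solves the resulting linear system.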
Legendre-tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximation is made.
Legendre-Tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1983-01-01
The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximations is made.
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
Convection Regularization of High Wavenumbers in Turbulence and Shocks
2011-07-31
dynamics of particles that adhere to one another upon collision, and has been studied as a simple cosmological model for describing the nonlinear formation of... By a solution we mean a solution to the Cauchy problem in the following sense. Definition 5.1. A function u : ℝ × [0, T] ↦ ℝ^N is a weak solution of the... In step 2 the limit function in the α → 0 limit is shown to satisfy the definition of a weak solution for the Cauchy problem. Without loss of generality
First and second order approximations to stage numbers in multicomponent enrichment cascades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scopatz, A.
2013-07-01
This paper describes closed-form, Taylor series approximations to the number of product stages in a multicomponent enrichment cascade. Such closed-form approximations are required when a symbolic, rather than a numeric, algorithm is used to compute the optimal cascade state. Both first and second order approximations were implemented. The first order solution was found to be grossly incorrect, having the wrong functional form over the entire domain. On the other hand, the second order solution shows excellent agreement with the 'true' solution over the domain of interest. An implementation of the symbolic, second order solver is available in the free and open source PyNE library.
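A generic illustration of the first-order failure mode (not the cascade equations themselves, which are not given here): when the linear Taylor term vanishes, the first-order expansion has the wrong functional form over the whole domain, while the second-order expansion captures the leading behavior.

```python
import sympy as sp

# Symbolic Taylor expansions, first vs second order, of a stand-in
# function whose linear term vanishes at the expansion point.
x = sp.symbols('x')
f = sp.cos(x)                          # hypothetical stand-in expression
first = f.series(x, 0, 2).removeO()    # constant: no x dependence at all
second = f.series(x, 0, 3).removeO()   # 1 - x**2/2: correct leading shape
print(first, '|', second)
```

This is the qualitative situation the abstract describes: the first-order series is not merely inaccurate but has the wrong functional form, so at least second order is needed for a symbolic solver.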
Potentials of mean force for biomolecular simulations: Theory and test on alanine dipeptide
NASA Astrophysics Data System (ADS)
Pellegrini, Matteo; Grønbech-Jensen, Niels; Doniach, Sebastian
1996-06-01
We describe a technique for generating potentials of mean force (PMF) between solutes in an aqueous solution. We first generate solute-solvent correlation functions (CF) using Monte Carlo (MC) simulations in which we place a single atom solute in a periodic boundary box containing a few hundred water molecules. We then make use of the Kirkwood superposition approximation, where the 3-body correlation function is approximated as the product of 2-body CFs, to describe the mean water density around two solutes. Computing the force generated on the solutes by this average water density allows us to compute potentials of mean force between the two solutes. For charged solutes an additional approximation involving dielectric screening is made, by setting the dielectric constant of water to ɛ=80. These potentials account, in an approximate manner, for the average effect of water on the atoms. Following the work of Pettitt and Karplus [Chem. Phys. Lett. 121, 194 (1985)], we approximate the n-body potential of mean force as a sum of the pairwise potentials of mean force. This allows us to run simulations of biomolecules without introducing explicit water, hence gaining several orders of magnitude in efficiency with respect to standard molecular dynamics techniques. We demonstrate the validity of this technique by first comparing the PMFs for methane-methane and sodium-chloride generated with this procedure, with those calculated with a standard Monte Carlo simulation with explicit water. We then compare the results of the free energy profiles between the equilibria of alanine dipeptide generated by the two methods.
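The Kirkwood superposition step can be sketched directly: the three-body correlation is approximated by a product of two-body correlation functions. The pair correlation function below is a hypothetical analytic stand-in, not a simulated solute-solvent CF from the paper's Monte Carlo runs.

```python
import numpy as np

def g2(r, sigma=1.0):
    # Illustrative pair correlation: hard core below sigma, then a
    # damped oscillatory solvation-shell structure (not simulated data).
    return np.where(r < sigma, 0.0,
                    1.0 + 0.5 * np.exp(-(r - sigma)) * np.cos(4.0 * (r - sigma)))

def g3_kirkwood(r12, r13, r23):
    # Kirkwood superposition: g3 ≈ g(r12) * g(r13) * g(r23).
    return g2(r12) * g2(r13) * g2(r23)

print(g3_kirkwood(1.2, 1.3, 2.0))
```

The mean solvent density around two solutes, and hence the PMF, follows from this factorized g3 instead of an explicit three-body simulation.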
NASA Astrophysics Data System (ADS)
Krisch, J. P.; Glass, E. N.
2014-10-01
A set of cylindrical solutions to Einstein's field equations for power law densities is described. The solutions have a Bessel function contribution to the metric. For matter cylinders regular on axis, the first two solutions are the constant density Gott-Hiscock string and a cylinder with a metric Airy function. All members of this family have the Vilenkin limit to their mass per length. Some examples of Bessel shells and Bessel motion are given.
Application of geometric approximation to the CPMG experiment: Two- and three-site exchange.
Chao, Fa-An; Byrd, R Andrew
2017-04-01
The Carr-Purcell-Meiboom-Gill (CPMG) experiment is one of the most classical and well-known relaxation dispersion experiments in NMR spectroscopy, and it has been successfully applied to characterize biologically relevant conformational dynamics in many cases. Although the data analysis of the CPMG experiment for the 2-site exchange model can be facilitated by analytical solutions, the data analysis in a more complex exchange model generally requires computationally-intensive numerical analysis. Recently, a powerful computational strategy, geometric approximation, has been proposed to provide approximate numerical solutions for the adiabatic relaxation dispersion experiments where analytical solutions are neither available nor feasible. Here, we demonstrate the general potential of geometric approximation by providing a data analysis solution of the CPMG experiment for both the traditional 2-site model and a linear 3-site exchange model. The approximate numerical solution deviates less than 0.5% from the numerical solution on average, and the new approach is computationally 60,000-fold more efficient than the numerical approach. Moreover, we find that accurate dynamic parameters can be determined in most cases and that, for a range of experimental conditions, the relaxation can be assumed to follow mono-exponential decay. The method is general and applicable to any CPMG RD experiment (e.g., N, C′, Cα, Hα, etc.). The approach forms a foundation for building solution surfaces to analyze the CPMG experiment for different models of 3-site exchange. Thus, the geometric approximation is a general strategy to analyze relaxation dispersion data in any system (biological or chemical) if the appropriate library can be built in a physically meaningful domain. Published by Elsevier Inc.
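For orientation, the fast-exchange (Luz-Meiboom) closed form for 2-site CPMG dispersion, one of the analytical limits alluded to above, can be sketched as follows (parameter values are illustrative, not fitted data):

```python
import numpy as np

# Luz-Meiboom fast-exchange limit for 2-site CPMG relaxation dispersion:
#   R2eff = R2^0 + (phi/kex) * (1 - (4*nu/kex) * tanh(kex/(4*nu)))
# with phi = pA*pB*dw^2, kex the exchange rate, nu the CPMG frequency.
def r2eff(nu_cpmg, r20=10.0, phi=5e4, kex=2e3):
    nu = np.asarray(nu_cpmg, dtype=float)
    return r20 + (phi / kex) * (1.0 - (4.0 * nu / kex) *
                                np.tanh(kex / (4.0 * nu)))

nus = np.array([50.0, 200.0, 1000.0])   # CPMG frequencies in Hz
print(r2eff(nus))
```

R2eff decays monotonically toward R2^0 as the pulsing frequency grows; more general 2- and 3-site models lose this closed form, which is where the geometric-approximation strategy comes in.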
Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.
Venturi, D; Karniadakis, G E
2014-06-08
Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.
The spatial configuration of ordered polynucleotide chains. II. The poly(rA) helix.
Olson, W K
1975-01-01
Approximate details of the spatial configuration of the ordered single-stranded poly(rA) molecule in dilute solution have been obtained in a combined theoretical analysis of base stacking and chain flexibility. Only those regularly repeating structures which fulfill the criterion of conformational flexibility (based upon all available experimental and theoretical evidence of preferred bond rotations) and which also exhibit the right-handed base stacking pattern observed in NMR investigations of poly(rA) are deemed suitable single-stranded helices. In addition, the helical geometry of the stacked structures is required to be consistent with the experimentally observed dimensions of both completely ordered and partially ordered poly(rA) chains. Only a single category of poly(rA) helices (very similar in all conformational details to the individual chains of the poly(rA) double-stranded X-ray structure) is thus obtained. Other conformationally feasible polynucleotide helices characterized simply by a parallel and overlapping base stacking arrangement are also discussed. PMID:1052529
Finotello, Alexia; Bara, Jason E; Narayan, Suguna; Camper, Dean; Noble, Richard D
2008-02-28
This study focuses on the solubility behaviors of CO2, CH4, and N2 gases in binary mixtures of imidazolium-based room-temperature ionic liquids (RTILs) using 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([C2mim][Tf2N]) and 1-ethyl-3-methylimidazolium tetrafluoroborate ([C2mim][BF4]) at 40 degrees C and low pressures (approximately 1 atm). The mixtures tested were 0, 25, 50, 75, 90, 95, and 100 mol % [C2mim][BF4] in [C2mim][Tf2N]. Results show that regular solution theory (RST) can be used to describe the gas solubility and selectivity behaviors in RTIL mixtures using an average mixture solubility parameter or an average measured mixture molar volume. Interestingly, the solubility selectivity, defined as the ratio of gas mole fractions in the RTIL mixture, of CO2 with N2 or CH4 in pure [C2mim][BF4] can be enhanced by adding 5 mol % [C2mim][Tf2N].
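The average-mixture-solubility-parameter idea can be sketched as a simple weighted mean. The δ values below are placeholders rather than the measured RTIL data, and mole-fraction weighting is used here for simplicity, although volume-fraction weighting is also common in regular solution theory.

```python
# Regular-solution-theory-style mixing rule for a binary RTIL blend:
# an average solubility parameter from composition-weighted components.
def mixture_solubility_parameter(x1, delta1, delta2):
    """Mole-fraction-weighted average of the two pure-component
    solubility parameters (illustrative placeholder values)."""
    return x1 * delta1 + (1 - x1) * delta2

# Hypothetical delta values in MPa^0.5 for a 25/75 mol% blend.
print(mixture_solubility_parameter(0.25, 24.0, 26.0))
```

RST then relates gas solubility to the mismatch between the gas and mixture solubility parameters, which is how a single averaged parameter can describe the whole composition series.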
NASA Astrophysics Data System (ADS)
Roubíček, Tomáš; Tomassetti, Giuseppe
2018-06-01
A theory of elastic magnets is formulated under possible diffusion and heat flow, governed by Fick's and Fourier's laws in the deformed (Eulerian) configuration, respectively. The concepts of nonlocal nonsimple materials and viscous Cahn-Hilliard equations are used. The formulation of the problem uses the Lagrangian (reference) configuration, while the transport processes are pulled back. Except for the static problem, the demagnetizing energy is ignored and only local non-self-penetration is considered. The analysis, as far as existence of weak solutions of the (thermo)dynamical problem is concerned, is performed by a careful regularization and approximation by a Galerkin method, suggesting also a numerical strategy. Either ignoring or combining particular aspects, the model has numerous applications: ferro-to-paramagnetic transformation in elastic ferromagnets; diffusion of solvents in polymers, possibly accompanied by magnetic effects (magnetic gels); or metal-hydride phase transformation in some intermetallics under diffusion of hydrogen, possibly accompanied by magnetic effects (in particular, ferro-to-antiferromagnetic phase transformation), all in the full thermodynamical context under large strains.
NASA Astrophysics Data System (ADS)
Karami, Fahd; Ziad, Lamia; Sadik, Khadija
2017-12-01
In this paper, we focus on a numerical method for a problem called the Perona-Malik inequality, which we use for image denoising. This model is obtained as the limit of the Perona-Malik model and the p-Laplacian operator with p→∞. In Atlas et al. (Nonlinear Anal. Real World Appl 18:57-68, 2014), the authors proved the existence and uniqueness of the solution of the proposed model. However, in their work they used an explicit numerical scheme for the approximated problem which is strongly dependent on the parameter p. To overcome this, we use here an efficient algorithm which combines the classical additive operator splitting with a nonlinear relaxation algorithm. Finally, we present experimental results in image filtering which demonstrate the efficiency and effectiveness of our algorithm, and we compare it with the previous scheme presented in Atlas et al. (Nonlinear Anal. Real World Appl 18:57-68, 2014).
Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...
2017-01-05
Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, in which case they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with the highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
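A textbook special case of the early-time/late-time matching (fractional diffusive uptake of an isotropic slab in dimensionless time τ = Dt/L², not the paper's fitted three-term coefficients) shows how closely the two branches meet near the quoted switchover range:

```python
import numpy as np

# Fractional uptake M(t)/M_inf of a slab of half-thickness L.
def uptake_early(tau):
    # Leading early-time behavior: proportional to sqrt(tau).
    return 2.0 * np.sqrt(tau / np.pi)

def uptake_late(tau, n_terms=1):
    # Exponential-series late-time solution; one term often suffices
    # for isotropic blocks, as noted in the abstract.
    k = np.arange(n_terms)
    terms = 8.0 / (np.pi ** 2 * (2 * k + 1) ** 2) * \
        np.exp(-((2 * k + 1) ** 2) * np.pi ** 2 * tau / 4.0)
    return 1.0 - terms.sum()

tau_switch = 0.2   # within the 0.157-0.229 switchover window quoted above
print(uptake_early(tau_switch), uptake_late(tau_switch))
```

At τ ≈ 0.2 the two branches agree to well under a percent, which is why a single switchover time with a continuous, unified-form solution is feasible.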
NASA Astrophysics Data System (ADS)
Kudo, K.; Maeda, H.; Kawakubo, T.; Ootani, Y.; Funaki, M.; Fukui, H.
2006-06-01
The normalized elimination of the small component (NESC) theory, recently proposed by Filatov and Cremer [J. Chem. Phys. 122, 064104 (2005)], is extended to include magnetic interactions and applied to the calculation of the nuclear magnetic shielding in HX (X = F, Cl, Br, I) systems. The NESC calculations are performed at the levels of the zeroth-order regular approximation (ZORA) and the second-order regular approximation (SORA). The calculations show that the NESC-ZORA results are very close to the NESC-SORA results, except for the shielding of the I nucleus. Both the NESC-ZORA and NESC-SORA calculations yield very similar results to the previously reported values obtained using the relativistic infinite-order two-component coupled Hartree-Fock method. The difference between NESC-ZORA and NESC-SORA results is significant for the shieldings of iodine.
Zeroth order regular approximation approach to electric dipole moment interactions of the electron.
Gaul, Konstantin; Berger, Robert
2017-07-07
A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
Range-Separated Brueckner Coupled Cluster Doubles Theory
NASA Astrophysics Data System (ADS)
Shepherd, James J.; Henderson, Thomas M.; Scuseria, Gustavo E.
2014-04-01
We introduce a range-separation approximation to coupled cluster doubles (CCD) theory that successfully overcomes limitations of regular CCD when applied to the uniform electron gas. We combine the short-range ladder channel with the long-range ring channel in the presence of a Brueckner renormalized one-body interaction and obtain ground-state energies with an accuracy of 0.001 a.u./electron across a wide range of density regimes. Our scheme is particularly useful in the low-density and strongly correlated regimes, where regular CCD has serious drawbacks. Moreover, we cure the infamous overcorrelation of approaches based on ring diagrams (i.e., the particle-hole random phase approximation). Our energies are further shown to have appropriate basis set and thermodynamic limit convergence, and overall this scheme promises energetic properties for realistic periodic and extended systems which existing methods do not possess.
Spatial resolution properties of motion-compensated tomographic image reconstruction methods.
Chun, Se Young; Fessler, Jeffrey A
2012-07-01
Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D elliptic stochastic partial differential equations in random space.
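The over-fitting scenario described above can be sketched with a plain ridge-regularized fit of Legendre (gPC-like) coefficients from too few samples; the Bayesian model-averaging machinery itself is not reproduced, and all sizes below are illustrative:

```python
import numpy as np

# More basis terms (30) than samples (15): the normal equations are
# singular, so unregularized least squares over-fits, while a small
# ridge penalty yields a stable coefficient vector.
rng = np.random.default_rng(3)
n_samples, n_terms = 15, 30
xi = rng.uniform(-1, 1, n_samples)                        # random inputs
Phi = np.polynomial.legendre.legvander(xi, n_terms - 1)   # gPC-like basis
c_true = np.zeros(n_terms)
c_true[[0, 2, 5]] = [1.0, 0.5, -0.25]                     # sparse truth
y = Phi @ c_true + 1e-3 * rng.standard_normal(n_samples)

print(np.linalg.cond(Phi.T @ Phi))   # enormous: rank-deficient system
c_ridge = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_terms), Phi.T @ y)
print(np.linalg.norm(Phi @ c_ridge - y))
```

The ridge penalty here plays the role of the regularization-regression ingredient; the paper's Bayesian layer additionally selects which bases to keep and quantifies their inclusion probabilities.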
NASA Astrophysics Data System (ADS)
Schafbuch, Paul Jay
1991-02-01
The boundary element method (BEM) is used to numerically simulate the interaction of ultrasonic waves with material defects such as voids, inclusions, and open cracks. The time harmonic formulation is in 3D and therefore allows flaws of arbitrary shape to be modeled. The BEM makes such problems feasible because the underlying boundary integral equation only requires a surface (2D) integration and difficulties associated with the seemingly infinite extent of the host domain are not encountered. The computer code utilized in this work is built upon recent advances in elastodynamic boundary element theory such as a scheme for self-adjusting integration order and singular integration regularization. Incident fields may be taken as compressional or shear plane waves or predicted by an approximate Gauss-Hermite beam model. The code is highly optimized for voids and has been coupled with computer aided engineering packages for automated flaw shape definition and mesh generation. Subsequent graphical display of intermediate results supports model refinement and physical interpretation. Final results are typically cast in a nondestructive evaluation (NDE) context as either scattering amplitudes or flaw signals (via a measurement model based on a reciprocity integral). The near field is also predicted which allows for improved physical insight into the scattering process and the evaluation of certain modeling approximations. The accuracy of the BEM approach is first examined by comparing its predictions to those of other models for single, isolated scatterers. The comparisons are with the predictions of analytical solutions for spherical defects and with MOOT and T-matrix calculations for axisymmetric flaws. Experimental comparisons are also made for volumetric shapes with different characteristic dimensions in all three directions, since no other numerical approach has yet produced results of this type.
Theoretical findings regarding the fictitious eigenfrequency difficulty are substantiated through the analytical solution of a fundamental elastodynamics problem and corresponding BEM studies. Given the confidence in the BEM technique engendered by these comparisons, it is then used to investigate the modeling of 'open', cracklike defects amenable to a volumetric formulation. The limits of applicability of approximate theories (e.g., quasistatic, Kirchhoff, and geometric theory of diffraction) are explored for elliptical cracks, from this basis. The problem of two interacting scatterers is then considered. Results from a fully implicit approach and from a more efficient hybrid scheme are compared with generalized Born and farfield approximate interaction theories.
ERIC Educational Resources Information Center
Litchfield, Daniel C.; Goldenheim, David A.
1997-01-01
Describes the solution to a geometric problem by two ninth-grade mathematicians using The Geometer's Sketchpad computer software program. The problem was to divide any line segment into a regular partition of any number of parts, a variation on a problem by Euclid. The solution yielded two constructions, one a GLaD construction and the other using…
ERIC Educational Resources Information Center
Grigsby, Greg
This report summarizes and presents information from interviews with 22 National Inservice Network project directors. The purpose was to identify problems and solutions encountered in directing regular education inservice (REGI) projects. The projects were sponsored by institutions of higher education, state and local education agencies, and an…
NASA Astrophysics Data System (ADS)
Tugay, A. V.; Zakordonskiy, V. P.
2006-06-01
The association of cationogenic benzethonium chloride with polymethacrylic acid in aqueous solutions was studied by nephelometry, conductometry, tensiometry, viscometry, and pH-metry. The critical concentrations of aggregation and polymer saturation with the surface-active substance were determined. A model describing processes in such systems step by step was suggested.
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2005-01-01
A simple modification of the zeroth-order regular approximation (ZORA) in relativistic theory is suggested to suppress its erroneous gauge dependence to a high level of approximation. The method, coined gauge-independent ZORA (ZORA-GI), can be easily installed in any existing nonrelativistic quantum chemical package by programming simple one-electron matrix elements for the quasirelativistic Hamiltonian. Results of benchmark calculations obtained with ZORA-GI at the Hartree-Fock (HF) and second-order Møller-Plesset perturbation theory (MP2) level for dihalogens X2 (X=F,Cl,Br,I,At) are in good agreement with the results of four-component relativistic calculations (HF level) and experimental data (MP2 level). ZORA-GI calculations based on MP2 or coupled-cluster theory with single and double excitations and a perturbative inclusion of triple excitations [CCSD(T)] lead to accurate atomization energies and molecular geometries for the tetroxides of group VIII elements. With ZORA-GI/CCSD(T), an improved estimate for the atomization energy of hassium (Z=108) tetroxide is obtained.
Lopez-Sangil, Luis; George, Charles; Medina-Barcenas, Eduardo; Birkett, Ali J; Baxendale, Catherine; Bréchet, Laëtitia M; Estradera-Gumbau, Eduard; Sayer, Emma J
2017-09-01
Root exudation is a key component of nutrient and carbon dynamics in terrestrial ecosystems. Exudation rates vary widely by plant species and environmental conditions, but our understanding of how root exudates affect soil functioning is incomplete, in part because there are few viable methods to manipulate root exudates in situ. To address this, we devised the Automated Root Exudate System (ARES), which simulates increased root exudation by applying small amounts of labile solutes at regular intervals in the field. The ARES is a gravity-fed drip irrigation system comprising a reservoir bottle connected via a timer to a micro-hose irrigation grid covering c. 1 m²; 24 drip-tips are inserted into the soil to 4-cm depth to apply solutions into the rooting zone. We installed two ARES subplots within existing litter removal and control plots in a temperate deciduous woodland. We applied either an artificial root exudate solution (RE) or a procedural control solution (CP) to each subplot for 1 min day⁻¹ during two growing seasons. To investigate the influence of root exudation on soil carbon dynamics, we measured soil respiration monthly and soil microbial biomass at the end of each growing season. The ARES applied the solutions at a rate of c. 2 L m⁻² week⁻¹ without significantly increasing soil water content. The application of RE solution had a clear effect on soil carbon dynamics, but the response varied by litter treatment. Across two growing seasons, soil respiration was 25% higher in RE compared to CP subplots in the litter removal treatment, but not in the control plots. By contrast, we observed a significant increase in microbial biomass carbon (33%) and nitrogen (26%) in RE subplots in the control litter treatment. The ARES is an effective, low-cost method to apply experimental solutions directly into the rooting zone in the field. The installation of the systems entails minimal disturbance to the soil and little maintenance is required.
Although we used ARES to apply root exudate solution, the method can be used to apply many other treatments involving solute inputs at regular intervals in a wide range of ecosystems.
1977-12-01
exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution
Regularization of instabilities in gravity theories
NASA Astrophysics Data System (ADS)
Ramazanoǧlu, Fethi M.
2018-01-01
We investigate instabilities and their regularization in theories of gravitation. Instabilities can be beneficial since their growth often leads to prominent observable signatures, which makes them especially relevant to relatively low signal-to-noise ratio measurements such as gravitational wave detections. An indefinitely growing instability usually renders a theory unphysical; hence, a desirable instability should also come with underlying physical machinery that stops the growth at finite values, i.e., regularization mechanisms. The prototypical gravity theory that presents such an instability is the spontaneous scalarization phenomenon of scalar-tensor theories, which features a tachyonic instability. We identify the regularization mechanisms in this theory and show that they can be utilized to regularize other instabilities as well. Namely, we present theories in which spontaneous growth is triggered by a ghost rather than a tachyon and numerically calculate stationary solutions of scalarized neutron stars in these theories. We speculate on the possibility of regularizing known divergent instabilities in certain gravity theories using our findings and discuss alternative theories of gravitation in which regularized instabilities may be present. Even though we study many specific examples, our main point is the recognition of regularized instabilities as a common theme and unifying mechanism in a vast array of gravity theories.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
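As a side note, the Tikhonov/L-curve machinery referred to above can be sketched in a few lines for a small dense system. This is a generic illustration (normal equations on an ill-conditioned Hilbert matrix), not the SXRIS implementation, which uses the generalized singular value decomposition; all names are hypothetical.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tikhonov(A, b, lam):
    """Tikhonov solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2 via normal equations."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[i][r] * A[i][c] for i in range(m)) + (lam * lam if r == c else 0.0)
            for c in range(n)] for r in range(n)]
    Atb = [sum(A[i][r] * b[i] for i in range(m)) for r in range(n)]
    return solve(AtA, Atb)

def lcurve_point(A, b, x):
    """(residual norm, solution norm): one point on the L-curve."""
    res = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i] for i in range(len(b))]
    return (sum(v * v for v in res) ** 0.5, sum(v * v for v in x) ** 0.5)
```

Scanning the regularization parameter and plotting the two norms returned by `lcurve_point` against each other traces the L-curve; its corner balances the growing residual norm against the shrinking solution norm.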
Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms
NASA Astrophysics Data System (ADS)
Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart
2008-03-01
Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as a strictly layered structure, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar or better reconstruction accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
Exact and approximate solutions to the oblique shock equations for real-time applications
NASA Technical Reports Server (NTRS)
Hartley, T. T.; Brandis, R.; Mossayebi, F.
1991-01-01
The derivation of exact solutions for determining the characteristics of an oblique shock wave in a supersonic flow is investigated. Specifically, an explicit expression for the oblique shock angle in terms of the free stream Mach number, the centerbody deflection angle, and the ratio of specific heats is derived. A simpler approximate solution is obtained and compared to the exact solution. The primary objective of obtaining these solutions is to provide a fast algorithm that can run in a real-time environment.
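The explicit expression itself is not reproduced in the abstract, but the underlying θ-β-M relation is standard; a minimal sketch that recovers the weak-branch shock angle numerically (by bisection rather than a closed form, so slower than the real-time algorithms the paper targets) might look like this. Function names are hypothetical.

```python
import math

def deflection_angle(M, beta, gamma=1.4):
    """Flow deflection angle theta (rad) behind an oblique shock of angle beta (rad),
    from the standard theta-beta-M relation for a calorically perfect gas."""
    return math.atan(2.0 / math.tan(beta)
                     * (M * M * math.sin(beta) ** 2 - 1.0)
                     / (M * M * (gamma + math.cos(2.0 * beta)) + 2.0))

def weak_shock_angle(M, theta, gamma=1.4):
    """Weak-branch shock angle beta (rad) for deflection theta (rad), by bisection
    on the monotone branch between the Mach angle and the max-deflection angle."""
    mu = math.asin(1.0 / M)  # Mach angle: zero deflection
    grid = [mu + (math.pi / 2 - mu) * i / 2000 for i in range(2001)]
    b_max = max(grid, key=lambda b: deflection_angle(M, b, gamma))
    if theta > deflection_angle(M, b_max, gamma):
        raise ValueError("detached shock: deflection exceeds theta_max")
    lo, hi = mu, b_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if deflection_angle(M, mid, gamma) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For M = 3, a 20° deflection, and γ = 1.4, `weak_shock_angle` returns approximately 37.8°, consistent with tabulated θ-β-M values.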
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
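As context for the regularized inverse filtering above, a classical non-blind, Tikhonov-regularized inverse filter can be written in a few lines. This baseline assumes the blur kernel is known and uses a quadratic penalty rather than the paper's star-norm/total-variation model; everything here is an illustrative sketch with hypothetical names.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (fine for tiny demos)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def idft(X):
    """Inverse DFT, returning the real part (the signals here are real)."""
    n = len(X)
    return [sum(X[i] * cmath.exp(2j * cmath.pi * i * k / n) for i in range(n)).real / n
            for k in range(n)]

def cconv(u, h):
    """Circular convolution of two equal-length sequences."""
    n = len(u)
    return [sum(u[(i - k) % n] * h[k] for k in range(n)) for i in range(n)]

def regularized_inverse_filter(d, h, lam=1e-6):
    """Non-blind Tikhonov-regularized inverse filtering in the DFT domain:
    X(w) = conj(H(w)) D(w) / (|H(w)|^2 + lam)."""
    D, H = dft(d), dft(h)
    X = [H[i].conjugate() * D[i] / (abs(H[i]) ** 2 + lam) for i in range(len(d))]
    return idft(X)
```

With a noise-free blurred signal and a kernel whose DFT has no zeros, a small `lam` recovers the input almost exactly; on noisy data, `lam` trades residual fit against noise amplification at frequencies where |H| is small.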
Approximated analytical solution to an Ebola optimal control problem
NASA Astrophysics Data System (ADS)
Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.
2016-11-01
An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.
Second-order numerical solution of time-dependent, first-order hyperbolic equations
NASA Technical Reports Server (NTRS)
Shah, Patricia L.; Hardin, Jay
1995-01-01
A finite difference scheme is developed to find an approximate solution of two similar hyperbolic equations, namely a first-order plane wave and spherical wave problem. Finite difference approximations are made for both the space and time derivatives. The result is a conditionally stable equation yielding an exact solution when the Courant number is set to one.
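The conditional-stability and exactness claims above are easiest to see in the simplest relative of such schemes: first-order upwind differencing for the one-way wave equation (a hedged stand-in here, not the paper's second-order scheme). At Courant number C = 1 the update reduces to an exact shift of the grid values.

```python
def upwind_step(u, courant):
    """One first-order upwind step for u_t + a u_x = 0 on a periodic grid,
    with Courant number C = a*dt/dx.  Python's u[j-1] wraps at j = 0,
    which gives periodic boundaries for free."""
    return [u[j] - courant * (u[j] - u[j - 1]) for j in range(len(u))]

def advect(u, courant, steps):
    """Advance the profile a given number of time steps."""
    for _ in range(steps):
        u = upwind_step(u, courant)
    return u
```

With `courant=1.0` the profile is translated without distortion; with `courant=0.5` the same profile spreads and its peak decays, the familiar numerical diffusion of first-order upwinding (and for `courant>1` the scheme is unstable).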
Finlay, Andrea K.; White, Helene R.; Mun, Eun-Young; Cronley, Courtney C.; Lee, Chioun
2011-01-01
Background Although there are significant differences in prevalence of substance use between African-American and White adolescents, few studies have examined racial differences in developmental patterns of substance use, especially during the important developmental transition from adolescence to young adulthood. This study examines racial differences in trajectories of heavy drinking and regular marijuana use from adolescence into young adulthood. Methods A community-based sample of non-Hispanic African-American (n = 276) and non-Hispanic White (n = 211) males was analyzed to identify trajectories from ages 13 through 24. Results Initial analyses indicated race differences in heavy drinking and regular marijuana use trajectories. African Americans were more likely than Whites to be members of the nonheavy drinkers/nondrinkers group and less likely to be members of the early-onset heavy drinkers group. The former were also more likely than the latter to be members of the late-onset regular marijuana use group. Separate analyses by race indicated differences in heavy drinking for African Americans and Whites. A 2-group model for heavy drinking fit best for African Americans, whereas a 4-group solution fit best for Whites. For regular marijuana use, a similar 4-group solution fit for both races, although group proportions differed. Conclusions Within-race analyses indicated that there were clear race differences in the long-term patterns of alcohol use; regular marijuana use patterns were more similar. Extended follow ups are needed to examine differences and similarities in maturation processes for African-American and White males. For both races, prevention and intervention efforts are necessary into young adulthood. PMID:21908109
Stresses and deformations in cross-ply composite tubes subjected to a uniform temperature change
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Cooper, D. E.; Cohen, D.
1986-01-01
This study investigates the effects of a uniform temperature change on the stresses and deformations of composite tubes and determines the accuracy of an approximate solution based on the principle of complementary virtual work. Interest centers on tube response away from the ends and so a planar elasticity approach is used. For the approximate solution a piecewise linear variation of stresses with the radial coordinate is assumed. The results from the approximate solution are compared with the elasticity solution. The stress predictions agree well, particularly peak interlaminar stresses. Surprisingly, the axial deformations also agree well, despite the fact that the deformations predicted by the approximate solution do not satisfy the interface displacement continuity conditions required by the elasticity solution. The study shows that the axial thermal expansion coefficient of tubes with a specific number of axial and circumferential layers depends on the stacking sequence. This is in contrast to classical lamination theory, which predicts that the expansion will be independent of the stacking arrangement. As expected, the sign and magnitude of the peak interlaminar stresses depend on stacking sequence. For tubes with a specific number of axial and circumferential layers, thermally induced interlaminar stresses can be controlled by altering stacking arrangement.
Regularity of random attractors for fractional stochastic reaction-diffusion equations on R^n
NASA Astrophysics Data System (ADS)
Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han
2018-06-01
We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in H^s(R^n) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in H^s(R^n) and attracts all tempered random subsets of L^2(R^n) with respect to the norm of H^s(R^n). The main difficulty is to show the pullback asymptotic compactness of solutions in H^s(R^n) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.
Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms
NASA Astrophysics Data System (ADS)
Dymond, K. F.; Budzien, S. A.; Hei, M. A.
2017-03-01
We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach, in which the regularization is based on a partial differential equation.
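For orientation, the Richardson-Lucy algorithm named above can be sketched in one dimension with a circular blur operator. This toy version (hypothetical names, dense convolution instead of the sparse tomography matrices used in the paper) shows the property the abstract emphasizes: the update is multiplicative, so nonnegativity of the solution is preserved automatically.

```python
def cconv(u, h):
    """Circular convolution of two equal-length sequences."""
    n = len(u)
    return [sum(u[(i - k) % n] * h[k] for k in range(n)) for i in range(n)]

def richardson_lucy(d, h, iters=300):
    """Richardson-Lucy iteration x <- x * K^T(d / Kx) for a circular blur K.
    Updates are multiplicative, so a nonnegative start stays nonnegative."""
    n = len(d)
    h_adj = [h[(-k) % n] for k in range(n)]  # adjoint of circular convolution: flipped kernel
    x = [sum(d) / n] * n                     # flat nonnegative initial estimate
    for _ in range(iters):
        est = cconv(x, h)
        ratio = [d[i] / max(est[i], 1e-300) for i in range(n)]
        x = [x[i] * c for i, c in enumerate(cconv(ratio, h_adj))]
    return x
```

Because the kernel sums to one, each iteration conserves total flux exactly, and on noise-free data the iterates sharpen the blurred spikes back toward their true locations.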
Electroencephalography in ellipsoidal geometry with fourth-order harmonics.
Alcocer-Sosa, M; Gutierrez, D
2016-08-01
We present a solution to the electroencephalography (EEG) forward problem of computing the scalp electric potentials for the case when the head's geometry is modeled using a four-shell ellipsoidal geometry and the brain sources with an equivalent current dipole (ECD). The proposed solution includes terms up to the fourth-order ellipsoidal harmonics, and we compare this new approximation against those that only considered up to second- and third-order harmonics. Our comparisons use as reference a solution in which a tessellated volume approximates the head and the forward problem is solved through the boundary element method (BEM). We also assess the solution to the inverse problem of estimating the magnitude of an ECD through different harmonic approximations. Our results show that the fourth-order solution provides a better estimate of the ECD in comparison to lesser-order ones.
NASA Astrophysics Data System (ADS)
Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo
2005-08-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.
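The polynomial-time solvable class mentioned above can be illustrated with the directed-path problem itself: a row-by-row dynamic-programming (transfer-matrix) sweep that carries forward only the best path weight into each site. This minimal sketch uses hypothetical names.

```python
def best_path_weight(grid):
    """Maximum total weight of a directed path from the top row to the bottom row
    of a grid of site weights, moving one row down per step to the same column or
    an adjacent one.  Only the best weight into each site of the current row is
    carried forward: the dynamic-programming / transfer-matrix idea."""
    best = list(grid[0])
    for row in grid[1:]:
        best = [w + max(best[max(j - 1, 0):j + 2]) for j, w in enumerate(row)]
    return max(best)
```

The sweep costs O(rows × columns), versus the exponentially many directed paths; keeping the best k weights per site instead of one extends the same idea to the high-ranking solution lists discussed above.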
Boundedness and almost Periodicity in Time of Solutions of Evolutionary Variational Inequalities
NASA Astrophysics Data System (ADS)
Pankov, A. A.
1983-04-01
In this paper existence theorems are obtained for the solutions of abstract parabolic variational inequalities, which are bounded with respect to time (in the Stepanov and L^∞ norms). The regularity and almost periodicity properties of such solutions are studied. Theorems are also established concerning their solvability in spaces of Besicovitch almost periodic functions. The majority of the results are obtained without any compactness assumptions. Bibliography: 30 titles.
Fabrication of solution-processed InSnZnO/ZrO2 thin film transistors.
Hwang, Soo Min; Lee, Seung Muk; Choi, Jun Hyuk; Lim, Jun Hyung; Joo, Jinho
2013-11-01
We fabricated InSnZnO (ITZO) thin-film transistors (TFTs) with a high-permittivity (κ) ZrO2 gate insulator using a solution process and explored the microstructure and electrical properties. ZrO2 and ITZO (In:Sn:Zn = 2:1:1) precursor solutions were deposited using consecutive spin-coating and drying steps on highly doped p-type Si substrate, followed by annealing at 700 °C in ambient air. The ITZO/ZrO2 TFT device showed n-channel depletion-mode characteristics, and it possessed a high saturation mobility of approximately 9.8 cm²/V·s, a small subthreshold voltage swing of approximately 2.3 V/decade, and a negative V(TH) of approximately -1.5 V, but a relatively low on/off current ratio of approximately 10³. These results were thought to be due to the use of the high-κ crystallized ZrO2 dielectric (κ ≈ 21.8) as the gate insulator, which could permit low-voltage operation of the solution-processed ITZO TFT devices for applications to high-throughput, low-cost, flexible and transparent electronics.
SKYNET: an efficient and robust neural network training tool for machine learning in astronomy
NASA Astrophysics Data System (ADS)
Graff, Philip; Feroz, Farhan; Hobson, Michael P.; Lasenby, Anthony
2014-06-01
We present the first public release of our generic neural network training algorithm, called SKYNET. This efficient and robust machine learning tool is able to train large and deep feed-forward neural networks, including autoencoders, for use in a wide range of supervised and unsupervised learning applications, such as regression, classification, density estimation, clustering and dimensionality reduction. SKYNET uses a `pre-training' method to obtain a set of network parameters that has empirically been shown to be close to a good solution, followed by further optimization using a regularized variant of Newton's method, where the level of regularization is determined and adjusted automatically; the latter uses second-order derivative information to improve convergence, but without the need to evaluate or store the full Hessian matrix, by using a fast approximate method to calculate Hessian-vector products. This combination of methods allows for the training of complicated networks that are difficult to optimize using standard backpropagation techniques. SKYNET employs convergence criteria that naturally prevent overfitting, and also includes a fast algorithm for estimating the accuracy of network outputs. The utility and flexibility of SKYNET are demonstrated by application to a number of toy problems, and to astronomical problems focusing on the recovery of structure from blurred and noisy images, the identification of gamma-ray bursters, and the compression and denoising of galaxy images. The SKYNET software, which is implemented in standard ANSI C and fully parallelized using MPI, is available at http://www.mrao.cam.ac.uk/software/skynet/.
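The Hessian-vector trick mentioned above (curvature information without forming or storing the full Hessian) is commonly implemented either exactly via an R-operator or approximately by differencing the gradient. The following is a generic finite-difference sketch, not SKYNET's actual implementation (which is in C); names are hypothetical.

```python
def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H(x) v without forming the Hessian, via a forward difference
    of the gradient along direction v:  Hv ~ (g(x + eps*v) - g(x)) / eps."""
    g0 = grad(x)
    g1 = grad([x[i] + eps * v[i] for i in range(len(x))])
    return [(g1[i] - g0[i]) / eps for i in range(len(x))]
```

For a quadratic objective the gradient is linear, so the forward difference is exact up to rounding; for general networks a central difference or an exact R-operator gives better accuracy at the same O(gradient) cost per product.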
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Key, Kerry; Bodin, Thomas; Myer, David; Constable, Steven
2014-12-01
We apply a reversible-jump Markov chain Monte Carlo method to sample the Bayesian posterior model probability density function of 2-D seafloor resistivity as constrained by marine controlled source electromagnetic data. This density function of earth models conveys information on which parts of the model space are illuminated by the data. Whereas conventional gradient-based inversion approaches require subjective regularization choices to stabilize this highly non-linear and non-unique inverse problem and provide only a single solution with no model uncertainty information, the method we use entirely avoids model regularization. The result of our approach is an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. We represent models in 2-D using a Voronoi cell parametrization. To make the 2-D problem practical, we use a source-receiver common midpoint approximation with 1-D forward modelling. Our algorithm is transdimensional and self-parametrizing where the number of resistivity cells within a 2-D depth section is variable, as are their positions and geometries. Two synthetic studies demonstrate the algorithm's use in the appraisal of a thin, segmented, resistive reservoir which makes for a challenging exploration target. As a demonstration example, we apply our method to survey data collected over the Scarborough gas field on the Northwest Australian shelf.
Axisymmetric inertial modes in a spherical shell at low Ekman numbers
NASA Astrophysics Data System (ADS)
Rieutord, M.; Valdettaro, L.
2018-06-01
We investigate the asymptotic properties of axisymmetric inertial modes propagating in a spherical shell when viscosity tends to zero. We identify three kinds of eigenmodes whose eigenvalues follow very different laws as the Ekman number $E$ becomes very small. First are modes associated with attractors of characteristics that are made of thin shear layers closely following the periodic orbit traced by the characteristic attractor. Second are modes made of shear layers that connect the critical latitude singularities of the two hemispheres of the inner boundary of the spherical shell. Third are quasi-regular modes associated with the frequency of neutral periodic orbits of characteristics. We thoroughly analyse a subset of attractor modes for which numerical solutions point to an asymptotic law governing the eigenvalues. We show that three length scales proportional to $E^{1/6}$, $E^{1/4}$ and $E^{1/3}$ control the shape of the shear layers that are associated with these modes. These scales point out the key role of the small parameter $E^{1/12}$ in these oscillatory flows. With a simplified model of the viscous Poincaré equation, we can give an approximate analytical formula that reproduces the velocity field in such shear layers. Finally, we also present an analysis of the quasi-regular modes whose frequencies are close to $\sin(\pi/4)$ and explain why a fluid inside a spherical shell cannot respond to any periodic forcing at this frequency when viscosity vanishes.
Effects of CO2/HCO3- in perilymph on the endocochlear potential in guinea pigs.
Nimura, Yoshitsugu; Mori, Yoshiaki; Inui, Takaki; Sohma, Yoshiro; Takenaka, Hiroshi; Kubota, Takahiro
2007-02-01
The effect of CO2/HCO3- on the endocochlear potential (EP) was examined by using both ion-selective and conventional microelectrodes and the endolymphatic or perilymphatic perfusion technique. The main findings were as follows: (i) A decrease in the EP from approximately +75 to approximately +35 mV was produced by perilymphatic perfusion with CO2/HCO3--free solution, a decrease accompanied by an increase in the endolymphatic pH (ΔpHe, approximately 0.4). (ii) Perilymphatic perfusion with a solution containing 20 mM NH4Cl produced a decrease in the EP (ΔEP, approximately 20 mV) with an increase in the pHe (ΔpHe, approximately 0.2), whereas switching the perfusion solution from the NH4Cl solution to a 5% CO2/25 mM HCO3- solution produced a gradual increase in the EP to the control level with the concomitant recovery of the pHe. (iii) Perfusion with a solution of high or low HCO3- at a constant CO2 level within 10 min produced no significant changes in the EP. (iv) Perfusion of the perilymph with 10 microg/ml nifedipine suppressed the transient asphyxia-induced decrease in EP slightly, but not significantly. (v) By contrast, the administration of 1 microg/ml nifedipine via the endolymph significantly inhibited the reduction in the EP induced by transient asphyxia or by perilymphatic perfusion with CO2/HCO3--free or 20 mM NH4Cl solution. These findings suggest that the effect of CO2 removal from the perilymphatic perfusion solution on the EP may be mediated by an increase in cytosolic Ca2+ concentration induced by an elevation of cytosolic pH in endolymphatic surface cells.
OPTRAN- OPTIMAL LOW THRUST ORBIT TRANSFERS
NASA Technical Reports Server (NTRS)
Breakwell, J. V.
1994-01-01
OPTRAN is a collection of programs that solve the problem of optimal low thrust orbit transfers between non-coplanar circular orbits for spacecraft with chemical propulsion systems. The programs are set up to find Hohmann-type solutions, with burns near the perigee and apogee of the transfer orbit. They will solve both fairly long burn-arc transfers and "divided-burn" transfers. Program modeling includes a spherical earth gravity model and propulsion system models for either constant thrust or constant acceleration. The solutions obtained are optimal with respect to fuel use: i.e., the final mass of the spacecraft is maximized with respect to the controls. The controls are the direction of thrust and the thrust on/off times. Two basic types of programs are provided in OPTRAN. The first type is for "exact solutions" and produces complete, exact time-histories. The exact spacecraft position, velocity, and optimal thrust direction are given throughout the maneuver, as are the optimal thrust switch points, the transfer time, and the fuel costs. Exact solution programs are provided in two versions for non-coplanar transfers and in a fast version for coplanar transfers. The second basic type is for "approximate solutions" and produces approximate information on the transfer time and fuel costs. The approximate solution is used to estimate initial conditions for the exact solution. It can be used in divided-burn transfers to find the best number of burns with respect to time. The approximate solution is useful by itself in relatively efficient, short burn-arc transfers. These programs are written in FORTRAN 77 for batch execution and have been implemented on a DEC VAX series computer, with the largest program having a central memory requirement of approximately 54K of 8 bit bytes. The OPTRAN programs were developed in 1983.
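OPTRAN's FORTRAN source is not reproduced here, but the Hohmann-type transfers its solutions approximate have a simple impulsive limiting case. The sketch below computes the coplanar two-burn delta-v budget and transfer time for an illustrative LEO-to-GEO transfer; OPTRAN's actual finite-burn, non-coplanar solutions refine this, and all parameter values are assumptions for the example.

```python
import math

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def hohmann_dv(r1, r2):
    """Impulsive coplanar Hohmann transfer between circular orbits of radii
    r1 and r2 (km); returns the two burn delta-vs (km/s) and transfer time (s)."""
    a_t = 0.5 * (r1 + r2)                        # transfer-ellipse semi-major axis
    v1 = math.sqrt(MU / r1)                      # initial circular speed
    v2 = math.sqrt(MU / r2)                      # final circular speed
    vp = math.sqrt(MU * (2.0 / r1 - 1.0 / a_t))  # transfer speed at r1 (first burn)
    va = math.sqrt(MU * (2.0 / r2 - 1.0 / a_t))  # transfer speed at r2 (second burn)
    t_transfer = math.pi * math.sqrt(a_t ** 3 / MU)  # half the ellipse period
    return vp - v1, v2 - va, t_transfer

dv1, dv2, t = hohmann_dv(6678.0, 42164.0)  # ~300 km altitude LEO to GEO
# total delta-v is about 3.9 km/s and the coast takes roughly 5.3 hours
```

A finite low-thrust burn arc spreads each impulse over part of the orbit, which is why OPTRAN distinguishes exact time-histories from this kind of quick estimate.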
ERIC Educational Resources Information Center
Johannessen, Kim
2010-01-01
An analytic approximation of the solution to the differential equation describing the oscillations of a simple pendulum at large angles and with initial velocity is discussed. In the derivation, a sinusoidal approximation has been applied, and an analytic formula for the large-angle period of the simple pendulum is obtained, which also includes…
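The exact large-angle period against which such analytic approximations are checked involves the complete elliptic integral K, which is cheap to evaluate via the arithmetic-geometric mean. The sketch below treats release from rest (the abstract's nonzero-initial-velocity case would need an effective amplitude); it recovers the well-known ~18% period increase at a 90-degree amplitude.

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean, used to evaluate K(k) = pi / (2 AGM(1, sqrt(1-k^2)))."""
    while abs(a - b) > tol:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def pendulum_period(theta0, L=1.0, g=9.81):
    """Exact period of a simple pendulum released from rest at amplitude
    theta0 (radians): T = (4 / omega0) * K(sin(theta0 / 2))."""
    k = math.sin(0.5 * theta0)
    K = math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
    return 4.0 * K / math.sqrt(g / L)

T0 = 2.0 * math.pi / math.sqrt(9.81)        # small-angle period for L = 1 m
ratio = pendulum_period(math.pi / 2) / T0   # ~1.18 at a 90-degree amplitude
```

Any closed-form large-angle formula of the kind the article derives can be benchmarked against this exact value across amplitudes.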
NASA Astrophysics Data System (ADS)
La Mura, Cristina; Gholami, Vahid; Panza, Giuliano F.
2013-04-01
In order to enable realistic and reliable earthquake hazard assessment and estimation of the ground motion response to an earthquake, three-dimensional velocity models have to be considered. The propagation of seismic waves in complex laterally varying 3D layered structures is a complicated process. Analytical solutions of the elastodynamic equations for such types of media are not known. The most common approaches to the formal description of seismic wavefields in such complex structures are methods based on direct numerical solutions of the elastodynamic equations, e.g. the finite-difference and finite-element methods, and approximate asymptotic methods. In this work, we present an innovative methodology for computing synthetic seismograms, complete with the main direct, refracted, and converted phases and surface waves, in three-dimensional anelastic models, based on the combination of the Modal Summation technique with Asymptotic Ray Theory in the framework of the WKBJ approximation. The three-dimensional models are constructed using a set of vertically heterogeneous sections (1D structures) that are juxtaposed on a regular grid. The distribution of these sections in the grid is done in such a way as to fulfill the requirement of weak lateral inhomogeneity, the condition of applicability of the WKBJ approximation: the lateral gradient of the parameters characterizing the 1D structure has to be small with respect to the prevailing wavelength. The new method has been validated by comparing synthetic seismograms with the available records of three different earthquakes in three different regions: the Kanto basin (Japan), shaken by the 1990 Odawara earthquake (Mw = 5.1); Romanian territory, shaken by the 30 May 1990 Vrancea intermediate-depth earthquake (Mw = 6.9); and Iranian territory, affected by the 26 December 2003 Bam earthquake (Mw = 6.6).
Besides being a useful tool for the assessment of seismic hazard and for seismic risk reduction, the method is highly efficient: once the study region is identified and the 3D model is constructed, the computation, at each station, of the three components of the synthetic signal (displacement, velocity, and acceleration) takes less than 3 hours on a 2 GHz CPU.
NASA Technical Reports Server (NTRS)
Thate, Robert
2012-01-01
The modular flooring system (MFS) was developed to provide a portable, modular, durable carpeting solution for NASA's Robotics Alliance Project's (RAP) outreach efforts. It was also designed to improve and replace a modular flooring system that was too heavy for safe use and transportation. The MFS was developed for use as the flooring for various robotics competitions that RAP utilizes to meet its mission goals. One of these competitions, the FIRST Robotics Competition (FRC), currently uses two massive rolls of broadloom carpet for the foundation of the arena in which the robots are contained during the competition. The area of the arena is approximately 30 by 72 ft (approximately 9 by 22 m). This carpet is very cumbersome and requires large-capacity vehicles, and handling equipment and personnel, to transport and deploy. The broadloom carpet sustains severe abuse from the robots during a regular three-day competition, and as a result, the carpet is not used again for competition. Similarly, broadloom carpets used for trade shows at convention centers around the world are typically discarded after only one use. This innovation provides a green solution to this wasteful practice. Each of the flooring modules in the previous system weighed 44 lb (approximately 20 kg). The improvements in the overall design of the system reduce the weight of each module by approximately 22 lb (approximately 10 kg) (50%), and utilize an improved "module-to-module" connection method that is superior to the previous system. The MFS comprises 4-by-4-ft (approximately 1.2-by-1.2-m) carpet module assemblies that utilize commercially available carpet tiles that are bonded to a lightweight substrate. The substrate surface opposite from the carpeted surface has a module-to-module connecting interface that allows for the modules to be connected, one to the other, as the modules are constructed. This connection is hidden underneath the modules, creating a smooth, co-planar flooring surface.
The modules are stacked and strapped onto durable, commercially available drywall carts for storage and/or transportation. This method of storage and transportation makes it very convenient and safe when handling large quantities of modules.
Shahbazi, Mohammad; Saranlı, Uluç; Babuška, Robert; Lopes, Gabriel A D
2016-12-05
This paper introduces approximate time-domain solutions to the otherwise non-integrable double-stance dynamics of the 'bipedal' spring-loaded inverted pendulum (B-SLIP) in the presence of non-negligible damping. We first introduce an auxiliary system whose behavior under certain conditions is approximately equivalent to the B-SLIP in double stance. Then, we derive approximate solutions to the dynamics of the new system following two different methods: (i) an updated-momentum approach that can deal with both the lossy and lossless B-SLIP models, and (ii) a perturbation-based approach, from which we derive a solution only for the lossless case. The prediction performance of each method is characterized via a comprehensive numerical analysis. The derived representations are computationally very efficient compared to numerical integration and, hence, are suitable for online planning, increasing the autonomy of walking robots. Two application examples of walking gait control are presented. The proposed solutions can serve as instrumental tools in various fields, such as control in legged robotics and human motion understanding in biomechanics.
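The double-stance B-SLIP equations are not given in the abstract, but the numerical-integration baseline that the approximate solutions are compared against can be sketched for the simpler single-leg damped SLIP stance phase, written in polar coordinates about the foot. All parameter values and the stance model itself are illustrative assumptions, not the paper's formulation.

```python
import math

M, K, C, R0, G = 80.0, 15000.0, 100.0, 1.0, 9.81  # mass, spring, damper, rest length, gravity

def deriv(s):
    """Damped SLIP stance dynamics; state s = (r, dr, theta, dtheta),
    with theta the leg angle from vertical and r the leg length."""
    r, dr, th, dth = s
    ddr = r * dth**2 - G * math.cos(th) - (K / M) * (r - R0) - (C / M) * dr
    ddth = (G * math.sin(th) - 2.0 * dr * dth) / r
    return (dr, ddr, dth, ddth)

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + (h / 6.0) * (a + 2*b + 2*c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

def energy(s):
    """Total mechanical energy; the damper removes it at rate C * dr^2."""
    r, dr, th, dth = s
    return (0.5 * M * (dr**2 + (r * dth)**2)
            + M * G * r * math.cos(th) + 0.5 * K * (r - R0)**2)

s = (0.97, -0.5, 0.15, 1.0)   # compressed leg, still compressing, rotating forward
E0 = energy(s)
for _ in range(200):          # 0.2 s of stance at h = 1 ms
    s = rk4_step(s, 1e-3)
```

Closed-form approximations like the paper's are valuable precisely because stepping such an integrator inside an online gait planner is comparatively expensive.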
Reproduction of exact solutions of Lipkin model by nonlinear higher random-phase approximation
NASA Astrophysics Data System (ADS)
Terasaki, J.; Smetana, A.; Šimkovic, F.; Krivoruchenko, M. I.
2017-10-01
It is shown that the random-phase approximation (RPA) method with its nonlinear higher generalization, which was previously considered an approximation except in a very limited case, reproduces the exact solutions of the Lipkin model. The nonlinear higher RPA is based on an equation that is nonlinear in the eigenvectors and includes many-particle-many-hole components in the creation operator of the excited states. We demonstrate the exact character of the solutions analytically for the particle number N = 2 and numerically for N = 8. This finding indicates that the nonlinear higher RPA is equivalent to the exact Schrödinger equation.
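For N = 2 the exact spectrum that the nonlinear higher RPA reproduces can be checked by brute force: in the j = N/2 = 1 multiplet, a common form of the Lipkin Hamiltonian, H = eps*Jz + (V/2)(J+^2 + J-^2), is a 3x3 matrix with eigenvalues 0 and ±sqrt(eps^2 + V^2). Conventions and signs vary between papers, so the sketch below is one illustrative choice, not necessarily the authors' exact parametrization.

```python
import numpy as np

eps, V = 1.0, 0.4                     # illustrative couplings

# Angular-momentum matrices in the j = 1 basis, ordered {|1,-1>, |1,0>, |1,1>}.
Jz = np.diag([-1.0, 0.0, 1.0])
Jp = np.array([[0.0, 0.0, 0.0],
               [np.sqrt(2.0), 0.0, 0.0],
               [0.0, np.sqrt(2.0), 0.0]])  # J+ |1,m> = sqrt(2) |1,m+1> here
Jm = Jp.T

# Lipkin Hamiltonian H = eps*Jz + (V/2)(J+^2 + J-^2); J+^2 couples |1,-1> to |1,1>.
H = eps * Jz + 0.5 * V * (Jp @ Jp + Jm @ Jm)
evals = np.linalg.eigvalsh(H)         # ascending: -sqrt(eps^2+V^2), 0, +sqrt(eps^2+V^2)
```

The m = 0 state decouples (eigenvalue 0), while the m = ±1 states mix through the two-particle-two-hole V term — the same structure the many-particle-many-hole RPA components must capture.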
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
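A second-order in time dissipative gradient system has the generic form x'' + eta*x' = -grad J(x), and a damped semi-implicit ("symplectic") Euler step is the simplest discretization of it. The sketch below minimizes a toy quadratic objective this way; the paper's actual functional, damping choice, and dynamically selected regularization parameter are not reproduced, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)          # SPD Hessian of the toy objective
b = rng.standard_normal(4)
grad = lambda x: A @ x - b       # gradient of J(x) = 0.5 x'Ax - b'x
x_star = np.linalg.solve(A, b)   # known minimizer, for comparison

eta, h = 1.5, 0.05               # damping coefficient and step size (illustrative)
x = np.zeros(4)
v = np.zeros(4)
for _ in range(2000):
    # Semi-implicit Euler: update the velocity with the current gradient,
    # then update the position with the *new* velocity.
    v = v + h * (-grad(x) - eta * v)
    x = x + h * v
# x has converged to the minimizer x_star
```

Compared with plain steepest descent (the first-order flow x' = -grad J), the momentum carried by v lets the trajectory traverse flat regions faster while the damping term dissipates energy so it settles at the minimizer.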
Analytic solutions for Long's equation and its generalization
NASA Astrophysics Data System (ADS)
Humi, Mayer
2017-12-01
Two-dimensional, steady-state, stratified, isothermal atmospheric flow over topography is governed by Long's equation. Numerical solutions of this equation were derived and used by several authors. In particular, these solutions were applied extensively to analyze the experimental observations of gravity waves. In the first part of this paper we derive an extension of this equation to non-isothermal flows. Then we devise a transformation that simplifies this equation. We show that this simplified equation admits solitonic-type solutions in addition to regular gravity waves. These new analytical solutions provide new insights into the propagation and amplitude of gravity waves over topography.
A framework with Cuckoo algorithm for discovering regular plans in mobile clients
NASA Astrophysics Data System (ADS)
Tsiligaridis, John
2017-09-01
In a mobile computing system, broadcasting has become a very interesting and challenging research issue. The server continuously broadcasts data to mobile users; the data can be inserted into customized-size relations and broadcast as a Regular Broadcast Plan (RBP) with multiple channels. Given the data size for each provided service, two algorithms, the Basic Regular Algorithm (BRA) and the Partition Value Algorithm (PVA), can provide static and dynamic RBP construction with multiple-constraint solutions, respectively. Servers have to define the data size of the services and can provide a feasible RBP working with many broadcasting plan operations. The operations become more complicated when there are many kinds of services and the sizes of the data sets are unknown to the server. To that end, a framework has been developed that also gives the ability to select low- or high-capacity channels for servicing. Theorems with new analytical results provide direct conditions that state the existence of solutions for the RBP problem with the compound criterion. Two kinds of solutions are provided: the equal and the non-equal subrelation solutions. The Cuckoo Search Algorithm (CS) with Lévy flight behavior has been selected for the optimization. The CS for RBP (CSRP) is developed by applying the theorems to the discovery of RBPs. An additional change to CS has been made in order to strengthen the local search. The CS can also discover RBPs with the minimum number of channels. With all of the above, modern servers can be upgraded to discover RBPs with fewer channels.
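The CSRP specifics (the theorems and the compound criterion) are not reproduced here, but the underlying Cuckoo Search metaheuristic with Lévy flights can be sketched generically on a toy sphere function. Step sizes, population size, and the abandonment fraction below are illustrative assumptions, not the paper's settings.

```python
import math, random

random.seed(0)
BETA = 1.5  # Levy index
# Mantegna's algorithm: scale for the numerator Gaussian of a Levy-stable step.
SIGMA_U = (math.gamma(1 + BETA) * math.sin(math.pi * BETA / 2)
           / (math.gamma((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2))) ** (1 / BETA)

def levy_step():
    """Heavy-tailed step of index BETA via Mantegna's algorithm."""
    u = random.gauss(0.0, SIGMA_U)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / BETA)

def sphere(x):  # toy objective to minimize
    return sum(xi * xi for xi in x)

DIM, N, PA, ALPHA = 5, 15, 0.2, 0.3  # dimension, nests, abandon fraction, step scale
nests = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
fit = [sphere(x) for x in nests]
init_best = min(fit)

for _ in range(2000):
    best = nests[min(range(N), key=lambda i: fit[i])]
    # Global search: a cuckoo takes a Levy flight biased toward the best nest.
    i = random.randrange(N)
    new = [x + ALPHA * levy_step() * (x - bx) for x, bx in zip(nests[i], best)]
    j = random.randrange(N)  # the new egg replaces a random nest if better
    if sphere(new) < fit[j]:
        nests[j], fit[j] = new, sphere(new)
    # Abandon a fraction PA of the worst nests and rebuild them at random.
    worst_first = sorted(range(N), key=lambda idx: fit[idx], reverse=True)
    for idx in worst_first[:int(PA * N)]:
        nests[idx] = [random.uniform(-5, 5) for _ in range(DIM)]
        fit[idx] = sphere(nests[idx])

best_val = min(fit)  # never worse than init_best: the best nest is never abandoned
```

In the paper this generic search is specialized so that each candidate encodes an RBP, with the compound criterion as the fitness.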
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pereyra, Brandon; Wendt, Fabian; Robertson, Amy
2017-03-09
The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).
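The diameter-to-wavelength criterion quoted above is easy to evaluate. The sketch below uses the deep-water dispersion relation and the first-order Morison inertia force per unit length, F = rho * Cm * (pi D^2 / 4) * max|du/dt|; the dimensions are illustrative for a monopile, not values from the study.

```python
import math

RHO, G = 1025.0, 9.81            # seawater density (kg/m^3), gravity (m/s^2)

def wavelength_deep(T):
    """Deep-water dispersion relation: lambda = g T^2 / (2 pi)."""
    return G * T * T / (2.0 * math.pi)

def morison_inertia_amplitude(D, H, T, Cm=2.0):
    """First-order Morison inertia force amplitude per unit length near the
    surface for a deep-water regular wave of height H and period T.
    Cm = 2 is the theoretical inertia coefficient for a circular cylinder;
    max|du/dt| = omega^2 H / 2 for linear wave kinematics."""
    omega = 2.0 * math.pi / T
    dudt_max = omega * omega * H / 2.0
    return RHO * Cm * (math.pi * D * D / 4.0) * dudt_max

D, H, T = 6.0, 2.0, 10.0         # monopile diameter, wave height, wave period
lam = wavelength_deep(T)         # ~156 m
ratio = D / lam                  # ~0.04, well below 0.2: strip theory should hold
F = morison_inertia_amplitude(D, H, T)  # ~23 kN per meter of submerged length
```

When D/lambda approaches 0.2, diffraction becomes significant and the PF solution must take over from this ST estimate, which is the breakdown the study quantifies.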
Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pereyra, Brandon; Wendt, Fabian; Robertson, Amy
2016-07-01
The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).
Mehraeen, Shahab; Dierks, Travis; Jagannathan, S; Crow, Mariesa L
2013-12-01
In this paper, the nearly optimal solution for discrete-time (DT) affine nonlinear control systems in the presence of partially unknown internal system dynamics and disturbances is considered. The approach is based on successive approximate solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which appears in optimal control. A successive approximation approach for updating the control and disturbance inputs of DT nonlinear affine systems is proposed. Moreover, sufficient conditions for the convergence of the approximate HJI solution to the saddle point are derived, and an iterative approach to approximate the HJI equation using a neural network (NN) is presented. Then, the requirement of full knowledge of the internal dynamics of the nonlinear DT system is relaxed by using a second NN online approximator. The result is a closed-loop optimal NN controller obtained via offline learning. A numerical example is provided illustrating the effectiveness of the approach.
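The successive-approximation idea can be seen in miniature on a linear special case, where the HJI equation reduces to a game-theoretic Riccati equation: iterate the quadratic value function V(x) = P x^2 for a scalar plant until the saddle point is reached. Everything below is a toy linear example with illustrative numbers, not the paper's NN scheme.

```python
import numpy as np

a, b, d = 0.9, 1.0, 0.2      # scalar plant: x+ = a x + b u + d w
q, r, gamma = 1.0, 1.0, 5.0  # state/control weights, disturbance attenuation level

P = 0.0
for _ in range(500):
    # Stage value q x^2 + r u^2 - gamma^2 w^2 + P (a x + b u + d w)^2 is
    # quadratic in (u, w); its saddle point solves a 2x2 linear system,
    # giving linear policies u = ku * x and w = kw * x.
    Mmat = np.array([[r + P * b * b,      P * b * d],
                     [P * b * d, -gamma**2 + P * d * d]])
    rhs = -np.array([P * a * b, P * a * d])
    ku, kw = np.linalg.solve(Mmat, rhs)
    a_cl = a + b * ku + d * kw                      # closed-loop pole
    P_new = q + r * ku**2 - gamma**2 * kw**2 + P * a_cl**2
    if abs(P_new - P) < 1e-12:                      # successive approximation converged
        break
    P = P_new
```

In the nonlinear DT setting the same fixed-point iteration runs on the full HJI value function, with an NN standing in for the quadratic form P x^2.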
Approximate Analytical Solutions for Hypersonic Flow Over Slender Power Law Bodies
NASA Technical Reports Server (NTRS)
Mirels, Harold
1959-01-01
Approximate analytical solutions are presented for two-dimensional and axisymmetric hypersonic flow over slender power law bodies. Both zero order (M approaches infinity) and first order (small but nonvanishing values of 1/(M(Delta))(sup 2)) solutions are presented, where M is free-stream Mach number and Delta is a characteristic slope. These solutions are compared with exact numerical integration of the equations of motion and appear to be accurate, particularly when the shock is relatively close to the body.
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
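The piecewise structure of the path is easiest to see in the simplest instance of application (a), projection onto a convex set: projecting y onto the nonnegative orthant with the exact penalty rho * sum_i max(-x_i, 0) has a closed-form solution path, piecewise linear in rho, that reaches the true projection at the finite value rho = max_i(-y_i). The sketch below is this closed-form special case, not the authors' general ODE path-following solver.

```python
def penalty_path(y, rho):
    """Minimizer of 0.5 * ||x - y||^2 + rho * sum_i max(-x_i, 0),
    the exact-penalty relaxation of projecting y onto {x >= 0}.
    Coordinatewise: x_i = y_i if y_i >= 0, else min(y_i + rho, 0),
    since at x_i = 0 the subdifferential [-y_i - rho, -y_i] contains 0
    exactly when rho >= -y_i."""
    return [yi if yi >= 0 else min(yi + rho, 0.0) for yi in y]

y = [2.0, -1.0, -3.0]
# Each negative coordinate moves linearly in rho, "hits" the constraint
# x_i = 0 at rho = -y_i, and then slides along it:
x_small = penalty_path(y, 0.5)  # -> [2.0, -0.5, -2.5]: no constraint active yet
x_mid = penalty_path(y, 2.0)    # -> [2.0, 0.0, -1.0]: first constraint reached
x_exact = penalty_path(y, 3.0)  # -> [2.0, 0.0, 0.0]: the true projection
```

For a general convex program no such closed form exists, which is why the article follows the path numerically, segment by segment, detecting where it hits, slides along, and exits constraints.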
Path Following in the Exact Penalty Method of Convex Programming
Zhou, Hua; Lange, Kenneth
2015-01-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044