Inverse problem of radiofrequency sounding of ionosphere
NASA Astrophysics Data System (ADS)
Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.
2016-01-01
An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the suggested algorithm has rather high efficiency; this is supported by data obtained at ionospheric stations of the so-called "AIS-M" type.
NASA Astrophysics Data System (ADS)
Alessandrini, Giovanni; de Hoop, Maarten V.; Gaburro, Romina
2017-12-01
We discuss the inverse problem of determining the, possibly anisotropic, conductivity of a body Ω ⊂ R^n when the so-called Neumann-to-Dirichlet map is locally given on a non-empty curved portion Σ of the boundary ∂Ω. We prove that anisotropic conductivities that are a priori known to be piecewise constant matrices on a given partition of Ω with curved interfaces can be uniquely determined in the interior from the knowledge of the local Neumann-to-Dirichlet map.
Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals
NASA Astrophysics Data System (ADS)
Loyola, D. G.
2017-12-01
Satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming, as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called the full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase, in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase, in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors, with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
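A minimal sketch of the two-phase FP-ILM idea may help: in the training phase, synthetic (state, radiance) pairs are generated with a forward model and an inversion operator is learned from them; in the operational phase, that operator is applied directly to new measurements. The toy forward model, the uniform sampling used in place of smart sampling, and ridge regression standing in for the paper's machine-learning techniques are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training phase: synthetic data from a (toy) forward model ---
def forward_model(state):
    """Stand-in for a radiative-transfer model: maps states to 'radiances'."""
    return np.column_stack([np.sin(state[:, 0]), state[:, 0] * state[:, 1], state[:, 1] ** 2])

states = rng.uniform(0.0, 2.0, size=(5000, 2))     # uniform draws stand in for smart sampling
radiances = forward_model(states) + 1e-3 * rng.standard_normal((5000, 3))

# Learn the inversion operator: here, ridge regression from radiances to states.
X = np.column_stack([radiances, np.ones(len(radiances))])   # add bias column
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ states)

# --- Operational phase: apply the learned operator to a new measurement ---
true_state = np.array([[1.3, 0.7]])
measurement = forward_model(true_state)
retrieved = np.column_stack([measurement, np.ones(1)]) @ W
print(true_state, retrieved)
```

The expensive radiative-transfer calls are thus confined to the offline training phase; the operational inversion is a single matrix product.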
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
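The sketching step itself is simple to illustrate: multiply the data and the forward operator by a short random matrix S (k rows, with k much smaller than the number of observations) and solve the reduced least-squares problem. A minimal sketch, assuming a linear toy forward operator rather than the full geostatistical machinery of PCGA/RGA:

```python
import numpy as np

rng = np.random.default_rng(1)

n_obs, n_par, k = 100_000, 50, 400        # many observations, few parameters, small sketch
H = rng.standard_normal((n_obs, n_par))   # linear forward operator (toy)
m_true = rng.standard_normal(n_par)
y = H @ m_true + 0.01 * rng.standard_normal(n_obs)

# Sketching matrix S (k x n_obs): a scaled Gaussian projection that reduces the
# observation dimension while approximately preserving the least-squares misfit.
S = rng.standard_normal((k, n_obs)) / np.sqrt(k)

# Solve the reduced problem min_m ||S H m - S y||^2 instead of the full one.
m_hat, *_ = np.linalg.lstsq(S @ H, S @ y, rcond=None)
print(np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))
```

All downstream linear algebra now involves k rows instead of n_obs, which is the source of the computational and memory savings described above.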
Zatsiorsky, Vladimir M.
2011-01-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method
Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...
2017-11-20
The inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. The geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as in reliable guidance to adjust the borehole position on the fly to reach one or more geological targets. This mathematical problem is not easy to solve: it requires finding an optimum solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted will have more detailed structures. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from the local minimum trap. Alternatively, stochastic optimizations are in general better at finding globally optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
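The Hybrid (Hamiltonian) Monte Carlo machinery referred to above can be sketched generically: momenta are drawn, Hamiltonian dynamics are integrated with a leapfrog scheme, and a Metropolis test corrects the integration error. This is a textbook HMC sketch on a hypothetical multimodal misfit, not the authors' geosteering implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def neg_log_post(m):
    """Toy non-convex negative log-posterior with several modes."""
    return 0.5 * np.sum((m**2 - 1.0) ** 2)

def grad_neg_log_post(m):
    return 2.0 * m * (m**2 - 1.0)

def hmc_step(m, eps=0.05, n_leap=30):
    p = rng.standard_normal(m.shape)                 # auxiliary momenta
    m_new, p_new = m.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics
    p_new = p_new - 0.5 * eps * grad_neg_log_post(m_new)
    for _ in range(n_leap - 1):
        m_new = m_new + eps * p_new
        p_new = p_new - eps * grad_neg_log_post(m_new)
    m_new = m_new + eps * p_new
    p_new = p_new - 0.5 * eps * grad_neg_log_post(m_new)
    # Metropolis accept/reject keeps the target distribution exact
    dH = (neg_log_post(m_new) + 0.5 * p_new @ p_new) - (neg_log_post(m) + 0.5 * p @ p)
    return m_new if np.log(rng.uniform()) < -dH else m

m, samples = np.zeros(2), []
for _ in range(5000):
    m = hmc_step(m)
    samples.append(m.copy())
print(np.mean(samples, axis=0))   # the chain should visit all modes, unlike a local optimizer
```

Because the proposal uses gradient information over long trajectories, HMC explores multimodal, high-dimensional posteriors far more efficiently than random-walk samplers, which is the property the abstract exploits.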
Polynomial compensation, inversion, and approximation of discrete time linear systems
NASA Technical Reports Server (NTRS)
Baram, Yoram
1987-01-01
The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
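A scalar (single-input, single-output) illustration of this construction: stacking shifted copies of the system's weighting pattern into a convolution matrix, the least-squares polynomial compensator follows from the normal equation. The multivariable theory in the paper works with matrix coefficients; the numbers here are an assumption for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

# Weighting pattern (impulse response) of the given discrete-time system
h = np.array([1.0, 0.5, 0.25, 0.125])
L = 3                                   # order of the compensating polynomial system
d = np.zeros(len(h) + L); d[0] = 1.0    # desired overall response: unit impulse (inversion)

# Convolution matrix: (H @ c)[k] = sum_j h[k - j] * c[j]
H = toeplitz(np.concatenate([h, np.zeros(L)]),
             np.concatenate([[h[0]], np.zeros(L)]))

# Normal equation H^T H c = H^T d yields the optimal polynomial coefficients
c = np.linalg.solve(H.T @ H, H.T @ d)
print(np.round(c, 4), np.round(H @ c, 4))   # compensator and achieved response
```

The same normal-equation structure carries over to compensation and approximation: only the target sequence d changes.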
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content in a stochastic inversion. On the other hand, in the deterministic approach this is referred to as the model resolution matrix (MRM, Menke 1989). The analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degree of freedom from signal (DFS; stochastic) or degree of freedom in retrieval (DFR; deterministic). The literature offers no physical/mathematical explanation of why the trace of the matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES13. The stochastic information content calculation is based on the linear assumption. The validity of such mathematics in satellite inversion will be questioned, because it rests on nonlinear radiative transfer and ill-conditioned inverse problems. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C.D., 2000: Inverse Methods for Atmospheric Sounding: Theory and Practice. Singapore: World Scientific.
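For reference, the algebra in question is easy to reproduce in the linear Gaussian (optimal estimation) setting of Rodgers (2000): the averaging kernel is A = GK with G the retrieval gain matrix, and the DFS is its trace. The toy Jacobian and covariances below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear(ized) forward model y = K x + noise (toy Jacobian)
K = rng.standard_normal((8, 5))        # 8 channels, 5 state elements
Se = 0.1 * np.eye(8)                   # measurement-error covariance
Sa = 1.0 * np.eye(5)                   # a priori covariance

# Gain matrix of the regularized (optimal-estimation) retrieval
G = np.linalg.solve(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa),
                    K.T @ np.linalg.inv(Se))
A = G @ K                              # averaging kernel / model resolution matrix

print("DFS =", np.trace(A))            # the quantity whose interpretation is questioned above
```

The abstract's point is precisely that this trace is routinely reported without a physical or mathematical justification, and that the underlying linearity assumption is questionable for nonlinear, ill-conditioned satellite inversions.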
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require only matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the past decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
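The split formulation behind GPSR is compact enough to sketch: write x = u - v with u, v ≥ 0, so that min 0.5||Ax - y||² + τ||x||₁ becomes a smooth bound-constrained problem solvable by projected gradient steps that touch A only through matrix-vector products. This is a fixed-step variant for illustration; the published GPSR adds line searches (and a Barzilai-Borwein variant) on top of the same structure.

```python
import numpy as np

def gpsr(matvec, rmatvec, y, n, tau, step, n_iter=1000):
    """Gradient projection for min 0.5*||A x - y||^2 + tau*||x||_1 via the
    split x = u - v, u >= 0, v >= 0, using only matrix-vector products."""
    u, v = np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        r = matvec(u - v) - y                       # residual A x - y
        g = rmatvec(r)                              # gradient A^T r
        u = np.maximum(u - step * (g + tau), 0.0)   # projected gradient steps
        v = np.maximum(v - step * (-g + tau), 0.0)
    return u - v

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[[10, 50, 120]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(80)

step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)      # safe step from the Lipschitz constant
x_hat = gpsr(lambda x: A @ x, lambda r: A.T @ r, y, 200, tau=0.1, step=step)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])           # recovered support
```

In the EIT setting, matvec and rmatvec would wrap the (Jacobian of the) forward model, so the dense sensitivity matrix never needs to be stored explicitly.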
Inverse Scattering and Local Observable Algebras in Integrable Quantum Field Theories
NASA Astrophysics Data System (ADS)
Alazzawi, Sabina; Lechner, Gandalf
2017-09-01
We present a solution method for the inverse scattering problem for integrable two-dimensional relativistic quantum field theories, specified in terms of a given massive single particle spectrum and a factorizing S-matrix. An arbitrary number of massive particles transforming under an arbitrary compact global gauge group is allowed, thereby generalizing previous constructions of scalar theories. The two-particle S-matrix S is assumed to be an analytic solution of the Yang-Baxter equation with standard properties, including unitarity, TCP invariance, and crossing symmetry. Using methods from operator algebras and complex analysis, we identify sufficient criteria on S that imply the solution of the inverse scattering problem. These conditions are shown to be satisfied in particular by so-called diagonal S-matrices, but presumably also in other cases such as the O(N)-invariant nonlinear σ-models.
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed optimization algorithm, the so-called thermal exchange optimization (TEO) algorithm, is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested and the locations and extents of damages are identified with good accuracy.
NASA Astrophysics Data System (ADS)
Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.
Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
Geophysical Investigations at Pahute Mesa, Nevada.
1987-08-12
Kelley et al., 1976), the Aki-Larner method (Aki and Larner, 1970), and generalized ray theory (Helmberger et al., 1985), to name a few examples. These ... Three-dimensional calculations should be possible. Ferguson et al. (1988) have demonstrated that the so-called Parker-Oldenburg technique (Parker, 1972; Oldenburg, 1974) is effective in the inversion of large, three-dimensional problems. In this report an extension of the original formulation to ...
Specific Features in Measuring Particle Size Distributions in Highly Disperse Aerosol Systems
NASA Astrophysics Data System (ADS)
Zagaynov, V. A.; Vasyanovich, M. E.; Maksimenko, V. V.; Lushnikov, A. A.; Biryukov, Yu. G.; Agranovskii, I. E.
2018-06-01
The distribution of highly dispersed aerosols is studied. Particular attention is given to the diffusion dynamic approach, as it is the best way to determine particle size distribution. It is shown that the problem can be divided into two steps: directly measuring particle penetration through diffusion batteries and solving the inverse problem (obtaining a size distribution from the measured penetrations). No reliable way of solving the so-called inverse problem is found, but it can be done by introducing a parametrized size distribution (i.e., a gamma distribution). The integral equation is therefore reduced to a system of nonlinear equations that can be solved by elementary mathematical means. Further development of the method requires an increase in sensitivity (i.e., measuring the dimensions of molecular clusters with radioactive sources, along with the activity of diffusion battery screens).
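The parameterized reduction can be sketched end to end: assume a gamma size distribution, push it through a penetration model for each diffusion battery, and fit the two gamma parameters to the measured penetrations by nonlinear least squares. The penetration kernel and battery constants below are hypothetical stand-ins, not the apparatus described in the paper.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import least_squares
from scipy.stats import gamma

d = np.linspace(1.0, 100.0, 400)            # particle diameter grid (nm)

def penetration(d, a):
    """Hypothetical battery kernel: smaller particles diffuse faster
    and are captured more efficiently (deposition ~ d**-2)."""
    return np.exp(-a / d**2)

battery_consts = np.array([50.0, 200.0, 800.0, 3200.0])   # one per battery stage

def predicted(theta):
    shape, scale = theta
    f = gamma.pdf(d, a=shape, scale=scale)   # parameterized size distribution
    f = f / trapezoid(f, d)
    return np.array([trapezoid(penetration(d, a) * f, d) for a in battery_consts])

# Synthetic "measured" penetrations from a known distribution, then inversion
measured = predicted((4.0, 8.0)) + 0.005 * np.random.default_rng(5).standard_normal(4)
fit = least_squares(lambda th: predicted(th) - measured, x0=(2.0, 20.0),
                    bounds=([0.1, 0.1], [50.0, 100.0]))
print(fit.x)                                 # recovered (shape, scale) of the gamma law
```

Reducing the unknown function to two parameters is exactly what turns the ill-posed integral equation into a small, solvable system of nonlinear equations.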
NASA Astrophysics Data System (ADS)
Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio
2018-04-01
We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space, from a noisy observation vector y of its image through a known, possibly non-linear, map G. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well-known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which as we show coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
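For a linear forward map and a Besov B¹₁₁-type prior, the generalized Onsager-Machlup functional reduces to a least-squares misfit plus an ℓ¹ penalty on the (wavelet) coefficients, so the MAP estimate can be computed with a standard forward-backward (ISTA) iteration. The sketch below assumes the wavelet synthesis operator has been folded into the matrix A; the toy dimensions and the prior scale lam are illustrative assumptions.

```python
import numpy as np

def soft(z, t):
    """Soft thresholding: the proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def besov_map(A, y, sigma2, lam, n_iter=1000):
    """MAP estimate for min_c ||A c - y||^2 / (2*sigma2) + lam * ||c||_1,
    i.e. the Onsager-Machlup minimizer for a B^1_11-type prior on c."""
    L = np.linalg.norm(A, 2) ** 2 / sigma2        # Lipschitz constant of the misfit
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y) / sigma2
        c = soft(c - grad / L, lam / L)           # forward-backward (ISTA) step
    return c

rng = np.random.default_rng(6)
A = rng.standard_normal((60, 128))
c_true = np.zeros(128); c_true[[3, 17, 90]] = [2.0, -1.0, 1.5]   # sparse coefficients
y = A @ c_true + 0.05 * rng.standard_normal(60)

c_map = besov_map(A, y, sigma2=0.05**2, lam=100.0)
print(np.nonzero(np.abs(c_map) > 0.2)[0])         # support of the sparse MAP estimate
```

This is one concrete reading of the result above: in the non-parametric setting the MAP estimate is defined not through a density maximum (which does not exist in infinite dimensions) but through exactly this kind of variational problem.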
Acoustic Inversion in Optoacoustic Tomography: A Review
Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel
2013-01-01
Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: the optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so-called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
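The simplest member of the time-domain family is a delay-and-sum back-projection: each pixel accumulates the detector samples at the corresponding times of flight. Below is a minimal 2D sketch with point detectors on a circle; the geometry and the delta-pulse "signals" are assumptions for illustration, and the exact universal back-projection formula additionally filters the traces.

```python
import numpy as np

c = 1500.0                                   # speed of sound in tissue, m/s
n_det, n_t, fs = 128, 1024, 40e6             # detectors, time samples, sampling rate
angles = np.linspace(0, 2 * np.pi, n_det, endpoint=False)
det_xy = 0.02 * np.column_stack([np.cos(angles), np.sin(angles)])   # 2 cm ring

def das_reconstruct(sinogram, grid):
    """sinogram: (n_det, n_t) pressure traces; grid: (n_pix, 2) positions (m)."""
    img = np.zeros(len(grid))
    for k, xy in enumerate(det_xy):
        delay = np.linalg.norm(grid - xy, axis=1) / c         # time of flight
        idx = np.clip(np.round(delay * fs).astype(int), 0, n_t - 1)
        img += sinogram[k, idx]                               # sum over detectors
    return img

# Tiny demo: an idealized point absorber contributes a delta at the
# correct time of flight on every detector trace.
src = np.array([0.004, -0.002])
sino = np.zeros((n_det, n_t))
for k, xy in enumerate(det_xy):
    sino[k, int(np.linalg.norm(src - xy) / c * fs)] = 1.0

gx, gy = np.meshgrid(np.linspace(-0.01, 0.01, 81), np.linspace(-0.01, 0.01, 81))
grid = np.column_stack([gx.ravel(), gy.ravel()])
img = das_reconstruct(sino, grid).reshape(81, 81)
print(np.unravel_index(img.argmax(), img.shape))   # should peak near the source
```

Model-based algorithms, by contrast, discretize the forward operator explicitly and invert it iteratively, which is what equips them to handle the non-ideal scenarios (finite apertures, limited views, heterogeneities) listed above.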
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang
2015-06-01
This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm for identifying the unknown support of a thin penetrable electromagnetic inhomogeneity from scattered field data collected in the so-called multi-static response matrix in limited-view inverse scattering problems. The mathematical theory of MUSIC has been partially developed, e.g., in the full-view problem, for an unknown target of dielectric contrast or a perfectly conducting crack with the Dirichlet boundary condition (Transverse Magnetic, TM, polarization), and so on. Hence, we perform further research to analyze the MUSIC-type imaging functional and to verify some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order of the first kind. This relationship is based on a rigorous asymptotic expansion formula in the presence of a thin inhomogeneity with a smooth supporting curve. Various numerical simulation results are presented in order to support the identified structure of MUSIC. Although a priori information about the target is needed, we suggest a minimal condition on the range of incident and observation directions for applying MUSIC in the limited-view problem.
Genetics Home Reference: recombinant 8 syndrome
... with a change in chromosome 8 called an inversion. An inversion involves the breakage of a chromosome in two ... typically not lost as a result of this inversion in chromosome 8, so people usually do not ...
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
NASA Astrophysics Data System (ADS)
Avdyushev, Victor A.
2017-12-01
Orbit determination from a small sample of observations over a very short observed orbital arc is a strongly nonlinear inverse problem. In such problems an evaluation of orbital uncertainty due to random observation errors is greatly complicated, since the linear estimations conventionally used are no longer acceptable for describing the uncertainty even as a rough approximation. Nevertheless, if an inverse problem is weakly intrinsically nonlinear, then one can resort to the so-called method of disturbed observations (aka observational Monte Carlo). Previously, we showed that the weaker the intrinsic nonlinearity, the more efficient the method, i.e. the more accurately it enables one to stochastically simulate the orbital uncertainty, while it is strictly exact only when the problem is intrinsically linear. However, as we ascertained experimentally, its efficiency was found to be higher than that of other stochastic methods widely applied in practice. In the present paper we investigate the intrinsic nonlinearity in complicated inverse problems of Celestial Mechanics when orbits are determined from scarcely informative samples of observations, as typically occurs for recently discovered asteroids. To inquire into the question, we introduce an index of intrinsic nonlinearity. In asteroid problems it shows that the intrinsic nonlinearity can be strong enough to appreciably affect probabilistic estimates, especially for the very short observed orbital arcs that the asteroids travel on for about a hundredth of their orbital periods or less. As is known from regression analysis, the source of intrinsic nonlinearity is the nonflatness of the estimation subspace specified by a dynamical model in the observation space. Our numerical results indicate that when determining asteroid orbits it is actually very slight. However, in the parametric space the effect of intrinsic nonlinearity is exaggerated, mainly by the ill-conditioning of the inverse problem. Even so, as for the method of disturbed observations, we conclude that in practice it should still be entirely acceptable for adequately describing the orbital uncertainty since, from a geometrical point of view, the efficiency of the method depends directly only on the nonflatness of the estimation subspace, and it gets higher as the nonflatness decreases.
[EEG source localization using LORETA (low resolution electromagnetic tomography)].
Puskás, Szilvia
2011-03-30
Electroencephalography (EEG) has excellent temporal resolution, but its spatial resolution is poor. Different source localization methods exist to solve the so-called inverse problem, thus increasing the accuracy of spatial localization. This paper provides an overview of the history of source localization, and the main categories of techniques are discussed. LORETA (low resolution electromagnetic tomography) is introduced in detail: technical information is discussed and the localization properties of the LORETA method are compared to other inverse solutions. Validation of the method with different imaging techniques is also discussed. This paper reviews several publications using LORETA in both healthy persons and persons with different neurological and psychiatric diseases. Finally, possible future applications are discussed.
NASA Technical Reports Server (NTRS)
Backus, George
1987-01-01
Let R be the real numbers, R^n the linear space of all real n-tuples, and R^∞ the linear space of all infinite real sequences x = (x_1, x_2, ...). Let P_n: R^∞ → R^n be the projection operator with P_n(x) = (x_1, ..., x_n). Let p_∞ be a probability measure on the smallest sigma-ring of subsets of R^∞ which includes all of the cylinder sets P_n^{-1}(B_n), where B_n is an arbitrary Borel subset of R^n. Let p_n be the marginal distribution of p_∞ on R^n, so p_n(B_n) = p_∞(P_n^{-1}(B_n)) for each B_n. A measure on R^n is isotropic if it is invariant under all orthogonal transformations of R^n. All members of the set of all isotropic probability distributions on R^n are described. The result calls into question both stochastic inversion and Bayesian inference, as currently used in many geophysical inverse problems.
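The characterization behind this result has a practical reading: every isotropic probability distribution on R^n is a radial mixture of uniform distributions on spheres, so sampling reduces to an independent uniform direction and a radius. A small sketch under that characterization (the radial law here is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_isotropic(n, radial_sampler, size):
    """Draw from an isotropic law on R^n: x = r * u, with u uniform on the
    unit sphere S^{n-1} and r drawn from an arbitrary radial distribution."""
    u = rng.standard_normal((size, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform direction
    r = radial_sampler(size)                        # arbitrary radial law
    return r[:, None] * u

x = sample_isotropic(3, lambda m: rng.gamma(2.0, 1.0, m), 10_000)
print(np.cov(x.T).round(2))   # rotation invariance: covariance ~ multiple of identity
```

One structural consequence visible in the code is that an isotropic prior is determined entirely by its radial distribution, which is far more restrictive than the intuition behind many stochastic-inversion and Bayesian formulations suggests.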
Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization
NASA Astrophysics Data System (ADS)
Yamagishi, Masao; Yamada, Isao
2017-04-01
Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, no nonexpansive operator has yet been reported that yields an update free from inversions of linear operators when utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
NASA Astrophysics Data System (ADS)
Codd, A. L.; Gross, L.
2018-03-01
We present a new inversion method for Electrical Resistivity Tomography which, in contrast to established approaches, minimizes the cost function prior to finite element discretization for the unknown electric conductivity and electric potential. Minimization is performed with the Broyden-Fletcher-Goldfarb-Shanno method (BFGS) in an appropriate function space. BFGS is self-preconditioning and avoids construction of the dense Hessian, which is the major obstacle to solving large 3-D problems using parallel computers. In addition to the forward problem predicting the measurement from the injected current, the so-called adjoint problem also needs to be solved. For this problem a virtual current is injected through the measurement electrodes and an adjoint electric potential is obtained. The magnitude of the injected virtual current is equal to the misfit at the measurement electrodes. This new approach has the advantage that the solution process of the optimization problem remains independent of the meshes used for discretization and allows for mesh adaptation during inversion. Computation time is reduced by using superposition of pole loads for the forward and adjoint problems. A smoothed aggregation algebraic multigrid (AMG) preconditioned conjugate gradient is applied to construct the potentials for a given electric conductivity estimate and for constructing a first-level BFGS preconditioner. Through the additional reuse of AMG operators and coarse grid solvers, inversion time for large 3-D problems can be reduced further. We apply our new inversion method to synthetic survey data created from a resistivity profile representing the characteristics of subsurface fluid injection. We further test it on data obtained from a 2-D surface electrode survey on Heron Island, a small tropical island off the east coast of central Queensland, Australia.
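The forward/adjoint gradient structure described above can be sketched on a much smaller problem: one forward solve gives the potentials, one adjoint solve with the misfit as source gives the virtual potentials, and the two combine into the gradient of the cost function. Here a 1D conductivity toy with scipy's L-BFGS-B standing in for function-space BFGS; none of this reproduces the paper's finite element or AMG machinery.

```python
import numpy as np
from scipy.optimize import minimize

n = 30
G = np.diff(np.eye(n + 1), axis=0)               # (n, n+1) difference operator
q = np.zeros(n + 1); q[0], q[-1] = 1.0, -1.0     # inject current at the ends

def system(sig):
    """Conductance matrix A(sigma) = G^T diag(sigma) G (+ small regularization)."""
    return G.T @ (sig[:, None] * G) + 1e-6 * np.eye(n + 1)

sig_true = 1.0 + 0.5 * np.sin(np.linspace(0, np.pi, n))
d = np.linalg.solve(system(sig_true), q)          # "measured" potentials

def misfit_and_grad(log_sig):
    sig = np.exp(log_sig)                         # log-parameters keep sigma > 0
    A = system(sig)
    u = np.linalg.solve(A, q)                     # forward problem
    r = u - d                                     # misfit at the "electrodes"
    lam = np.linalg.solve(A, r)                   # adjoint problem (A is symmetric)
    grad_sig = -(G @ lam) * (G @ u)               # d(misfit)/d(sigma_i)
    return 0.5 * r @ r, grad_sig * sig            # chain rule for log-parameters

res = minimize(misfit_and_grad, np.zeros(n), jac=True, method="L-BFGS-B")
print(np.max(np.abs(np.exp(res.x) - sig_true)))   # recovery error
```

The point of the adjoint trick is visible in the code: the gradient with respect to all n conductivities costs just two linear solves, independent of n.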
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least-squares solution. An answer is provided to the problem that occurs when the data residuals are too large and when there are insufficient data to justify augmenting the model.
Parameter estimation using meta-heuristics in systems biology: a comprehensive review.
Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie
2012-01-01
This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
Topological geons with self-gravitating phantom scalar field
NASA Astrophysics Data System (ADS)
Kratovitch, P. V.; Potashov, I. M.; Tchemarina, Ju V.; Tsirulev, A. N.
2017-12-01
A topological geon is the quotient manifold M/Z_2, where M is a static spherically symmetric wormhole having reflection symmetry with respect to its throat. We distinguish such asymptotically flat solutions of the Einstein equations according to the form of the time-time metric function, using the quadrature formulas of the so-called inverse problem for self-gravitating spherically symmetric scalar fields. We distinguish three types of geon spacetimes and illustrate them by simple examples. We also study possible observational effects associated with bounded geodesic motion near topological geons.
The novel high-performance 3-D MT inverse solver
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey
2016-04-01
We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. Separation-of-concerns and single-responsibility concepts run through the implementation of the solver. As a forward modelling engine, a modern scalable solver extrEMe, based on the contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to efficiently calculate the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize an inverse domain, the so-called mask parameterization is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on different platforms ranging from modern laptops to the HPC Piz Daint (the 6th-ranked supercomputer in the world), demonstrate practically linear scalability of the code up to thousands of nodes.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method, since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise, and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
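In standard form (regularization operator equal to the identity), the Tikhonov filter factors and the L-curve can be written down directly from the SVD; the paper's GSVD treatment generalizes this to non-trivial regularization operators. A minimal sketch with an assumed smooth (hence ill-posed) toy kernel and a crude maximum-curvature proxy for the L-curve corner:

```python
import numpy as np

rng = np.random.default_rng(9)

# Ill-posed toy problem: smoothing kernel matrix and noisy data
x = np.linspace(0, 1, 80)
A = np.exp(-30.0 * (x[:, None] - x[None, :]) ** 2)
f_true = np.exp(-((x - 0.4) / 0.1) ** 2)
y = A @ f_true + 1e-3 * rng.standard_normal(80)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ y

def tikhonov(alpha):
    phi = s / (s**2 + alpha**2)              # Tikhonov filter factors
    f = Vt.T @ (phi * beta)
    return f, np.linalg.norm(A @ f - y), np.linalg.norm(f)

# L-curve: residual norm vs. solution norm on a log-log scale over alpha;
# the corner is located here by a simple maximum-curvature proxy.
alphas = np.logspace(-8, 0, 60)
vals = np.array([tikhonov(a)[1:] for a in alphas])
rho, eta = np.log(vals[:, 0]), np.log(vals[:, 1])
drho, deta = np.gradient(rho), np.gradient(eta)
curv = (drho * np.gradient(deta) - deta * np.gradient(drho)) / (drho**2 + deta**2) ** 1.5
print("L-curve corner at alpha ~", alphas[np.argmax(np.abs(curv))])
```

Too little regularization amplifies the noise, too much smears the image; the L-curve corner is the usual compromise, which is why the abstract emphasizes choosing the parameter carefully.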
Inverse dynamic substructuring using the direct hybrid assembly in the frequency domain
NASA Astrophysics Data System (ADS)
D'Ambrogio, Walter; Fregolent, Annalisa
2014-04-01
The paper deals with the identification of the dynamic behaviour of a structural subsystem, starting from the known dynamic behaviour of both the coupled system and the remaining part of the structural system (residual subsystem). This topic is also known as decoupling problem, subsystem subtraction or inverse dynamic substructuring. Whenever it is necessary to combine numerical models (e.g. FEM) and test models (e.g. FRFs), one speaks of experimental dynamic substructuring. Substructure decoupling techniques can be classified as inverse coupling or direct decoupling techniques. In inverse coupling, the equations describing the coupling problem are rearranged to isolate the unknown substructure instead of the coupled structure. On the contrary, direct decoupling consists in adding to the coupled system a fictitious subsystem that is the negative of the residual subsystem. Starting from a reduced version of the 3-field formulation (dynamic equilibrium using FRFs, compatibility and equilibrium of interface forces), a direct hybrid assembly is developed by requiring that both compatibility and equilibrium conditions are satisfied exactly, either at coupling DoFs only, or at additional internal DoFs of the residual subsystem. Equilibrium and compatibility DoFs might not be the same: this generates the so-called non-collocated approach. The technique is applied using experimental data from an assembled system made by a plate and a rigid mass.
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.
Multiple crack detection in 3D using a stable XFEM and global optimization
NASA Astrophysics Data System (ADS)
Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.
2018-02-01
A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of a resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be directly determined via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
ɛ-connectedness, finite approximations, shape theory and coarse graining in hyperspaces
NASA Astrophysics Data System (ADS)
Alonso-Morón, Manuel; Cuchillo-Ibanez, Eduardo; Luzón, Ana
2008-12-01
We use upper semifinite hyperspaces of compacta to describe ε-connectedness and to compute homology from finite approximations. We find a new connection between ε-connectedness and the so-called Shape Theory. We construct a geodesically complete R-tree, by means of ε-components at different resolutions, whose behavior at infinity captures the topological structure of the space of components of a given compact metric space. We also construct inverse sequences of finite spaces using internal finite approximations of compact metric spaces. These sequences can be converted into inverse sequences of polyhedra and simplicial maps by means of what we call the Alexandroff-McCord correspondence. This correspondence allows us to relate upper semifinite hyperspaces of finite approximation with the Vietoris-Rips complexes of such approximations at different resolutions. Two motivating examples are included in the introduction. We propose this procedure as a different mathematical foundation for problems on data analysis. This process is intrinsically related to the methodology of shape theory. This paper reinforces Robins's idea of using methods from shape theory to compute homology from finite approximations.
NASA Astrophysics Data System (ADS)
Bobodzhanov, A. A.; Safonov, V. F.
2016-04-01
We consider an algorithm for constructing asymptotic solutions regularized in the sense of Lomov (see [1], [2]). We show that such problems can be reduced to integro-differential equations with inverse time. But in contrast to known papers devoted to this topic (see, for example, [3]), in this paper we study a fundamentally new case, which is characterized by the absence, in the differential part, of a linear operator that isolates, in the asymptotics of the solution, constituents described by boundary functions, and by the fact that the integral operator has a kernel with diagonal degeneration of high order. Furthermore, the spectrum of the regularization operator A(t) (see below) may contain purely imaginary eigenvalues, which causes difficulties in the application of the methods of construction of asymptotic solutions proposed in the monograph [3]. Based on an analysis of the principal term of the asymptotics, we isolate a class of inhomogeneities and initial data for which the exact solution of the original problem tends to the limit solution (as ε → +0) on the entire time interval under consideration, including a boundary-layer zone (that is, we solve the so-called initialization problem). The paper is of a theoretical nature and is designed to lead to a greater understanding of the problems in the theory of singular perturbations. There may be applications in various applied areas where models described by integro-differential equations are used (for example, in elasticity theory, the theory of electrical circuits, and so on).
Higgs mass corrections in the SUSY B - L model with inverse seesaw
NASA Astrophysics Data System (ADS)
Elsayed, A.; Khalil, S.; Moretti, S.
2012-08-01
In the context of the Supersymmetric (SUSY) B - L (Baryon minus Lepton number) model with inverse seesaw mechanism, we calculate the one-loop radiative corrections due to right-handed (s)neutrinos to the mass of the lightest Higgs boson when the latter is Standard Model (SM)-like. We show that such effects can be as large as O(100) GeV, thereby giving an absolute upper limit on such a mass around 180 GeV. The importance of this result from a phenomenological point of view is twofold. On the one hand, this enhancement greatly reconciles theory and experiment, by alleviating the so-called 'little hierarchy problem' of the minimal SUSY realization, whereby the current experimental limit on the SM-like Higgs mass is very near its absolute upper limit predicted theoretically, of 130 GeV. On the other hand, a SM-like Higgs boson with mass below 180 GeV is still well within the reach of the Large Hadron Collider (LHC), so that the SUSY realization discussed here is just as testable as the minimal version.
Some inversion formulas for the cone transform
NASA Astrophysics Data System (ADS)
Terzioglu, Fatma
2015-11-01
Several novel imaging applications have recently led to a variety of Radon-type transforms, where integration is made over a family of conical surfaces. We call them cone transforms (in 2D they are also called V-line or broken ray transforms). Most prominently, they are present in so-called Compton camera imaging, which arises in medical diagnostics, astronomy, and lately in homeland security applications. Several specific incarnations of the cone transform have been considered separately. In this paper, we address the most general (and overdetermined) cone transform, obtain integral relations between cone and Radon transforms in R^n, and derive a variety of inversion formulas. In many applications (e.g., in homeland security), the signal-to-noise ratio is very low. So, if overdetermined data is collected (as in the case of Compton imaging), attempts to reduce the dimensionality might lead to essential elimination of the signal. Thus, our main concentration is on obtaining formulas involving overdetermined data.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
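The MLSL skeleton is short: sample candidate points in batches, start a local search only from points that have no better neighbour within a critical distance, and collect the distinct local minima found. In the sketch below, scipy's Nelder-Mead is a derivative-free stand-in for the MADS local solver used in the paper, and the multimodal toy misfit is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

def objective(p):
    """Toy multimodal misfit with several symmetric global solutions."""
    return (p[0] ** 2 + p[1] ** 2 - 1.0) ** 2 + (p[0] * p[1] - 0.25) ** 2

def mlsl(n_rounds=20, batch=25, r_crit=0.3):
    points, values, minima = [], [], []
    for _ in range(n_rounds):
        for p in rng.uniform(-2, 2, size=(batch, 2)):
            points.append(p); values.append(objective(p))
        P, V = np.array(points), np.array(values)
        # MLSL rule: local search only from points with no better neighbour
        # within the critical distance (fixed here; the full method shrinks
        # it as the sample grows).
        for i in np.argsort(V)[: len(V) // 10]:
            dist = np.linalg.norm(P - P[i], axis=1)
            if np.any((dist < r_crit) & (V < V[i])):
                continue
            # Local phase: derivative-free search (MADS in the paper)
            res = minimize(objective, P[i], method="Nelder-Mead")
            if not any(np.linalg.norm(res.x - m) < 1e-2 for m in minima):
                minima.append(res.x)
    return minima

print(np.round(mlsl(), 3))   # expect several distinct solutions
```

Returning all distinct minima, rather than the single best one, is what makes the approach suitable for inverse transport problems whose data genuinely admit multiple explanations.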
NASA Astrophysics Data System (ADS)
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling, which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography, where the identification of the mechanical properties of biological materials can inform non-invasive medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications
NASA Astrophysics Data System (ADS)
He, K.; Zhu, W. D.
2011-07-01
A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistic function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
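The logistic-transformation step translates directly into code: an unconstrained parameter θ is mapped to a bounded damage factor δ = 1/(1+e^(-θ)) ∈ (0, 1), after which an off-the-shelf Levenberg-Marquardt solver can be applied. The 3-DOF spring-mass chain and its numbers below are hypothetical, not one of the paper's test structures; note also that three frequencies may be consistent with more than one damage pattern.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(11)

def frequencies(delta):
    """Natural frequencies of a fixed-free 3-DOF chain whose element
    stiffnesses are reduced by damage factors delta in [0, 1)."""
    k = 1000.0 * (1.0 - delta)                       # damaged stiffnesses
    K = np.diag(k + np.append(k[1:], 0.0)) - np.diag(k[1:], 1) - np.diag(k[1:], -1)
    return np.sqrt(np.linalg.eigvalsh(K))            # unit masses assumed

delta_true = np.array([0.0, 0.3, 0.0])               # damage in element 2
f_meas = frequencies(delta_true) * (1 + 1e-4 * rng.standard_normal(3))

# Logistic transformation: theta in R^n -> delta in (0, 1), turning the
# bound-constrained problem into an unconstrained one suited to LM.
sigmoid = lambda th: 1.0 / (1.0 + np.exp(-th))

res = least_squares(lambda th: frequencies(sigmoid(th)) - f_meas,
                    x0=np.full(3, -3.0), method="lm")   # Levenberg-Marquardt
print(np.round(sigmoid(res.x), 3))                   # estimated damage factors
```

Trust-region safeguards of the kind mentioned above matter precisely because such frequency-based inverse problems are under-determined and noisy in practice.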
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem, when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers a better robustness with respect to Gaussian noise of unknown spatial coherence and modeling errors. As a result we reduced the penalizing effects of both the background cerebral activity that can be seen as a Gaussian and spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms.
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
NASA Astrophysics Data System (ADS)
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various aspects. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: `If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight into the nature of inverse problems and the appropriate mode of thought, chapter 1 offers historical vignettes, most of which have played an essential role in the development of natural science. These vignettes cover the first successful application of `non-destructive testing' by Archimedes (page 4) via Newton's laws of motion up to literary tomography, and readers will be able to enjoy a wide overview of inverse problems. Therefore, as the author asks, the reader should not skip this chapter. This may not be hard to do, since the headings of the sections are quite intriguing (`Archimedes' Bath', `Another World', `Got the Time?', `Head Games', etc). The author embarks on the technical approach to inverse problems in chapter 2. He has elegantly designed each section with a guide specifying course level, objective, mathematical and scientific background and appropriate technology (e.g. types of calculators required). The guides are designed such that teachers may be able to construct effective and attractive courses by themselves. The book is not intended to offer one rigidly determined course, but should be used flexibly and independently according to the situation. Moreover, every section closes with activities which can be chosen according to the students' interests and levels of ability. Some of these exercises do not have ready solutions, but require long-term study, so readers are not required to solve all of them.
After chapter 5, which contains discrete inverse problems such as the algebraic reconstruction technique and the Backus-Gilbert method, there are answers and commentaries to the activities. Finally, scripts in MATLAB are attached, although they can also be downloaded from the author's web page (http://math.uc.edu/~groetsch/). This book is aimed at students but it will be very valuable to researchers wishing to retain a wide overview of inverse problems in the midst of busy research activities. A Japanese version was published in 2002.
NLSE: Parameter-Based Inversion Algorithm
NASA Astrophysics Data System (ADS)
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.
Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
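As a concrete, hedged illustration of the Gauss-Newton iteration that such an algorithm is built on (the exponential test model, data, and all names below are invented for illustration and are not taken from the book):

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, tol=1e-10, max_iter=50):
    """Minimize ||residual(p)||^2 by Gauss-Newton iteration.

    residual: maps parameters p -> residual vector r(p)
    jacobian: maps p -> Jacobian matrix J with J[i, j] = dr_i / dp_j
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        # Solve the Gauss-Newton step min ||J dp + r|| (normal equations via SVD)
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + dp
        if np.linalg.norm(dp) < tol * (1.0 + np.linalg.norm(p)):
            break
    return p

# Toy example: fit y = a * exp(b * t) to noisy data
t = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(0)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)

res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(res, jac, p0=[1.0, -1.0]))  # approximately [2.0, -1.5]
```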
Seismic waveform inversion using neural networks
NASA Astrophysics Data System (ADS)
De Wit, R. W.; Trampert, J.
2012-12-01
Full waveform tomography aims to extract all available information on Earth structure and seismic sources from seismograms. The strongly non-linear nature of this inverse problem is often addressed through simplifying assumptions for the physical theory or data selection, thus potentially neglecting valuable information. Furthermore, the assessment of the quality of the inferred model is often lacking. This calls for the development of methods that fully appreciate the non-linear nature of the inverse problem, whilst providing a quantification of the uncertainties in the final model. We propose to invert seismic waveforms in a fully non-linear way by using artificial neural networks. Neural networks can be viewed as powerful and flexible non-linear filters. They are very common in speech, handwriting and pattern recognition. Mixture Density Networks (MDN) allow us to obtain marginal posterior probability density functions (pdfs) of all model parameters, conditioned on the data. An MDN can approximate an arbitrary conditional pdf as a linear combination of Gaussian kernels. Seismograms serve as input, Earth structure parameters are the so-called targets and network training aims to learn the relationship between input and targets. The network is trained on a large synthetic data set, which we construct by drawing many random Earth models from a prior model pdf and solving the forward problem for each of these models, thus generating synthetic seismograms. As a first step, we aim to construct a 1D Earth model. Training sets are constructed using the Mineos package, which computes synthetic seismograms in a spherically symmetric non-rotating Earth by summing normal modes. We train a network on the body waveforms present in these seismograms. Once the network has been trained, it can be presented with new unseen input data, in our case the body waves in real seismograms. We thus obtain the posterior pdf which represents our final state of knowledge given the information in the training set and the real data.
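A minimal sketch of the Mixture Density Network idea in PyTorch may help make this concrete; the single hidden layer, the 1-D target, and the random stand-in data are illustrative assumptions, not the authors' architecture or training set:

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Minimal Mixture Density Network: predicts a 1-D Gaussian mixture
    p(target | input) with K components."""
    def __init__(self, n_in, n_hidden=64, n_components=5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.logit_pi = nn.Linear(n_hidden, n_components)   # mixture weights
        self.mu = nn.Linear(n_hidden, n_components)         # component means
        self.log_sigma = nn.Linear(n_hidden, n_components)  # component widths

    def forward(self, x):
        h = self.body(x)
        return self.logit_pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(logit_pi, mu, log_sigma, y):
    """Negative log-likelihood of targets y under the predicted mixture."""
    log_pi = torch.log_softmax(logit_pi, dim=-1)
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(y.unsqueeze(-1))  # per-component log density
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Training-loop sketch: x stands in for seismogram features, y for an
# Earth-model parameter drawn from the prior.
model = MDN(n_in=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 32), torch.randn(256)  # stand-ins for synthetic data
for _ in range(100):
    opt.zero_grad()
    loss = mdn_nll(*model(x), y)
    loss.backward()
    opt.step()
```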
Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth
NASA Astrophysics Data System (ADS)
Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.
2017-12-01
We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.
Termination Proofs for String Rewriting Systems via Inverse Match-Bounds
NASA Technical Reports Server (NTRS)
Butler, Ricky (Technical Monitor); Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes
2004-01-01
Annotating a letter by a number, one can record information about its history during a reduction. A string rewriting system is called match-bounded if there is a global upper bound to these numbers. In earlier papers we established match-boundedness as a strong sufficient criterion for both termination and preservation of regular languages. We now show that string rewriting systems whose inverse (left- and right-hand sides exchanged) is match-bounded also have exceptional properties, but slightly different ones. Inverse match-bounded systems effectively preserve context-free languages; their sets of normalized strings and their sets of immortal strings are effectively regular. These sets of strings can be used to decide the normalization, termination and uniform termination problems for inverse match-bounded systems. We also show that the termination problem is decidable in linear time, and that a certain strong reachability problem is decidable, thus solving two open problems of McNaughton's.
ERIC Educational Resources Information Center
Sangrigoli, Sandy; de Schonen, Scania
2004-01-01
In adults, three phenomena are taken to demonstrate an experience effect on face recognition: an inversion effect, a non-native face effect (so-called "other-race" effect) and their interaction. It is crucial for our understanding of the developmental perception mechanisms of object processing to discover when these effects are present in…
NASA Astrophysics Data System (ADS)
Justino, Júlia
2017-06-01
Matrices with coefficients having uncertainties of type o(·) or O(·), called flexible matrices, are studied from the point of view of nonstandard analysis. The uncertainties of the aforementioned kind are given in the form of so-called neutrices, for instance the set of all infinitesimals. Since flexible matrices have uncertainties in their coefficients, it is not possible to define the identity matrix in a unique way, and so the notion of a spectral identity matrix arises. Not all nonsingular flexible matrices can be turned into a spectral identity matrix using the Gauss-Jordan elimination method, implying that not all nonsingular flexible matrices have an inverse matrix. Under certain conditions on the size of the uncertainties appearing in a nonsingular flexible matrix, a general theorem concerning the boundaries of its minors is presented which guarantees the existence of the inverse matrix of a nonsingular flexible matrix.
Layer Stripping Solutions of Inverse Seismic Problems.
1985-03-21
problems--more so than has generally been recognized. The subject of this thesis is the theoretical development of the layer-stripping methodology, and ... medium varies sharply at each interface, which would be expected to cause difficulties for the algorithm, since it was designed for a smoothly varying ... methodology was applied in a novel way. The inverse problem considered in this chapter was that of reconstructing a layered medium from measurement of its ...
Space Objects Maneuvering Detection and Prediction via Inverse Reinforcement Learning
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
This paper determines the behavior of Space Objects (SOs) using inverse Reinforcement Learning (RL) to estimate the reward function that each SO is using for control. The approach discussed in this work can be used to analyze maneuvering of SOs from observational data. The inverse RL problem is solved using the Feature Matching approach. This approach determines the optimal reward function that a SO is using while maneuvering by assuming that the observed trajectories are optimal with respect to the SO's own reward function. This paper uses estimated orbital elements data to determine the behavior of SOs in a data-driven fashion.
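A hedged toy sketch of feature-matching inverse RL on a small chain MDP (the dynamics, features, and "expert" below are invented stand-ins for the orbital-dynamics setting; the update rule is a simple gradient-style variant of feature matching):

```python
import numpy as np

# Toy chain MDP standing in for SO maneuvering dynamics (hypothetical setup):
# states 0..N-1, actions move left (0) / right (1), deterministic transitions.
N, gamma, horizon = 8, 0.9, 30
phi = np.eye(N)  # one-hot state features; reward model is R(s) = w . phi(s)

def step(s, a):
    return min(N - 1, s + 1) if a == 1 else max(0, s - 1)

def optimal_policy(w):
    """Value iteration for reward phi @ w; returns the greedy action per state."""
    R, V = phi @ w, np.zeros(N)
    for _ in range(200):
        Q = np.array([[R[s] + gamma * V[step(s, a)] for a in (0, 1)]
                      for s in range(N)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(policy, s0=0):
    """Discounted feature counts of a rollout from s0."""
    mu, s = np.zeros(N), s0
    for t in range(horizon):
        mu += gamma ** t * phi[s]
        s = step(s, policy[s])
    return mu

# "Observed" trajectory: the expert always moves right (e.g., station-keeping).
mu_expert = feature_expectations(np.ones(N, dtype=int))

# Feature matching: adjust w until the induced optimal policy reproduces the
# expert's feature expectations, i.e., the observed trajectory looks optimal.
w = np.zeros(N)
for _ in range(50):
    mu_w = feature_expectations(optimal_policy(w))
    w += 0.1 * (mu_expert - mu_w)  # gradient-style update on the feature gap

print("recovered reward weights:", np.round(w, 2))
```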
An approach to quantum-computational hydrologic inverse analysis
O'Malley, Daniel
2018-05-02
Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
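To see how such a problem can reach an annealer at all: a linearized least-squares inversion over binary unknowns can be rewritten as a QUBO (quadratic unconstrained binary optimization), the native input of a quantum annealer. A hedged numpy sketch, with a toy forward matrix G and brute-force enumeration standing in for the D-Wave hardware:

```python
import numpy as np
from itertools import product

# Hypothetical toy setup: recover a binary parameter field x (low/high
# permeability per cell) from linearized observations h_obs = G x. Using
# x_i^2 = x_i for binary x, the misfit ||G x - h_obs||^2 expands into the
# QUBO energy x^T Q x + const with Q = G^T G - 2 diag(G^T h_obs).
n = 6
rng = np.random.default_rng(1)
G = rng.standard_normal((10, n))
x_true = rng.integers(0, 2, n)
h_obs = G @ x_true

Q = G.T @ G - 2.0 * np.diag(G.T @ h_obs)

# Stand-in for the annealer: exhaustive search over the 2^n binary states.
best = min(product((0, 1), repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("true:", x_true, " recovered:", np.array(best))
```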
NASA Astrophysics Data System (ADS)
Kuznetsov, N.; Maz'ya, V.; Vainberg, B.
2002-08-01
This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'
Photoneutron reactions in astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varlamov, V. V., E-mail: Varlamov@depni.sinp.msu.ru; Ishkhanov, B. S.; Orlin, V. N.
Among key problems in nuclear astrophysics, that of obtaining deeper insight into the mechanism of synthesis of chemical elements is of paramount importance. The majority of heavy elements existing in nature are produced in stars via radiative neutron capture in so-called s- and r-processes, which are, respectively, slow and fast in relation to competing β⁻-decay processes. At the same time, we know 35 neutron-deficient so-called bypassed p-nuclei that lie between ⁷⁴Se and ¹⁹⁶Hg and which cannot originate from the aforementioned s- and r-processes. Their production is possible in (γ, n), (γ, p), or (γ, α) photonuclear reactions. In view of this, data on photoneutron reactions play an important role in predicting and describing processes leading to the production of p-nuclei. Interest in determining cross sections for photoneutron reactions in the threshold energy region, which is of particular importance for astrophysics, has grown substantially in recent years. The use of modern sources of quasimonoenergetic photons obtained in processes of inverse Compton laser-radiation scattering on relativistic electrons makes it possible to reveal rather interesting special features of respective cross sections, manifestations of pygmy E1 and M1 resonances, or the production of nuclei in isomeric states, on one hand, and to revisit the problem of systematic discrepancies between data on reaction cross sections from experiments of different types, on the other hand. Data obtained on the basis of our new experimental-theoretical approach to evaluating cross sections for partial photoneutron reactions are invoked in considering these problems.
Bowhead whale localization using time-difference-of-arrival data from asynchronous recorders.
Warner, Graham A; Dosso, Stan E; Hannay, David E
2017-03-01
This paper estimates bowhead whale locations and uncertainties using nonlinear Bayesian inversion of the time-difference-of-arrival (TDOA) of low-frequency whale calls recorded on omni-directional asynchronous recorders in the shallow waters of the northeastern Chukchi Sea, Alaska. A Y-shaped cluster of seven autonomous ocean-bottom hydrophones, separated by 0.5-9.2 km, was deployed for several months, over which time their clocks drifted out of synchronization. Hundreds of recorded whale calls are manually associated between recorders. The TDOAs between hydrophone pairs are calculated from filtered waveform cross correlations and depend on the whale locations, hydrophone locations, relative recorder clock offsets, and effective waveguide sound speed. A nonlinear Bayesian inversion estimates all of these parameters and their uncertainties as well as data error statistics. The problem is highly nonlinear and a linearized inversion did not produce physically realistic results. Whale location uncertainties from nonlinear inversion can be low enough to allow accurate tracking of migrating whales that vocalize repeatedly over several minutes. Estimates of clock drift rates are obtained from inversions of TDOA data over two weeks and agree with corresponding estimates obtained from long-time averaged ambient noise cross correlations. The inversion is suitable for application to large data sets of manually or automatically detected whale calls.
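A hedged 2-D sketch of the joint-estimation idea (not the paper's full nonlinear Bayesian inversion, which also quantifies uncertainties): call positions, effective sound speed, and relative clock offsets are fit simultaneously to TDOA data from several calls. The geometry, values, and optimizer below are illustrative assumptions, and convergence depends on the starting guess:

```python
import numpy as np
from scipy.optimize import least_squares

rx = np.array([[0., 0.], [3., 0.], [0., 4.], [-3., 1.], [1., -4.]])  # km
c_true = 1.46                                         # effective speed, km/s
srcs_true = np.array([[2., 3.], [-1., 2.], [1., -2.]])  # three call locations
offs_true = np.array([0.0, 0.05, -0.03, 0.02, 0.0])     # s; recorder 0 = reference

def tdoas(src, c, offs):
    t = np.linalg.norm(rx - src, axis=1) / c + offs
    return t[1:] - t[0]  # TDOA of each recorder relative to recorder 0

data = np.concatenate([tdoas(s, c_true, offs_true) for s in srcs_true])

def residual(p):
    srcs, c = p[:6].reshape(3, 2), p[6]
    offs = np.concatenate([[0.0], p[7:]])  # reference clock held fixed
    return np.concatenate([tdoas(s, c, offs) for s in srcs]) - data

# 12 TDOA data vs. 11 unknowns (3 positions, speed, 4 relative offsets)
p0 = np.r_[np.zeros(6), 1.5, np.zeros(4)]
fit = least_squares(residual, p0)
print("call locations (km):\n", fit.x[:6].reshape(3, 2))
```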
Digital signal processing based on inverse scattering transform.
Turitsyna, Elena G; Turitsyn, Sergei K
2013-10-15
Through numerical modeling, we illustrate the possibility of a new approach to digital signal processing in coherent optical communications based on the application of the so-called inverse scattering transform. Considering without loss of generality a fiber link with normal dispersion and quadrature phase shift keying signal modulation, we demonstrate how an initial information pattern can be recovered (without direct backward propagation) through the calculation of nonlinear spectral data of the received optical signal.
Resolution enhancement of robust Bayesian pre-stack inversion in the frequency domain
NASA Astrophysics Data System (ADS)
Yin, Xingyao; Li, Kun; Zong, Zhaoyun
2016-10-01
AVO/AVA (amplitude variation with offset or angle) inversion is one of the most practical and useful approaches to estimating model parameters. So far, publications on AVO inversion in the Fourier domain have been quite limited, owing to its poor stability and sensitivity to noise compared with time-domain inversion. To improve the resolution and stability of AVO inversion in the Fourier domain, a novel robust Bayesian pre-stack AVO inversion based on a mixed-domain formulation of stationary convolution is proposed, which resolves the instability and achieves superior resolution. The Fourier operator is integrated into the objective equation, which avoids the inverse Fourier transform in our inversion process. Furthermore, background constraints on the model parameters are taken into consideration to improve the stability and reliability of the inversion and to compensate for the low-frequency components of the seismic signals. Besides, the different frequency components of the seismic signals decouple automatically, which helps us solve the inverse problem by means of multi-component successive iterations and improves the convergence precision. Thus, superior resolution compared with conventional time-domain pre-stack inversion can be achieved easily. Synthetic tests illustrate that the proposed method achieves high-resolution results in close agreement with the theoretical model and verify its robustness to noise. Finally, application to a field data case demonstrates that the proposed method obtains stable inversion results for the elastic parameters from pre-stack seismic data, in conformity with the real logging data.
Time-reversal and Bayesian inversion
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2017-04-01
Probabilistic inversion techniques are superior to the classical optimization-based approach in all but one aspect: they require quite exhaustive computations, which prohibits their use in very large inverse problems such as global seismic tomography or waveform inversion, to name just two. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make such large inverse tasks manageable with the probabilistic approach. One promising possibility for achieving this goal relies on exploiting an internal symmetry of the seismological modelling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into a probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization S⁻¹ = ZZ* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
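A minimal numpy sketch of the iterative-refinement idea at the core of the algorithm (without the recursive decomposition and sparsity exploitation that make the full method linear-scaling); the test matrix and starting guess are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
S = A @ A.T + 50 * np.eye(50)          # Hermitian positive definite test matrix

# Refine Z so that Z^* S Z -> I, which gives S^{-1} = Z Z^*.
Z = np.eye(50) / np.sqrt(np.linalg.norm(S, 2))  # crude starting guess
for _ in range(30):
    delta = np.eye(50) - Z.T @ S @ Z   # deviation from the identity
    Z = Z @ (np.eye(50) + 0.5 * delta) # first-order refinement step
    if np.linalg.norm(delta) < 1e-12:  # converges quadratically if ||delta|| < 1
        break

print(np.linalg.norm(Z @ Z.T @ S - np.eye(50)))  # small: Z Z^* approximates S^{-1}
```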
incaRNAfbinv: a web server for the fragment-based design of RNA sequences
Drory Retwitzer, Matan; Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme; Barash, Danny
2016-01-01
In recent years, new methods for computational RNA design have been developed and applied to various problems in synthetic biology and nanotechnology. Lately, there is considerable interest in incorporating essential biological information when solving the inverse RNA folding problem. Correspondingly, RNAfbinv aims at including biologically meaningful constraints and is the only program to date that performs a fragment-based design of RNA sequences. In doing so it allows the design of sequences that do not necessarily exactly fold into the target, as long as the overall coarse-grained tree graph shape is preserved. Augmented by the weighted sampling algorithm of incaRNAtion, our web server called incaRNAfbinv implements the method devised in RNAfbinv and offers an interactive environment for the inverse folding of RNA using a fragment-based design approach. It takes as input: a target RNA secondary structure; optional sequence and motif constraints; optional target minimum free energy, neutrality and GC content. In addition to the design of synthetic regulatory sequences, it can be used as a pre-processing step for the detection of novel naturally occurring RNAs. The two complementary methodologies RNAfbinv and incaRNAtion are merged together and fully implemented in our web server incaRNAfbinv, available at http://www.cs.bgu.ac.il/incaRNAfbinv. PMID:27185893
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From classic ray-based traveltime tomography to state-of-the-art full waveform inversion, the nonlinearity of seismic inverse problems makes a good starting model essential for preventing the objective function from converging toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of the seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows is exploited to build the starting models of later time windows; (2) seismic data of later time windows can provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multiple grids in every time window. We expect that high-accuracy starting models will be generated for the second and later time windows. We will test this time-marching multi-grid method using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results from the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
Query-based learning for aerospace applications.
Saad, E W; Choi, J J; Vian, J L; Wunsch, D C II
2003-01-01
Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulties of neural network training. Creating the training data set for such applications becomes costly, if not impossible. In order to overcome this challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving the overall learning/generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete network inversion and continuous network inversion) followed by an oracle query. This paper investigates the use of both inversion techniques for QBL, and introduces an original heuristic to select the inversion target values for the continuous network inversion method. Efficiency and generalization were further enhanced by employing node-decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means to reduce the input search dimensionality. The benefits of the overall QBL approach are experimentally demonstrated in two aerospace applications: a classification problem with a large input space and a control distribution problem.
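A hedged sketch of continuous network inversion, the key ingredient above: freeze a trained network and run gradient descent on the input until the network output reaches a chosen performance-critical target. The tiny network, target, and optimizer settings are invented for illustration:

```python
import torch
import torch.nn as nn

# Stand-in for a trained classifier; in QBL the inverted inputs would next be
# labeled by the oracle and added to the training set.
net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 2))
for p in net.parameters():
    p.requires_grad_(False)            # invert the input, not the weights

target = torch.tensor([0.9, 0.1])      # desired (performance-critical) output
x = torch.zeros(4, requires_grad=True) # starting guess for the input
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    loss = ((torch.sigmoid(net(x)) - target) ** 2).sum()
    loss.backward()                    # gradient flows to the input only
    opt.step()

print("inverted input:", x.detach())
print("network output:", torch.sigmoid(net(x)).detach())
```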
NASA Astrophysics Data System (ADS)
Nakamura, Gen; Wang, Haibing
2017-05-01
Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. Finally, using a finite sequence of transient inputs over a time interval, we propose a new sampling method over that interval based on a single measurement, which is more likely to be practical.
Nonlinear compression of temporal solitons in an optical waveguide via inverse engineering
NASA Astrophysics Data System (ADS)
Paul, Koushik; Sarma, Amarendra K.
2018-03-01
We propose a novel method based on so-called shortcut-to-adiabatic-passage techniques to achieve fast compression of temporal solitons in a nonlinear waveguide. We demonstrate that soliton compression could be achieved, in principle, over an arbitrarily small distance by inverse-engineering the pulse width and the nonlinearity of the medium. The proposed scheme could be exploited for various short-distance communication protocols, and perhaps even in nonlinear guided-wave-optics devices and the generation of ultrashort soliton pulses.
Isotropic probability measures in infinite-dimensional spaces
NASA Technical Reports Server (NTRS)
Backus, George
1987-01-01
Let R be the real numbers, R^n the linear space of all real n-tuples, and R^∞ the linear space of all infinite real sequences x = (x_1, x_2, ...). Let P_n : R^∞ → R^n be the projection operator with P_n(x) = (x_1, ..., x_n). Let p_∞ be a probability measure on the smallest σ-ring of subsets of R^∞ which includes all of the cylinder sets P_n^{-1}(B_n), where B_n is an arbitrary Borel subset of R^n. Let p_n be the marginal distribution of p_∞ on R^n, so p_n(B_n) = p_∞(P_n^{-1}(B_n)) for each B_n. A measure on R^n is isotropic if it is invariant under all orthogonal transformations of R^n. All members of the set of all isotropic probability distributions on R^n are described. The result calls into question both stochastic inversion and Bayesian inference, as currently used in many geophysical inverse problems.
NASA Astrophysics Data System (ADS)
Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro
2017-04-01
The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals in modern lithospheric research. These data are essential for building models of Earth's evolution and for reproducing many geophysical observables (e.g. elevation, gravity anomalies, travel-time data, heat flow, etc.), as well as for understanding the relationships between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, the solution of many direct models is required. The main objective of this work is to contribute to existing inversion techniques by improving the estimation of elevation (topography) through the inclusion of a dynamic component arising from sub-lithospheric mantle flow. To do so, we implement an efficient Reduced Order Method (ROM) built upon classic finite elements. ROM significantly reduces the computational cost of solving a family of problems, for example all the direct models that are required in the solution of the inverse problem. The strategy of the method consists in creating a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. To check the reduced-basis approach, we implemented the method in a 3D domain reproducing a portion of the Earth down to 400 km depth. Within the domain the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) are created by varying viscosities and densities in a similar way as would happen in an inversion problem. The reduced-basis method is shown to be an extremely efficient solver for the Stokes equation in this context.
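A hedged numpy sketch of the reduced-basis mechanics on a generic parametrized linear system A(mu)u = f, standing in for the discretized Stokes problem (the matrices, parameter range, and basis size are invented for illustration):

```python
import numpy as np

n = 400
K0 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1))        # stiffness-like SPD matrix
K1 = np.diag(np.linspace(0.0, 1.0, n))      # viscosity-like parameter block
f = np.ones(n)
A = lambda mu: K0 + mu * K1                 # parameter enters affinely

# Offline stage: full solves at a handful of training parameters, compressed
# into an orthonormal basis of the snapshot space.
snapshots = np.column_stack([np.linalg.solve(A(mu), f)
                             for mu in np.linspace(0.1, 10.0, 8)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :5]                                # reduced basis of dimension 5

# Online stage: for a new parameter, solve only a 5x5 projected system.
mu_new = 3.7
u_rb = V @ np.linalg.solve(V.T @ A(mu_new) @ V, V.T @ f)
u_full = np.linalg.solve(A(mu_new), f)
print("relative error:", np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full))
```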
Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi
2009-01-01
Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B(0), and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
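A hedged numpy sketch of the COSMOS principle in its simplest, unweighted form (the grid, orientations, and phantom are invented; the actual method uses weighted least squares accounting for field noise and signal voids): each orientation contributes a k-space dipole kernel, and sampling several orientations keeps the per-voxel system well conditioned because the kernels' zero cones do not coincide.

```python
import numpy as np

n = 64
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0  # avoid division by zero at the k-space origin

def dipole(b):
    """Standard dipole kernel D(k) = 1/3 - (k . b)^2 / |k|^2 for B0 direction b."""
    bx, by, bz = b / np.linalg.norm(b)
    return 1.0 / 3.0 - (kx * bx + ky * by + kz * bz) ** 2 / k2

b_dirs = [np.array(v) for v in ([0, 0, 1.0], [0, 0.5, 1.0], [0.5, 0, 1.0])]

# Synthetic susceptibility cube and the simulated field shift per orientation
chi = np.zeros((n, n, n)); chi[24:40, 24:40, 24:40] = 1.0
fields = [np.real(np.fft.ifftn(dipole(b) * np.fft.fftn(chi))) for b in b_dirs]

# Per-k least squares across orientations: chi(k) = sum_i D_i phi_i / sum_i D_i^2
num = sum(dipole(b) * np.fft.fftn(ph) for b, ph in zip(b_dirs, fields))
den = sum(dipole(b) ** 2 for b in b_dirs)
chi_rec = np.real(np.fft.ifftn(num / den))
print("max reconstruction error:", np.abs(chi_rec - chi).max())
```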
Fast model updating coupling Bayesian inference and PGD model reduction
NASA Astrophysics Data System (ADS)
Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic
2018-04-01
The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainty (measurement and model errors, stochastic parameters). In order to do so at a reasonable CPU cost, the idea is to replace the direct model called during Monte Carlo sampling by a PGD reduced model, and in some cases to compute the probability density functions directly from the resulting analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued-assembly example.
3D+T motion analysis with nanosensors
NASA Astrophysics Data System (ADS)
Leduc, Jean-Pierre
2017-09-01
This paper addresses the problem of motion analysis performed on a signal sampled on an irregular grid spread over 3-dimensional space and time (3D+T). Nanosensors can be randomly scattered in the field to form a "sensor network". Once released, each nanosensor transmits, at its own fixed pace, information corresponding to some physical variable measured in the field. Each nanosensor is assumed to have a limited lifetime governed by a Poisson-exponential distribution after release. The motion analysis is supported by a model based on a Lie group called the Galilei group, which refers to the actual mechanics taking place on some given geometry. The Galilei group has representations in the Hilbert space of the captured signals. Those representations have the properties of being unitary, irreducible and square-integrable, and they enable the existence of admissible continuous wavelets fit for motion analysis. The motion analysis can be considered as a so-called "inverse problem" where the physical model is used to estimate the kinematical parameters of interest. The estimation of the kinematical parameters is performed by a gradient algorithm, which extends to trajectory determination. Trajectory computation is related to a Lagrangian-Hamiltonian formulation and fits into a neuro-dynamic programming approach that can be implemented in the form of a Q-learning algorithm. Applications relevant to this problem can be found in medical imaging, Earth science, the military, and neurophysiology.
NASA Technical Reports Server (NTRS)
Green, M. J.; Nachtsheim, P. R.
1972-01-01
A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
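A hedged toy sketch of the shooting idea with secant (inverse linear) interpolation in place of Newton's method, on the classical third-order Blasius problem rather than the paper's fifth-order example; no perturbation (variational) equations are needed, only repeated initial-value solves:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Blasius boundary-layer equation: f''' + 0.5 f f'' = 0,
# with f(0) = f'(0) = 0 and the asymptotic condition f'(inf) = 1.
def fprime_at_infinity(s, eta_max=10.0):
    """Integrate the IVP with guessed initial slope s = f''(0)."""
    sol = solve_ivp(lambda t, y: [y[1], y[2], -0.5 * y[0] * y[2]],
                    (0.0, eta_max), [0.0, 0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]  # f'(eta_max), which should approach 1

# Secant iteration, i.e., linear inverse interpolation for the root of
# g(s) = f'(inf; s) - 1, adjusting the unknown initial condition.
s0, s1 = 0.1, 1.0
g0, g1 = fprime_at_infinity(s0) - 1, fprime_at_infinity(s1) - 1
for _ in range(20):
    s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)
    g0, g1 = g1, fprime_at_infinity(s1) - 1
    if abs(g1) < 1e-10:
        break

print("f''(0) =", s1)  # the classical value is about 0.332
```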
Hydromagnetic conditions near the core-mantle boundary
NASA Technical Reports Server (NTRS)
Backus, George E.
1995-01-01
The main results of the grant were (1) finishing the manuscript of a proof of completeness of the Poincare modes in an incompressible nonviscous fluid corotating with a rigid ellipsoidal boundary, (2) partial completion of a manuscript describing a definition of helicity that resolved questions in the literature about calculating the helicities of vector fields with complicated topologies, and (3) the beginning of a reexamination of the inverse problem of inferring properties of the geomagnetic field B just outside the core-mantle boundary (CMB) from measurements of elements of B at and above the earth's surface. This last work has led to a simple general formalism for linear and nonlinear inverse problems that appears to include all the inversion schemes so far considered for the uniqueness problem in geomagnetic inversion. The technique suggests some new methods for error estimation that form part of this report.
Frnakenstein: multiple target inverse RNA folding.
Lyngsø, Rune B; Anderson, James W J; Sizikova, Elena; Badugu, Amarendra; Hyland, Tomas; Hein, Jotun
2012-10-09
RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, the interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem from an algorithmic viewpoint has an elegant and efficient solution, the inverse RNA folding problem appears to be hard. In this paper we present a genetic algorithm approach to solve the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of the inverse folding problem, the multi-target inverse folding problem, while simultaneously designing a method with superior performance when measured on the quality of designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods and several data sets totalling 769 real and predicted single structure targets, and on 292 two structure targets. It performed as well as or better than all existing methods at finding sequences which folded in silico into the target structure, without the heavy bias towards CG base pairs that was observed for all other top performing methods. On the two structure targets it also performed well, generating a perfect design for about 80% of the targets. Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions. The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein.
Numerical Inverse Scattering for the Toda Lattice
NASA Astrophysics Data System (ADS)
Bilman, Deniz; Trogdon, Thomas
2017-06-01
We present a method to compute the inverse scattering transform (IST) for the famed Toda lattice by solving the associated Riemann-Hilbert (RH) problem numerically. Deformations for the RH problem are incorporated so that the IST can be evaluated in O(1) operations for arbitrary points in the (n, t)-domain, including short- and long-time regimes. No time-stepping is required to compute the solution because (n, t) appear as parameters in the associated RH problem. The solution of the Toda lattice is computed in long-time asymptotic regions where the asymptotics are not known rigorously.
X-38 Experimental Controls Laws
NASA Technical Reports Server (NTRS)
Munday, Steve; Estes, Jay; Bordano, Aldo J.
2000-01-01
X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic-inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high-fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
The information on external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or the assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating the input to a dynamic system when the system output and the impulse response functions are the known quantities. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution this knowledge is used to develop a more suitable force reconstruction method, which identifies the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals is demonstrated for two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
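A hedged sketch of the sparse-recovery step using an l1-regularized least-squares (Lasso) solver, which addresses the same convex program as BPDN up to the weighting convention; the impulse response, measurement setup, and regularization weight are invented stand-ins:

```python
import numpy as np
from sklearn.linear_model import Lasso

n = 200
rng = np.random.default_rng(0)
h = np.exp(-0.05 * np.arange(n)) * np.sin(0.3 * np.arange(n))  # toy impulse response
# Causal convolution matrix: column k is the response to an impulse at sample k.
A = np.column_stack([np.roll(np.r_[h, np.zeros(n)], k)[:n] for k in range(n)])

f_true = np.zeros(n)
f_true[60] = 1.0                                  # sparse load: one impact event
y = A @ f_true + 0.01 * rng.standard_normal(n)    # noisy response measurement

# l1 regularization promotes the sparse force history BPDN is after.
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
f_rec = lasso.fit(A, y).coef_
print("recovered impact sample:", np.argmax(np.abs(f_rec)))  # near 60
```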
ERIC Educational Resources Information Center
Nunes, Terezinha; Bryant, Peter; Evans, Deborah; Bell, Daniel; Barros, Rossana
2012-01-01
The basis of this intervention study is a distinction between numerical calculus and relational calculus. The former refers to numerical calculations and the latter to the analysis of the quantitative relations in mathematical problems. The inverse relation between addition and subtraction is relevant to both kinds of calculus, but so far research…
NASA Astrophysics Data System (ADS)
Alkan, Hilal; Balkaya, Çağlayan
2018-02-01
We present an efficient inversion tool for parameter estimation from horizontal loop electromagnetic (HLEM) data using the Differential Search Algorithm (DSA), a recently proposed swarm-intelligence-based metaheuristic. The depth, dip, and origin of a thin subsurface conductor causing the anomaly are the parameters estimated by the HLEM method, commonly known as Slingram. The applicability of the developed scheme was first tested on two synthetically generated anomalies, with and without noise content. Two control parameters affecting the algorithm's convergence characteristics were tuned for these anomalies, which involve one and two conductive bodies, respectively. The tuned control parameters yielded more successful statistical results than the parameter couples widely used in DSA applications. Two field anomalies measured over a dipping graphitic shale in Northern Australia were then considered, and the algorithm provided depth estimates in good agreement with those of previous studies and with drilling information. Furthermore, the efficiency and reliability of the results obtained were investigated via probability density functions. Considering the results obtained, we conclude that DSA, characterized by its simple algorithmic structure, is an efficient and promising metaheuristic for other relatively low-dimensional geophysical inverse problems. Finally, researchers familiar with the developed scheme, which is easy to use and flexible, can readily modify and extend it for their own optimization problems.
Efficient Ab initio Modeling of Random Multicomponent Alloys
Jiang, Chao; Uberuaga, Blas P.
2016-03-08
Here, we present in this Letter a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multi-component alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high entropy) alloys as examples, we also demonstrate that a SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high entropy alloy chemistries. Furthermore, the SSOS method developed here can be broadly useful for the rapid computational design of multi-component materials, especially those with a large number of alloying elements, a challenging problem for other approaches.
NASA Astrophysics Data System (ADS)
Pi, E. I.; Siegel, E.
2010-03-01
Siegel [AMS Natl. Mtg. (2002) Abs. 973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC: Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks (VS. ANALOG INvulnerability) via Barabasi NP (VS. dynamics [Not. AMS (5/2009)] critique); (so-called) ``quantum-computing'' (QC) = simple-arithmetic (sans division); algorithmic complexities: INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS (so MIScalled) ``noise''-induced-phase-transition (NIT) ACCELERATION: Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query (VS. Goldreich [Not. AMS (2002)] How? mea culpa) = ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor [Physica A, 341, 586 (04)] BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation (3 millennia AGO geometry: NO: CC, ``CS''; ``Feet of Clay!!!'']; Query WHAT?: Definition: (so MIScalled) ``complexity'' = UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).
Inverse kinematic-based robot control
NASA Technical Reports Server (NTRS)
Wolovich, W. A.; Flueckiger, K. F.
1987-01-01
A fundamental problem which must be resolved in virtually all non-trivial robotic operations is the well-known inverse kinematic question. More specifically, most of the tasks which robots are called upon to perform are specified in Cartesian (x, y, z) space, such as simple tracking along one or more straight-line paths or following a specified surface with compliant force sensors and/or visual feedback. In all cases, control is actually implemented through coordinated motion of the various links which comprise the manipulator, i.e., in link space. As a consequence, the control computer of every sophisticated anthropomorphic robot must contain provisions for solving the inverse kinematic problem which, in the case of simple, non-redundant position control, involves the determination of the first three link angles, θ_1, θ_2, and θ_3, which produce a desired wrist origin position P_xw, P_yw, and P_zw at the end of link 3 relative to some fixed base frame. Researchers outline a new inverse kinematic solution and demonstrate its potential via some recent computer simulations. They also compare it to current inverse kinematic methods and outline some of the remaining problems which will be addressed in order to render it fully operational. Also discussed are a number of practical consequences of this technique beyond its obvious use in solving the inverse kinematic question.
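As a hedged illustration of the inverse kinematic question on the simplest possible case, here is closed-form IK for a planar two-link arm (this toy is not the researchers' 3-D method; it only shows the Cartesian-to-link-space mapping the text describes):

```python
import numpy as np

def two_link_ik(x, y, l1=1.0, l2=0.8, elbow_up=True):
    """Closed-form joint angles placing a two-link planar arm's tip at (x, y)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)  # law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    s2 = np.sqrt(1 - c2**2) * (1 if elbow_up else -1)   # two mirror solutions
    theta2 = np.arctan2(s2, c2)
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

t1, t2 = two_link_ik(1.2, 0.9)
# Forward check: the joint angles reproduce the requested tip position.
x = np.cos(t1) + 0.8 * np.cos(t1 + t2)
y = np.sin(t1) + 0.8 * np.sin(t1 + t2)
print(np.round([x, y], 6))  # -> [1.2, 0.9]
```

Even this trivial case shows the hallmarks of the general problem: multiple solutions (elbow up/down) and unreachable targets, both of which any link-space controller must handle.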
Classical and quantum dynamics in an inverse square potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guillaumín-España, Elisa, E-mail: ege@correo.azc.uam.mx; Núñez-Yépez, H. N., E-mail: nyhn@xanum.uam.mx; Salas-Brito, A. L., E-mail: asb@correo.azc.uam.mx
2014-10-15
The classical motion of a particle in a 3D inverse square potential with negative energy, E, is shown to be geodesic, i.e., equivalent to the particle's free motion on a non-compact phase space manifold, irrespective of the sign of the coupling constant. We thus establish that all its classical orbits with E < 0 are unbounded. To analyse the corresponding quantum problem, the Schrödinger equation is solved in momentum space. No discrete energy levels exist in the unrenormalized case and the system shows a complete "fall to the center", with an energy spectrum unbounded from below. Such behavior corresponds to the non-existence of bound classical orbits. The symmetry of the problem is SO(3) × SO(2, 1), corroborating previously obtained results.
Regularity Aspects in Inverse Musculoskeletal Biomechanics
NASA Astrophysics Data System (ADS)
Lund, Marie; Stâhl, Fredrik; Gulliksson, Mârten
2008-09-01
Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximating quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing and not pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start towards ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
On the inverse problem of blade design for centrifugal pumps and fans
NASA Astrophysics Data System (ADS)
Kruyt, N. P.; Westra, R. W.
2014-06-01
The inverse problem of blade design for centrifugal pumps and fans has been studied. The solution to this problem provides the geometry of rotor blades that realize specified performance characteristics, together with the corresponding flow field. Here a three-dimensional solution method is described in which the so-called meridional geometry is fixed and the distribution of the azimuthal angle at the three-dimensional blade surface is determined for blades of infinitesimal thickness. The developed formulation is based on potential-flow theory. Besides the blade impermeability condition at the pressure and suction side of the blades, an additional boundary condition at the blade surface is required in order to fix the unknown blade geometry. For this purpose the mean-swirl distribution is employed. The iterative numerical method is based on a three-dimensional finite element method approach in which the flow equations are solved on the domain determined by the latest estimate of the blade geometry, with the mean-swirl distribution boundary condition at the blade surface being enforced. The blade impermeability boundary condition is then used to find an improved estimate of the blade geometry. The robustness of the method is increased by specific techniques, such as spanwise-coupled solution of the discretized impermeability condition and the use of under-relaxation in adjusting the estimates of the blade geometry. Various examples are shown that demonstrate the effectiveness and robustness of the method in finding a solution for the blade geometry of different types of centrifugal pumps and fans. The influence of the employed mean-swirl distribution on the performance characteristics is also investigated.
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
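The abstract describes the algorithm only at a high level. As an illustrative, hypothetical modern rendering of partitioned inversion of a symmetric positive definite matrix via Schur complements (a generalized Gaussian elimination in the same spirit, not the original SOLVE FORTRAN implementation):

```python
import numpy as np

def block_inverse(A, block=64):
    """Invert an SPD matrix by recursive 2x2 partitioning; only blocks of
    the matrix are manipulated at each level, mirroring the small-core idea."""
    n = A.shape[0]
    if n <= block:                      # small base case: direct inversion
        return np.linalg.inv(A)
    k = n // 2
    A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
    B11 = block_inverse(A11, block)     # invert leading block recursively
    S = A22 - A12.T @ B11 @ A12         # Schur complement (also SPD)
    B22 = block_inverse(S, block)
    B12 = -B11 @ A12 @ B22
    top = np.hstack([B11 + B11 @ A12 @ B22 @ A12.T @ B11, B12])
    bot = np.hstack([B12.T, B22])
    return np.vstack([top, bot])

# sanity check on a random SPD matrix (normal-equation-like system)
X = np.random.randn(300, 300)
A = X @ X.T + 300 * np.eye(300)
assert np.allclose(block_inverse(A) @ A, np.eye(300), atol=1e-8)
```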
An evaluation of collision models in the Method of Moments for rarefied gas problems
NASA Astrophysics Data System (ADS)
Emerson, David; Gu, Xiao-Jun
2014-11-01
The Method of Moments offers an attractive approach for solving gaseous transport problems that are beyond the limit of validity of the Navier-Stokes-Fourier equations. Recent work has demonstrated the capability of the regularized 13- and 26-moment equations for solving problems when the Knudsen number, Kn (the ratio of the mean free path of a gas to a typical length scale of interest), is in the range 0.1 to 1.0, the so-called transition regime. In comparison to numerical solutions of the Boltzmann equation, the Method of Moments has captured, both qualitatively and quantitatively, the results of classical test problems in kinetic theory, e.g. velocity slip in Kramers' problem, temperature jump in Knudsen layers, the Knudsen minimum, etc. However, most of these results have been obtained for Maxwell molecules, where molecules repel each other according to an inverse fifth-power rule. Recent work has incorporated more traditional collision models such as BGK, S-model, and ES-BGK, the latter being important for thermal problems where the Prandtl number can vary. We are currently investigating the impact of these collision models on fundamental low-speed problems of particular interest to micro-scale flows, which will be discussed and evaluated in the presentation. Supported by the Engineering and Physical Sciences Research Council under Grant EP/I011927/1 and CCP12.
Un, M Kerem; Kaghazchi, Hamed
2018-01-01
When a signal is initiated in the nerve, it is transmitted along each nerve fiber via an action potential (called a single fiber action potential (SFAP)) which travels with a velocity that is related to the diameter of the fiber. The additive superposition of SFAPs constitutes the compound action potential (CAP) of the nerve. The fiber diameter distribution (FDD) in the nerve can be computed from the CAP data by solving an inverse problem. This is usually achieved by dividing the fibers into a finite number of diameter groups and solving a corresponding linear system to optimize the FDD. However, the number of fibers in a nerve can sometimes be measured in the thousands, and it is possible to assume a continuous distribution for the fiber diameters, which leads to a gradient optimization problem. In this paper, we have evaluated this continuous approach to the solution of the inverse problem. We have utilized an analytical function for the SFAP and assumed a polynomial form for the FDD. The inverse problem then involves the optimization of the polynomial coefficients to obtain the best estimate for the FDD. We have observed that an eighth-order polynomial for the FDD can capture both the unimodal and bimodal fiber distributions present in vivo, even in the case of noisy CAP data. The assumed FDD form regularizes the ill-conditioned inverse problem and produces good results.
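A hedged, self-contained sketch of the idea (the SFAP kernel below is a placeholder, not the paper's analytical expression, and the paper uses gradient optimization; because the CAP is linear in the polynomial coefficients of the FDD, a least-squares fit suffices here):

```python
import numpy as np

def sfap(t, d):
    """Toy single-fiber action potential for diameter d (um) at time t (ms),
    assuming velocity proportional to diameter over a 50 mm conduction path."""
    tau = 50.0 / (6.0 * d)                          # assumed arrival time
    return (t - tau) * np.exp(-((t - tau) ** 2) / 0.05)

t = np.linspace(0.0, 5.0, 400)                      # time samples
d = np.linspace(2.0, 20.0, 200)                     # fiber diameters
A = sfap(t[:, None], d[None, :])                    # one SFAP column per diameter

true_fdd = np.exp(-(d - 6)**2 / 2) + 0.6 * np.exp(-(d - 13)**2 / 4)  # bimodal
rng = np.random.default_rng(0)
cap = A @ true_fdd + 0.5 * rng.standard_normal(t.size)               # noisy CAP

order = 8                                           # eighth-order polynomial FDD
dn = d / d.max()                                    # normalized for conditioning
G = np.stack([A @ (dn ** j) for j in range(order + 1)], axis=1)
coef, *_ = np.linalg.lstsq(G, cap, rcond=None)      # polynomial coefficients
fdd_est = np.polynomial.polynomial.polyval(dn, coef)
```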
Variable-permittivity linear inverse problem for the H(sub z)-polarized case
NASA Technical Reports Server (NTRS)
Moghaddam, M.; Chew, W. C.
1993-01-01
The H(sub z)-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H(sub z)-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.
Battered Husbands and Battered Wives: Why One Is a Social Problem and the Other Is Not.
ERIC Educational Resources Information Center
Lucal, Betsy
A number of factors came together in the 1970s to create a social problem called "battered wives". Then, beginning in 1977, there was an attempt to create a social problem called "battered husbands." So far, such attempts have been unsuccessful. This analysis compares the issue of battered husbands and battered wives to…
Feynman propagators on static spacetimes
NASA Astrophysics Data System (ADS)
Dereziński, Jan; Siemssen, Daniel
We consider the Klein-Gordon equation on a static spacetime, minimally coupled to a static electromagnetic potential. We show that the corresponding operator is essentially self-adjoint on C_c^∞. We discuss various distinguished inverses and bisolutions of the Klein-Gordon operator, focusing on the so-called Feynman propagator. We show that the Feynman propagator can be considered the boundary value of the resolvent of the Klein-Gordon operator, in the spirit of the limiting absorption principle known from the theory of Schrödinger operators. We also show that the Feynman propagator is the limit of the inverse of the Wick rotated Klein-Gordon operator.
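A hedged one-line summary of the first characterization, in our notation (K the essentially self-adjoint Klein-Gordon operator):

```latex
% The Feynman propagator as the boundary value of the resolvent, in the
% spirit of the limiting absorption principle; the limit is understood in
% a suitable weighted operator topology.
G^{\mathrm{F}} \;=\; \lim_{\epsilon \searrow 0}\,\bigl( K - i\epsilon \bigr)^{-1} .
```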
NASA Astrophysics Data System (ADS)
Capdeville, Yann; Métivier, Ludovic
2018-05-01
Seismic imaging is an efficient tool to investigate the Earth's interior. Many of the imaging techniques currently used, including so-called full waveform inversion (FWI), are based on limited-frequency-band data. Such data are not sensitive to the true earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has recently been developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited-frequency-band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor was, in each of our tests, always a good choice. We show how homogenization can help in understanding FWI behaviour and in improving its robustness and convergence by efficiently constraining the solution space of the inverse problem.
Gravitational Collapse of Magnetized Clouds. II. The Role of Ohmic Dissipation
NASA Astrophysics Data System (ADS)
Shu, Frank H.; Galli, Daniele; Lizano, Susana; Cai, Mike
2006-08-01
We formulate the problem of magnetic field dissipation during the accretion phase of low-mass star formation, and we carry out the first step of an iterative solution procedure by assuming that the gas is in free fall along radial field lines. This so-called “kinematic approximation” ignores the back reaction of the Lorentz force on the accretion flow. In quasi-steady state, and assuming the resistivity coefficient to be spatially uniform, the problem is analytically soluble in terms of Legendre polynomials and confluent hypergeometric functions. The dissipation of the magnetic field occurs inside a region of radius inversely proportional to the mass of the central star (the “Ohm radius”), where the magnetic field becomes asymptotically straight and uniform. In our solution the magnetic flux problem of star formation is avoided because the magnetic flux dragged into the accreting protostar is always zero. Our results imply that the effective resistivity of the infalling gas must be higher by at least 1 order of magnitude than the microscopic electric resistivity, to avoid conflict with measurements of paleomagnetism in meteorites and with the observed luminosity of regions of low-mass star formation.
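The stated inverse proportionality to the central mass can be anticipated on dimensional grounds alone (a hedged estimate, not the paper's exact coefficient):

```latex
% With uniform resistivity \eta (units cm^2 s^{-1}) acting as a magnetic
% diffusivity and central mass M_*, the unique length scale that can be
% built from \eta and GM_* is
r_{\mathrm{Ohm}} \;\sim\; \frac{\eta^{2}}{G M_{*}} ,
```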
MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data
NASA Astrophysics Data System (ADS)
Key, Kerry
2016-10-01
This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.
Hernandez-Ferrer, Carles; Quintela Garcia, Ines; Danielski, Katharina; Carracedo, Ángel; Pérez-Jurado, Luis A; González, Juan R
2015-05-20
The well-known Genome-Wide Association Studies (GWAS) have led to many scientific discoveries using SNP data. Even so, they have not been able to explain the full heritability of complex diseases. Now other structural variants, like copy number variants or DNA inversions, either germ-line or in mosaicism events, are being studied. We present the R package affy2sv for pre-processing Affymetrix CytoScan HD/750k array data (also Genome-Wide SNP 5.0/6.0 and Axiom) in structural variant studies. We illustrate the capabilities of affy2sv using two different complete pipelines on real data: the first performs a GWAS and a mosaic alteration detection study, the other detects CNVs and performs inversion calling. Both examples presented in the article show how affy2sv can be used as part of more complex pipelines aimed at analyzing Affymetrix SNP array data in genetic association studies where different types of structural variants are considered.
Extreme values and fat tails of multifractal fluctuations
NASA Astrophysics Data System (ADS)
Muzy, J. F.; Bacry, E.; Kozhemyak, A.
2006-06-01
In this paper we discuss the problem of estimating the occurrence probability of extreme events for data drawn from a multifractal process. We also study the heavy (power-law) tail behavior of the probability density function associated with such data. We show that, because of strong correlations, the standard extreme value approach is not valid and classical tail exponent estimators should be interpreted cautiously. Extreme statistics associated with multifractal random processes turn out to be characterized by non-self-averaging properties. Our considerations rely upon an analogy between random multiplicative cascades and the physics of disordered systems, and also on recent mathematical results about the so-called multifractal formalism. Applied to financial time series, our findings allow us to propose a unified framework that accounts for the observed multiscaling properties of return fluctuations, the volatility clustering phenomenon and the observed “inverse cubic law” of the return pdf tails.
NASA Astrophysics Data System (ADS)
Scolan, Y.-M.; Korobkin, A. A.
2003-02-01
Hydrodynamic impact phenomena are three-dimensional in nature, and naval architects need more advanced tools than a simple strip theory to calculate impact loads at the preliminary design stage. Three-dimensional analytical solutions have been obtained with the help of the so-called inverse Wagner problem, as discussed by Scolan and Korobkin in 2001. The approach by Wagner provides a consistent way to evaluate the flow caused by a blunt body entering liquid through its free surface. However, this approach does not account for the spray jets and gives no idea of the energy evacuated from the main flow by the jets. Clear insight into the jet formation is required. Wagner provided certain elements of the answer for two-dimensional configurations. On the basis of those results, the energy distribution pattern is analysed for three-dimensional configurations in the present paper.
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de
2015-12-15
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique, which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
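For readers unfamiliar with the building block, here is a hedged sketch of the plain walk-on-spheres algorithm (Laplace equation, Dirichlet data, unit disk); the paper's estimator additionally handles partial reflection at Neumann and interface boundaries, which is omitted here:

```python
import numpy as np

def walk_on_spheres(x0, g, eps=1e-4, n_walks=20_000, rng=None):
    """Estimate u(x0) for Laplace's equation on the unit disk with Dirichlet
    data g, by repeatedly jumping to a uniform point on the largest circle
    inside the domain until the walker is within eps of the boundary."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_walks):                       # embarrassingly parallel
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.linalg.norm(x)            # distance to the circle
            if r < eps:                            # close enough: read boundary
                total += g(x / np.linalg.norm(x))
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x += r * np.array([np.cos(theta), np.sin(theta)])
    return total / n_walks                         # Monte Carlo average

# harmonic check: u(x, y) = x*y solves Laplace's equation, so the estimate
# at (0.3, 0.4) should be close to 0.12
print(walk_on_spheres([0.3, 0.4], lambda p: p[0] * p[1]))
```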
Estimation of road profile variability from measured vehicle responses
NASA Astrophysics Data System (ADS)
Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.
2016-05-01
When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process for adapting their products to durability requirements. In the present paper, a data processing algorithm is proposed to estimate the road profiles covered by a given vehicle from the dynamic responses measured on this vehicle. The algorithm, based on Kalman filtering theory, aims at solving a so-called inverse problem in a stochastic framework. It is validated using both simulated data and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterization campaign carried out by Renault within one of its markets.
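A minimal linear Kalman filter sketch of the estimation step, with an assumed random-walk road profile and a surrogate scalar "vehicle response" measurement (the paper's vehicle model, state vector and matrices are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
road = np.cumsum(0.03 * rng.standard_normal(n))   # synthetic road profile
z_all = road + 0.1 * rng.standard_normal(n)       # noisy measured responses

F = np.array([[1.0, 1.0], [0.0, 1.0]])            # profile and its slope
H = np.array([[1.0, 0.0]])                        # we observe the profile only
Q = np.diag([1e-4, 1e-4])                         # process noise covariance
R = np.array([[1e-2]])                            # measurement noise covariance

x, P = np.zeros(2), np.eye(2)
est = []
for z in z_all:
    x = F @ x                                     # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)           # update with measurement
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])                              # road profile estimate

print('rms error:', np.sqrt(np.mean((np.array(est) - road) ** 2)))
```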
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-12-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
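A hedged sketch of one of the two regularization routes described (discretized Abel equation with second-order Tikhonov regularization; the compact-set constraints of the paper are not shown):

```python
import numpy as np

# Discretize g(y) = 2 * integral_y^R f(r) r / sqrt(r^2 - y^2) dr by midpoint
# quadrature on a staggered grid, then solve the regularized normal equations.
n, R = 100, 1.0
r = np.linspace(0.0, R, n + 1)[1:]           # radial grid (skip r = 0)
h = r[1] - r[0]
y = r - h / 2.0                              # staggered observation grid

K = np.zeros((n, n))                         # forward (Abel) matrix
for i in range(n):
    for j in range(n):
        if r[j] > y[i]:
            K[i, j] = 2.0 * r[j] * h / np.sqrt(r[j]**2 - y[i]**2)

f_true = np.exp(-8.0 * (r - 0.4)**2)         # test radial profile
rng = np.random.default_rng(2)
g = K @ f_true + 1e-3 * rng.standard_normal(n)

D2 = np.diff(np.eye(n), 2, axis=0)           # second-derivative operator
lam = 1e-3                                   # regularization parameter
f_rec = np.linalg.solve(K.T @ K + lam * D2.T @ D2, K.T @ g)
print('relative error:', np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
```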
Namiot, V A
2016-01-01
It is known that in quantum mechanics the act of observing an experiment can affect the experimental findings in some cases. In particular, this happens in the so-called Zeno effect. In this work it is shown that, in contrast to the "standard" Zeno effect, where the act of observing a process reduces the probability of its realization, an inverse situation can occur when a particle transmits through a potential barrier (a so-called barrier anti-Zeno effect): the observation of the particle essentially increases the probability of its transmission through the barrier. The possibility of using the barrier anti-Zeno effect is discussed as an explanation of the paradoxical results of experiments on "cold nuclear fusion" observed in various systems, including biological ones. (According to the observers who performed the experiments, energy generation that cannot be explained by any chemical processes, as well as changes in the isotope and even element composition of the studied object, may occur in these systems.)
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
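In the matrix notation the abstract refers to, the discrete renormalized solution is a minimum weighted-norm solution; a hedged sketch with our symbols (A the sensitivity matrix, W a weight matrix satisfying the renormalization condition, y_0 the measurements):

```latex
% Minimum weighted-norm (generalized-inverse) form of the discrete
% renormalized source estimate:
\hat{\mathbf{s}}
  \;=\; W^{-1} A^{\mathsf{T}} \bigl( A\, W^{-1} A^{\mathsf{T}} \bigr)^{-1} \mathbf{y}_{0}
  \;=\; \arg\min \bigl\{\, \mathbf{s}^{\mathsf{T}} W \mathbf{s} \;:\; A\mathbf{s} = \mathbf{y}_{0} \,\bigr\} .
```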
Implementing wavelet inverse-transform processor with surface acoustic wave device.
Lu, Wenke; Zhu, Changchun; Liu, Qinghong; Zhang, Jingduan
2013-02-01
The objective of this research was to investigate implementation schemes for a wavelet inverse-transform processor using a surface acoustic wave (SAW) device, the length function defining the electrodes, and the possibility of addressing the load resistance and the internal resistance of such a processor. In the implementation scheme in which the input interdigital transducer (IDT) and output IDT stand in a line, the electrode-overlap envelope of the input IDT is identical to that of the output IDT (i.e. the two transducers are identical), so the product of the input IDT's frequency response and the output IDT's frequency response can be implemented, and the wavelet inverse-transform processor can be fabricated. X-112°Y LiTaO3 is used as the substrate material. The size of the processor using this implementation scheme is small, so its cost is low. First, the length function of the electrodes is defined according to the envelope function of the wavelet; the lengths of the electrodes are then calculated from this length function; finally, the input and output IDTs are designed according to the lengths and widths of the electrodes. We also identify the load resistance and the internal resistance as two problems of the wavelet inverse-transform processor using SAW devices, and solutions to these problems are achieved in this study: when amplifiers are attached to the input and output ends of the processor, they eliminate the influence of the load resistance and the internal resistance on its output voltage. Copyright © 2012 Elsevier B.V. All rights reserved.
Spatio-temporal reconstruction of brain dynamics from EEG with a Markov prior.
Hansen, Sofie Therese; Hansen, Lars Kai
2017-03-01
Electroencephalography (EEG) can capture brain dynamics in high temporal resolution. By projecting the scalp EEG signal back to its origin in the brain also high spatial resolution can be achieved. Source localized EEG therefore has potential to be a very powerful tool for understanding the functional dynamics of the brain. Solving the inverse problem of EEG is however highly ill-posed as there are many more potential locations of the EEG generators than EEG measurement points. Several well-known properties of brain dynamics can be exploited to alleviate this problem. More short ranging connections exist in the brain than long ranging, arguing for spatially focal sources. Additionally, recent work (Delorme et al., 2012) argues that EEG can be decomposed into components having sparse source distributions. On the temporal side both short and long term stationarity of brain activation are seen. We summarize these insights in an inverse solver, the so-called "Variational Garrote" (Kappen and Gómez, 2013). Using a Markov prior we can incorporate flexible degrees of temporal stationarity. Through spatial basis functions spatially smooth distributions are obtained. Sparsity of these are inherent to the Variational Garrote solver. We name our method the MarkoVG and demonstrate its ability to adapt to the temporal smoothness and spatial sparsity in simulated EEG data. Finally a benchmark EEG dataset is used to demonstrate MarkoVG's ability to recover non-stationary brain dynamics. Copyright © 2016 Elsevier Inc. All rights reserved.
Network-Physics(NP) Bec DIGITAL(#)-VULNERABILITY Versus Fault-Tolerant Analog
NASA Astrophysics Data System (ADS)
Alexander, G. K.; Hathaway, M.; Schmidt, H. E.; Siegel, E.
2011-03-01
Siegel[AMS Joint Mtg.(2002)-Abs.973-60-124] digits logarithmic-(Newcomb(1881)-Weyl(1914; 1916)-Benford(1938)-"NeWBe"/"OLDbe")-law algebraic-inversion to ONLY BEQS BEC:Quanta/Bosons= digits: Synthesis reveals EMP-like SEVERE VULNERABILITY of ONLY DIGITAL-networks(VS. FAULT-TOLERANT ANALOG INvulnerability) via Barabasi "Network-Physics" relative-``statics''(VS.dynamics-[Willinger-Alderson-Doyle(Not.AMS(5/09)]-]critique); (so called)"Quantum-computing is simple-arithmetic(sans division/ factorization); algorithmic-complexities: INtractibility/ UNdecidability/ INefficiency/NONcomputability / HARDNESS(so MIScalled) "noise"-induced-phase-transitions(NITS) ACCELERATION: Cook-Levin theorem Reducibility is Renormalization-(Semi)-Group fixed-points; number-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(02)] How? mea culpa)can ONLY be MBCS "hot-plasma" versus digit-clumping NON-random BEC; Modular-arithmetic Congruences= Signal X Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)] BEC logarithmic-law inversion factorization:Watkins number-thy. U stat.-phys.); P=/=NP TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation via geometry.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
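A hedged sketch of the workflow's first stage using the third-party `cma` package (pip install cma); the toy misfit below stands in for the expensive PDE-constrained misfits of the paper, and the crude thresholding is only meant to illustrate collecting low-misfit samples:

```python
import numpy as np
import cma   # assumed available; Hansen's CMA-ES reference implementation

def misfit(m):
    """Toy multimodal-ish misfit in place of a real forward PDE solve."""
    return np.sum((np.tanh(m) - np.tanh([0.5, -1.0, 2.0]))**2)

es = cma.CMAEvolutionStrategy(3 * [0.0], 1.0, {'seed': 7, 'verbose': -9})
equivalent = []
while not es.stop():
    models = es.ask()                      # propose a population of models
    fits = [misfit(m) for m in models]     # embarrassingly parallel in practice
    es.tell(models, fits)                  # adapt mean and covariance
    equivalent += [m for m, f in zip(models, fits) if f < 1e-2]

print(len(equivalent), 'low-misfit (equivalent) models collected')
```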
Stable Lévy motion with inverse Gaussian subordinator
NASA Astrophysics Data System (ADS)
Kumar, A.; Wyłomańska, A.; Gajda, J.
2017-09-01
In this paper we study the stable Lévy motion subordinated by the so-called inverse Gaussian process. This process extends the well-known normal inverse Gaussian (NIG) process introduced by Barndorff-Nielsen, which arises by subordinating ordinary Brownian motion (with drift) with an inverse Gaussian process. The NIG process has found many interesting applications, especially in financial data description. We discuss here the main features of the introduced subordinated process, such as distributional properties, existence of fractional-order moments and asymptotic tail behavior. We show the connection of the process with a continuous time random walk. Further, the governing fractional partial differential equation for the probability density function is also obtained. Moreover, we discuss the asymptotic distribution of the sample mean square displacement, the main tool in the detection of anomalous diffusion phenomena (Metzler et al., 2014). In order to apply the stable Lévy motion time-changed by an inverse Gaussian subordinator, we propose a step-by-step procedure for parameter estimation. At the end, we show how the examined process can be useful for modeling financial time series.
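A hedged simulation sketch with purely illustrative parameters (not the paper's estimation procedure): increments of a stable Lévy motion over an operational-time step dS have scale dS**(1/alpha), so one can subordinate by drawing IG increments first:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(3)

def inverse_gaussian(mu, lam, size, rng):
    """IG(mu, lam) samples via the Michael-Schucany-Haas algorithm."""
    y = rng.standard_normal(size) ** 2
    x = mu + mu**2 * y / (2*lam) - mu/(2*lam) * np.sqrt(4*mu*lam*y + mu**2 * y**2)
    return np.where(rng.uniform(size=size) <= mu / (mu + x), x, mu**2 / x)

alpha, n, dt = 1.8, 10_000, 1.0 / 252
dS = inverse_gaussian(mu=dt, lam=0.02, size=n, rng=rng)   # subordinator steps
dX = levy_stable.rvs(alpha, 0.0, scale=dS**(1.0 / alpha),
                     size=n, random_state=rng)            # stable increments
X = np.cumsum(dX)                                         # subordinated path
```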
Model-based elastography: a survey of approaches to the inverse elasticity problem
Doyley, M M
2012-01-01
Elastography is emerging as an imaging modality that can distinguish normal from diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas — quasi-static, harmonic, and transient — and describes inversion schemes for each elastographic imaging approach. Approaches include first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839
Preventing Boys' Problems in Schools through Psychoeducational Programming: A Call to Action
ERIC Educational Resources Information Center
O'Neil, James M.; Lujan, Melissa L.
2009-01-01
Controversy currently exists on whether boys are in crisis and, if so, what to do about it. Research is reviewed that indicates that boys have problems that affect their emotional and interpersonal functioning. Psychoeducational and preventive programs for boys are recommended as a call to action in schools. Thematic areas for boys' programming…
ERIC Educational Resources Information Center
Schlotz, Wolff; Jones, Alexander; Godfrey, Keith M.; Phillips, David I. W.
2008-01-01
Background: Inverse associations of fetal growth with behavioural problems in childhood have been repeatedly reported, suggesting long-term effects of the prenatal developmental environment on behaviour later in life. However, no study so far has examined effects on temperament and potential developmental pathways. Temperamental traits may be…
NASA Astrophysics Data System (ADS)
Placko, Dominique; Bore, Thierry; Rivollet, Alain; Joubert, Pierre-Yves
2015-10-01
This paper deals with the problem of imaging defects in metallic structures through eddy current (EC) inspections, and proposes an original process for a possible tomographic crack evaluation. This process is based on a semi-analytical modeling approach, called the "distributed point source method" (DPSM), which is used to describe and equate the interactions between the implemented EC probes and the structure under test. Several steps are successively described, illustrating the feasibility of this new imaging process dedicated to the quantitative evaluation of defects. The basic principle of the imaging process is first to create a 3D grid by meshing the volume potentially inspected by the sensor, yielding a given number of elemental volumes (called voxels). Second, DPSM modeling is used to compute an image for every occurrence in which exactly one of the voxels has a conductivity different from all the others. The underlying assumption is that a real defect can be represented by a superposition of elemental voxels; the resulting accuracy naturally depends on the density of the spatial sampling. On the other hand, the excitation device of the EC imager can be oriented in several directions and driven by an excitation current at variable frequency, so the simulation is performed for several frequencies and directions of the eddy currents induced in the structure, which increases the signal entropy. All these results are merged into a so-called "observation matrix" containing all the probe/structure interaction configurations. This matrix is then used in an inversion scheme in order to evaluate the defect location and geometry. The modeled EC data provided by the DPSM are compared to the experimental images provided by an eddy current imager (ECI) applied to aluminum plates containing buried defects. In order to validate the proposed inversion process, we feed it with computed images of various acquisition configurations. Additive noise was applied to the images so that they are more representative of actual EC data. In the case of simple notch-type defects, for which the relative conductivity may take only two extreme values (1 or 0), a threshold was introduced on the inverted images in a post-processing step, taking advantage of a priori knowledge of the statistical properties of the restored images. This threshold enhanced the image contrast and helped eliminate both the residual noise and the pixels showing non-realistic values.
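A hedged toy sketch of the voxel-based inversion-plus-threshold step (random stand-in columns; in the paper, each column of the observation matrix comes from a DPSM simulation of one deviating voxel at one frequency and orientation):

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_vox = 600, 400
O = rng.standard_normal((n_meas, n_vox))          # observation matrix stand-in

x_true = np.zeros(n_vox)
x_true[150:170] = 1.0                             # notch-type defect (0/1)
b = O @ x_true + 0.05 * rng.standard_normal(n_meas)

lam = 1.0                                         # damped least squares
x = np.linalg.solve(O.T @ O + lam * np.eye(n_vox), O.T @ b)
x_bin = (x > 0.5).astype(float)                   # a priori 0/1 threshold
print('misclassified voxels:', int(np.sum(x_bin != x_true)))
```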
Face-Evoked Steady-State Visual Potentials: Effects of Presentation Rate and Face Inversion
Gruss, L. Forest; Wieser, Matthias J.; Schweinberger, Stefan R.; Keil, Andreas
2012-01-01
Face processing can be explored using electrophysiological methods. Research with event-related potentials has demonstrated the so-called face inversion effect, in which the N170 component is enhanced in amplitude and latency to inverted, compared to upright, faces. The present study explored the extent to which repetitive lower-level visual cortical engagement, reflected in flicker steady-state visual evoked potentials (ssVEPs), shows similar amplitude enhancement to face inversion. We also asked if inversion-related ssVEP modulation would be dependent on the stimulation rate at which upright and inverted faces were flickered. To this end, multiple tagging frequencies were used (5, 10, 15, and 20 Hz) across two studies (n = 21, n = 18). Results showed that amplitude enhancement of the ssVEP for inverted faces was found solely at higher stimulation frequencies (15 and 20 Hz). By contrast, lower frequency ssVEPs did not show this inversion effect. These findings suggest that stimulation frequency affects the sensitivity of ssVEPs to face inversion. PMID:23205009
NASA Astrophysics Data System (ADS)
Chen, Ye-Hong; Shi, Zhi-Cheng; Song, Jie; Xia, Yan
2018-02-01
In this paper, by invariant-based inverse engineering, we design classical driving fields to transfer quantum fluctuations between two suspended membranes in an optomechanical cavity system. The transfer can be quickly attained through a nonadiabatic evolution path determined by a so-called dynamical invariant. Such an evolution path allows one to optimize the occupancies of the unstable "intermediate" states; thus, the influence of cavity decays can be suppressed. Numerical simulation demonstrates that a perfect fluctuation transfer between two membranes can be rapidly achieved in one step, and the transfer is robust to both the amplitude noises and cavity decays.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multi-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.
Fundamental Mechanisms of NeuroInformation Processing: Inverse Problems and Spike Processing
2016-08-04
platform called Neurokernel for collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and their execution...example. We investigated the following nonlinear identification problem: given both the input signal u and the time sequence (t_k)_{k ∈ ℤ} at the output of...from a time sequence is to be contrasted with existing methods for rate-based models in neuroscience. In such models the output of the system is taken
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging using inversion of data obtained from the very early time electromagnetic system (VETEM) is discussed. The study was carried out using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT). It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.
Calculating tissue shear modulus and pressure by 2D log-elastographic methods
NASA Astrophysics Data System (ADS)
McLaughlin, Joyce R.; Zhang, Ning; Manduca, Armando
2010-08-01
Shear modulus imaging, often called elastography, enables detection and characterization of tissue abnormalities. In this paper the data are two displacement components obtained from successive MR or ultrasound data sets acquired while the tissue is excited mechanically. A 2D plane strain elastic model is assumed to govern the 2D displacement, u. The shear modulus, μ, is unknown, and whether or not the first Lamé parameter, λ, is known, the pressure p = λ∇·u appearing in the plane strain model cannot be measured, is unreliably computed from measured data, and can be shown to be an order-one quantity in units of kPa. So here we present a 2D log-elastographic inverse algorithm that (1) simultaneously reconstructs the shear modulus, μ, and the pressure p, which together satisfy a first-order partial differential equation system, with the goal of imaging μ; (2) controls potential exponential growth in the numerical error; and (3) reliably reconstructs the quantity p in the inverse algorithm, as compared to the same quantity computed with a forward algorithm. This work generalizes the log-elastographic algorithm in Lin et al (2009 Inverse Problems 25), which uses one displacement component, is derived assuming that the component satisfies the wave equation, and is tested on synthetic data computed with the wave equation model. The 2D log-elastographic algorithm is tested on 2D synthetic data and 2D in vivo data from the Mayo Clinic. We also exhibit examples to show that the 2D log-elastographic algorithm improves the quality of the recovered images as compared to the log-elastographic and direct inversion algorithms.
Spectral inversion of frequency-domain IP data obtained in Haenam, South Korea
NASA Astrophysics Data System (ADS)
Kim, B.; Nam, M. J.; Son, J. S.
2017-12-01
The spectral induced polarization (SIP) method, which uses a range of source frequencies, has been employed not only for mineral exploration but also for engineering and environmental applications. SIP interpretation first inverts the data at individual frequencies to obtain complex resistivity structures, which are then further analyzed with the Cole-Cole model to explain the frequency-dependent characteristics. However, owing to the difficulty of fitting the Cole-Cole model, there is a movement to interpret a complex resistivity structure inverted from single-frequency data only: the so-called "complex resistivity survey". Further, simultaneous inversion of multi-frequency SIP data, rather than inversion of single-frequency data, has been studied to reduce the ambiguity and artefacts of independent single-frequency inversions in obtaining a complex resistivity structure, despite the dispersion of complex resistivity with source frequency. Employing the simultaneous inversion method, this study inverts field SIP data acquired over an epithermal mineralized area in Haenam, at the southernmost tip of South Korea. The area contains polarizable structures because of extensive hydrothermal alteration and gold-silver deposits. After the inversion, we compare inversion results based on the multi-frequency data with those based on single-frequency data sets to evaluate the performance of simultaneous inversion of multi-frequency SIP data.
Trimming and procrastination as inversion techniques
NASA Astrophysics Data System (ADS)
Backus, George E.
1996-12-01
By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.
2011-04-01
L1u. Assume that geodesic lines generated by the eikonal equation corresponding to the function c(x) are regular, i.e. any two points in R³ can be...source x₀ is located far from Ω, then similarly to (107), Δl(x, x₀) ≈ 0 in Ω. The function l(x, x₀) satisfies the eikonal equation [38] |∇_x l(x, x₀)|...called "inverse kinematic problem", which aims to recover the function c(x) from the eikonal equation assuming that the function l(x, x₀) is known for
Directly data processing algorithm for multi-wavelength pyrometer (MWP).
Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong
2017-11-27
Data processing for a multi-wavelength pyrometer (MWP) is a difficult problem because of the unknown emissivity. The solutions developed so far generally assume particular mathematical relations for emissivity versus wavelength or emissivity versus temperature. Due to the deviation between such hypotheses and the actual situation, the inversion results can be seriously affected. A data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, a Gradient Projection (GP) algorithm and an Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core idea is that the data processing problem of MWP is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. By comparison of simulation results for some typical spectral emissivity models, it is found that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that the true-temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is thus expected to provide a direct data processing algorithm that clears the unknown-emissivity obstacle for MWP.
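A hedged sketch of the constrained-optimization formulation (not the paper's GP/IPF implementations; a generic bounded solver stands in). Note there are n radiance equations for n+1 unknowns (T plus one emissivity per channel), which is exactly why the constrained formulation matters:

```python
import numpy as np
from scipy.optimize import minimize

C1, C2 = 1.19104e8, 1.4388e4                 # radiation constants (µm units)

def planck(lam_um, T):
    """Planck radiance at wavelength lam_um (µm) and temperature T (K)."""
    return C1 / (lam_um**5 * np.expm1(C2 / (lam_um * T)))

lam = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.3])   # channel wavelengths, µm
T_true = 1800.0
eps_true = 0.9 - 0.2 * (lam - lam[0])            # hidden from the solver
L_meas = eps_true * planck(lam, T_true)          # simulated radiances

def cost(p):                                     # least-squares data misfit
    T, eps = p[0], p[1:]
    return np.sum((eps * planck(lam, T) - L_meas)**2)

p0 = np.concatenate([[1500.0], 0.5 * np.ones(lam.size)])
bounds = [(300.0, 3000.0)] + [(1e-3, 1.0)] * lam.size   # 0 < eps <= 1
res = minimize(cost, p0, bounds=bounds, method='L-BFGS-B')
print('recovered temperature:', res.x[0])
```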
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
Inverse problems in electromagnetics have a long history and have stimulated exciting research over many decades. New applications and solution methods are still emerging, providing a rich source of challenging topics for further investigation. The purpose of this special issue is to combine descriptions of several such developments that are expected to have the potential to fundamentally fuel new research, and to provide an overview of novel methods and applications for electromagnetic inverse problems. There have been several special sections published in Inverse Problems over the last decade addressing fully, or partly, electromagnetic inverse problems. Examples are:

- Electromagnetic imaging and inversion of the Earth's subsurface (Guest Editors: D Lesselier and T Habashy), October 2000
- Testing inversion algorithms against experimental data (Guest Editors: K Belkebir and M Saillard), December 2001
- Electromagnetic and ultrasonic nondestructive evaluation (Guest Editors: D Lesselier and J Bowler), December 2002
- Electromagnetic characterization of buried obstacles (Guest Editors: D Lesselier and W C Chew), December 2004
- Testing inversion algorithms against experimental data: inhomogeneous targets (Guest Editors: K Belkebir and M Saillard), December 2005
- Testing inversion algorithms against experimental data: 3D targets (Guest Editors: A Litman and L Crocco), February 2009

In a certain sense, the current issue can be understood as a continuation of this series of special sections on electromagnetic inverse problems. On the other hand, its focus is intended to be more general than previous ones. Instead of trying to cover a well-defined, somewhat specialized research topic as completely as possible, this issue aims to show the broad range of techniques and applications that are relevant to electromagnetic imaging nowadays, which may serve as a source of inspiration and encouragement for all those entering this active and rapidly developing research area. Also, the construction of this special issue is likely to have been different from preceding ones. In addition to the invitations sent to specific research groups involved in electromagnetic inverse problems, the Guest Editors also solicited recommendations, from a large number of experts, of potential authors who were thereupon encouraged to contribute. Moreover, an open call for contributions was published on the homepage of Inverse Problems in order to attract as wide a scope of contributions as possible. This special issue's attempt at generality might also define its limitations: by no means could this collection of papers be exhaustive or complete, and as Guest Editors we are well aware that many exciting topics and potential contributions will be missing. This, however, also determines its very special flavor: besides addressing electromagnetic inverse problems in a broad sense, there were only a few restrictions on the contributions considered for this section. One requirement was plausible evidence of either novelty or the emergent nature of the technique or application described, judged mainly by the referees, and in some cases by the Guest Editors. The technical quality of the contributions always remained a stringent condition of acceptance, final adjudication (possibly questionable either way, not always positive) being made in most cases once a thorough revision process had been carried out.

Therefore, we hope that the final result presented here constitutes an interesting collection of novel ideas and applications, properly refereed and edited, which will find its own readership and which can stimulate significant new research in the topics represented. Overall, as Guest Editors, we feel quite fortunate to have obtained such a strong response to the call for this issue and to have a really wide-ranging collection of high-quality contributions which, indeed, can be read from the first to the last page with sustained enthusiasm. A large number of applications and techniques is represented, overall via 16 contributions with 45 authors in total. This shows, in our opinion, that electromagnetic imaging and inversion remain amongst the most challenging and active research areas in applied inverse problems today. Below, we give a brief overview of the contributions included in this issue, ordered alphabetically by the surname of the leading author.

1. The complexity of handling potential randomness of the source in an inverse scattering problem is not minor, and the literature is far from being replete in this configuration. The contribution by G Bao, S N Chow, P Li and H Zhou, 'Numerical solution of an inverse medium scattering problem with a stochastic source', exemplifies how to hybridize Wiener chaos expansion with a recursive linearization method in order to solve the stochastic problem as a set of decoupled deterministic ones.

2. In cases where the forward problem is expensive to evaluate, database methods might become a reliable method of choice, while enabling one to deliver more information on the inversion itself. The contribution by S Bilicz, M Lambert and Sz Gyimóthy, 'Kriging-based generation of optimal databases as forward and inverse surrogate models', describes such a technique, which uses kriging for constructing an efficient database with the goal of achieving an equidistant distribution of points in the measurement space.

3. Anisotropy remains a considerable challenge in electromagnetic imaging, which is tackled in the contribution by F Cakoni, D Colton, P Monk and J Sun, 'The inverse electromagnetic scattering problem for anisotropic media', via the fact that transmission eigenvalues can be retrieved from a far-field scattering pattern, yielding, in particular, lower and upper bounds of the index of refraction of the unknown (dielectric anisotropic) scatterer.

4. So-called subspace optimization methods (SOM) have attracted a lot of interest recently in many fields. The contribution by X Chen, 'Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium', illustrates how to address a realistic situation in which the medium containing the unknown obstacles is not homogeneous, via blending a properly developed SOM with a finite-element approach to the required Green's functions.

5. H Egger, M Hanke, C Schneider, J Schöberl and S Zaglmayr, in their contribution 'Adjoint-based sampling methods for electromagnetic scattering', show how to efficiently develop sampling methods without explicit knowledge of the dyadic Green's function once an adjoint problem has been solved at much lower computational cost. This is demonstrated by examples in demanding propagative and diffusive situations.

6. Passive sensor arrays can be employed to image reflectors from ambient noise via proper migration of cross-correlation matrices into their embedding medium. This is investigated, and resolution, in particular, is considered in detail, as a function of the characteristics of the sensor array and those of the noise, in the contribution by J Garnier and G Papanicolaou, 'Resolution analysis for imaging with noise'.

7. A direct reconstruction technique based on the conformal mapping theorem is proposed and investigated in depth in the contribution by H Haddar and R Kress, 'Conformal mapping and impedance tomography'. This paper expands on previous work, with inclusions in homogeneous media, convergence results, and numerical illustrations.

8. The contribution by T Hohage and S Langer, 'Acceleration techniques for regularized Newton methods applied to electromagnetic inverse medium scattering problems', focuses on a spectral preconditioner intended to accelerate regularized Newton methods as employed for the retrieval of a local inhomogeneity in a three-dimensional vector electromagnetic case, while also illustrating the implementation of a Lepskiĭ-type stopping rule outsmarting a traditional discrepancy principle.

9. Geophysical applications are a rich source of practically relevant inverse problems. The contribution by M Li, A Abubakar and T Habashy, 'Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion', deals with a model-based inversion technique for electromagnetic imaging which addresses novel challenges such as multi-physics inversion and the incorporation of prior knowledge, such as in hydrocarbon recovery.

10. Non-stationary inverse problems, considered as a special class of Bayesian inverse problems, are framed via an orthogonal decomposition representation in the contribution by A Lipponen, A Seppänen and J P Kaipio, 'Reduced order estimation of nonstationary flows with electrical impedance tomography'. The goal is to simultaneously estimate, from electrical impedance tomography data, certain characteristics of the Navier-Stokes fluid flow model together with the time-varying concentration distribution.

11. Non-iterative imaging methods of thin, penetrable cracks, based on asymptotic expansion of the scattering amplitude and analysis of the multi-static response matrix, are discussed in the contribution by W-K Park, 'On the imaging of thin dielectric inclusions buried within a half-space', completing, for a shallow burial case at multiple frequencies, the direct imaging of small obstacles (here, along their transverse dimension), MUSIC and non-MUSIC type indicator functions being used for that purpose.

12. The contribution by R Potthast, 'A study on orthogonality sampling', envisages quick localization and shaping of obstacles from (portions of) far-field scattering patterns collected at one or more time-harmonic frequencies, via the simple calculation (and summation) of scalar products between those patterns and a test function. This is numerically exemplified for Neumann/Dirichlet boundary conditions and homogeneous/heterogeneous embedding media.

13. The contribution by J D Shea, P Kosmas, B D Van Veen and S C Hagness, 'Contrast-enhanced microwave imaging of breast tumors: a computational study using 3D realistic numerical phantoms', aims at microwave medical imaging, namely the early detection of breast cancer. The use of contrast enhancing agents is discussed in detail and a number of reconstructions in three-dimensional geometry of realistic numerical breast phantoms are presented.

14. The contribution by D A Subbarayappa and V Isakov, 'Increasing stability of the continuation for the Maxwell system', discusses enhanced log-type stability results for continuation of solutions of the time-harmonic Maxwell system, adding a fresh chapter to the interesting story of the study of the Cauchy problem for PDE.

15. In their contribution, 'Recent developments of a monotonicity imaging method for magnetic induction tomography in the small skin-depth regime', A Tamburrino, S Ventre and G Rubinacci extend the recently developed monotonicity method toward the application of magnetic induction tomography in order to map surface-breaking defects affecting a damaged metal component.

16. The contribution by F Viani, P Rocca, M Benedetti, G Oliveri and A Massa, 'Electromagnetic passive localization and tracking of moving targets in a WSN-infrastructured environment', contributes to what could still be seen as a niche problem, yet one both useful in terms of applications, e.g. security, and challenging in terms of methodologies and experiments, in particular in view of the complexity of environments in which this endeavor is to take place and the variability of the wireless sensor networks employed.

To conclude, we would like to thank the able and tireless work of Kate Watt and Zoë Crossman, as past and present Publishers of the Journal, on what was definitely a long and exciting journey (sometimes a little discouraging when reports were not arriving, or authors were late, or Guest Editors overwhelmed) that started from a thorough discussion at the 'Manchester workshop on electromagnetic inverse problems' held mid-June 2009, between Kate Watt and the Guest Editors. We gratefully acknowledge the fact that W W Symes gave us his full backing to carry out this special issue and that A K Louis completed it successfully. Last, but not least, the staff of Inverse Problems should be thanked, since they work together to make it a premier journal.
A Localized Ensemble Kalman Smoother
NASA Technical Reports Server (NTRS)
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems is usually their high dimensionality, which renders standard statistical methods computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
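For illustration, the following is a minimal sketch of a stochastic ensemble Kalman analysis step with perturbed observations, the Monte-Carlo building block on which ensemble smoothers of this kind rest. It is a generic textbook update, not the localized smoother of the paper, and all variable names and sizes are illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Stochastic EnKF analysis step (perturbed observations).

    ensemble : (n_state, n_members) array of forecast states
    H        : (n_obs, n_state) linear observation operator
    y        : (n_obs,) observed data
    R        : (n_obs, n_obs) observation error covariance
    """
    n_state, n_members = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)
    # Monte-Carlo estimates of P H^T and H P H^T
    PHt = X @ HA.T / (n_members - 1)
    HPHt = HA @ HA.T / (n_members - 1)
    K = PHt @ np.linalg.inv(HPHt + R)                     # Kalman gain
    # Perturb observations so the analysis ensemble keeps the right spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_members).T
    return ensemble + K @ (Y - HX)

rng = np.random.default_rng(0)
n, m = 10, 50
truth = np.sin(np.linspace(0, np.pi, n))
H = np.eye(n)[::2]                         # observe every other component
R = 0.05**2 * np.eye(H.shape[0])
ens = truth[:, None] + 0.5 * rng.standard_normal((n, m))
y = H @ truth + 0.05 * rng.standard_normal(H.shape[0])
post = enkf_update(ens, H, y, R, rng)
print(np.abs(post.mean(axis=1) - truth).mean())   # analysis error
```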
Inverse Function: Pre-Service Teachers' Techniques and Meanings
ERIC Educational Resources Information Center
Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.
2018-01-01
Researchers have argued teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…
NASA Astrophysics Data System (ADS)
Bassrei, A.; Terra, F. A.; Santos, E. T.
2007-12-01
Inverse problems in applied geophysics are usually ill-posed. One way to reduce this deficiency is through derivative matrices, which are a particular case of a more general family of techniques known as regularization. Regularization by derivative matrices has an input parameter called the regularization parameter, whose choice is itself a problem. A heuristic approach, later called the L-curve, was suggested in the 1970s with the purpose of providing the optimum regularization parameter. The L-curve is a parametric curve, where each point is associated with a parameter λ. The horizontal axis represents the error between the observed and calculated data, and the vertical axis represents the norm of the product between the regularization matrix and the estimated model. The ideal point is the knee of the L-curve, where there is a balance between the quantities represented on the Cartesian axes. The L-curve has been applied to a variety of inverse problems, including in geophysics. However, visualizing the knee is not always an easy task, especially when the L-curve does not have the L shape. In this work three methodologies are employed to find the optimal regularization parameter from the L-curve. The first criterion uses Hansen's toolbox, which extracts λ automatically. The second criterion consists of extracting the optimal parameter visually. The third criterion constructs the first derivative of the L-curve and then automatically extracts the inflexion point. The L-curve with the three above criteria was applied and validated in traveltime tomography and 2-D gravity inversion. After many simulations with synthetic data, both noise-free and corrupted with noise, and with regularization orders 0, 1, and 2, we verified that the three criteria are valid and provide satisfactory results. The third criterion presented the best performance, especially in cases where the L-curve has an irregular shape.
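As a concrete illustration of the derivative-based idea, here is a hedged numerical sketch: a small zeroth-order Tikhonov problem whose L-curve knee is picked as the point of maximum curvature in log-log coordinates. The kernel, noise level, and λ grid are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

# Toy ill-posed problem: smoothing kernel, noisy data (all values invented)
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)
G = np.exp(-30.0 * (x[:, None] - x[None, :])**2)   # forward operator
m_true = np.sin(3 * np.pi * x)
d = G @ m_true + 0.01 * rng.standard_normal(n)
L = np.eye(n)                                      # order-0 regularization

lambdas = np.logspace(-6, 2, 200)
res_norm, reg_norm = [], []
for lam in lambdas:
    m = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)
    res_norm.append(np.linalg.norm(G @ m - d))     # horizontal axis
    reg_norm.append(np.linalg.norm(L @ m))         # vertical axis

# Knee detection: maximum curvature of the log-log L-curve; endpoints
# are excluded to avoid edge effects of np.gradient
xi, eta = np.log(res_norm), np.log(reg_norm)
dxi, deta = np.gradient(xi), np.gradient(eta)
d2xi, d2eta = np.gradient(dxi), np.gradient(deta)
kappa = np.abs(dxi * d2eta - deta * d2xi) / (dxi**2 + deta**2)**1.5
best = lambdas[5 + np.argmax(kappa[5:-5])]
print(f"optimal lambda ~ {best:.3g}")
```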
On uncertainty quantification in hydrogeology and hydrogeophysics
NASA Astrophysics Data System (ADS)
Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud
2017-12-01
Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
Some New Results in Astrophysical Problems of Nonlinear Theory of Radiative Transfer
NASA Astrophysics Data System (ADS)
Pikichyan, H. V.
2017-07-01
In the interpretation of observed astrophysical spectra, a decisive role is played by nonlinear problems of radiative transfer, because processes of multiple interaction of the matter of the cosmic medium with intense exciting radiation occur ubiquitously in astrophysical objects and in their vicinities: the intensity of the exciting radiation changes the physical properties of the original medium and is itself simultaneously modified, in a self-consistent manner, under its influence. In the present report, we show that the consistent application of the principle of invariance in the nonlinear problem of bilateral external illumination of a scattering/absorbing one-dimensional anisotropic medium of finite geometrical thickness allows for simplifications that were previously considered the prerogative of linear problems only. The nonlinear problem is analyzed through three methods of the principle of invariance: (i) an adding of layers, (ii) its limiting form, described by differential equations of invariant imbedding, and (iii) a transition to the so-called functional equations of "Ambartsumyan's complete invariance". Thereby, as an alternative to the Boltzmann equation, a new type of equations, the so-called "kinetic equations of equivalence", is obtained. By introducing new functions, the so-called "linear images" of the solution of the nonlinear problem of radiative transfer, the linear structure of the solution of the nonlinear problem under study is further revealed. Linear images allow one to naturally carry over the statistical characteristics of the random walk of a "single quantum", or of a "beam of unit intensity", as well as the widely known "probabilistic interpretation of transfer phenomena", to the field of nonlinear problems. The structure of the equations obtained for the determination of linear images is typical of linear problems.
Low-cost capacitor voltage inverter for outstanding performance in piezoelectric energy harvesting.
Lallart, Mickaël; Garbuio, Lauric; Richard, Claude; Guyomar, Daniel
2010-01-01
The purpose of this paper is to propose a new scheme for piezoelectric energy harvesting optimization. The proposed enhancement relies on a new topology for inverting the voltage across a single capacitor with reduced losses. The increase of the inversion quality allows a much more effective energy harvesting process using the so-called synchronized switch harvesting on inductor (SSHI) nonlinear technique. It is shown that the proposed architecture, based on a 2-step inversion, increases the harvested power by a theoretical factor of up to √2 (i.e., a 40% gain) compared with classical SSHI, allowing an increase of the harvested power by more than 1000% compared with the standard energy harvesting technique for realistic values of inversion components. The proposed circuit, using only 4 digital switches and an intermediate capacitor, is also ultra-low power, because the inversion circuit does not require any external energy and the command signals are very simple.
The Three-Component Defocusing Nonlinear Schrödinger Equation with Nonzero Boundary Conditions
NASA Astrophysics Data System (ADS)
Biondini, Gino; Kraus, Daniel K.; Prinari, Barbara
2016-12-01
We present a rigorous theory of the inverse scattering transform (IST) for the three-component defocusing nonlinear Schrödinger (NLS) equation with initial conditions approaching constant values with the same amplitude as x → ±∞. The theory combines and extends to a problem with non-zero boundary conditions three fundamental ideas: (i) the tensor approach used by Beals, Deift and Tomei for the n-th order scattering problem, (ii) the triangular decompositions of the scattering matrix used by Novikov, Manakov, Pitaevski and Zakharov for the N-wave interaction equations, and (iii) a generalization of the cross product via the Hodge star duality, which, to the best of our knowledge, is used in the context of the IST for the first time in this work. The combination of the first two ideas allows us to rigorously obtain a fundamental set of analytic eigenfunctions. The third idea allows us to establish the symmetries of the eigenfunctions and scattering data. The results are used to characterize the discrete spectrum and to obtain exact soliton solutions, which describe generalizations of the so-called dark-bright solitons of the two-component NLS equation.
Suppressing explosive synchronization by contrarians
NASA Astrophysics Data System (ADS)
Zhang, Xiyun; Guan, Shuguang; Zou, Yong; Chen, Xiaosong; Liu, Zonghua
2016-01-01
Explosive synchronization (ES) has recently received increasing attention, and studies have so far mainly focused on the conditions for its onset. However, the inverse problem, i.e. the suppression of ES, has not been systematically studied. As ES is usually considered harmful in certain circumstances, such as the cascading failure of power grids and epileptic seizures, its suppression is important and deserves study. We here study this inverse problem by presenting an efficient approach to turn ES from a first-order into a second-order transition, without changing the intrinsic network structure. We find that ES can be suppressed by changing only a small fraction of oscillators into contrarians with negative couplings, and that the critical fraction for the transition from first order to second order increases with both the network size and the average degree. A brief theory is presented to explain the underlying mechanism. This finding underlines the importance of our method for improving the understanding of neural interactions underlying cognitive processes.
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J.
2011-12-01
Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. LP seismic events have been observed at many volcanoes around the world and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upwards movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Applying a point-source model to synthetic seismograms representing an extended source process does not yield the real source mechanism; it can, however, still lead to apparent moment tensor elements which can then be compared to previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced compared to a single double-couple source. Furthermore, the best inversion results yield a solution comprised of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when a point source is wrongly assumed. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique in which the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that are temporally and spatially extended.
Function representation with circle inversion map systems
NASA Astrophysics Data System (ADS)
Boreland, Bryson; Kunze, Herb
2017-01-01
The fractals literature develops the now well-known concept of local iterated function systems (using affine maps) with grey-level maps (LIFSM) as an approach to function representation in terms of the associated fixed point of the so-called fractal transform. While originally explored as a method to achieve signal (and 2-D image) compression, more recent work has explored various aspects of signal and image processing using this machinery. In this paper, we develop a similar framework for function representation using circle inversion map systems. Given a circle C with centre o and radius r, inversion with respect to C transforms the point p to the point p', such that p and p' lie on the same radial half-line from o and d(o, p)d(o, p') = r², where d is Euclidean distance. We demonstrate the results with an example.
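The inversion rule quoted above, d(o, p) d(o, p') = r², translates directly into code; a minimal sketch in plain NumPy, with names chosen here for illustration:

```python
import numpy as np

def invert_in_circle(p, o, r):
    """Invert point p with respect to the circle with centre o and radius r.

    The image p' lies on the ray from o through p, with
    d(o, p) * d(o, p') = r**2.
    """
    p, o = np.asarray(p, float), np.asarray(o, float)
    v = p - o
    d2 = v @ v                      # squared distance d(o, p)^2
    if d2 == 0.0:
        raise ValueError("the centre has no inverse")
    return o + (r**2 / d2) * v

# A point at distance 2 from the centre of a unit circle maps to
# distance 1/2 on the same radial half-line.
print(invert_in_circle([2.0, 0.0], [0.0, 0.0], 1.0))   # [0.5 0. ]
```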
Two-dimensional angular transmission characterization of CPV modules.
Herrero, R; Domínguez, C; Askins, S; Antón, I; Sala, G
2010-11-08
This paper proposes a fast method to characterize the two-dimensional angular transmission function of a concentrator photovoltaic (CPV) system. The so-called inverse method, which has been used in the past for the characterization of small optical components, has been adapted to large-area CPV modules. In the inverse method, the receiver cell is forward biased to produce a Lambertian light emission, which reveals the reverse optical path of the optics. Using a large-area collimator mirror, the light beam exiting the optics is projected on a Lambertian screen to create a spatially resolved image of the angular transmission function. An image is then obtained using a CCD camera. To validate this method, the angular transmission functions of a real CPV module have been measured by both direct illumination (flash CPV simulator and sunlight) and the inverse method, and the comparison shows good agreement.
NASA Astrophysics Data System (ADS)
Kaulakys, B.; Alaburda, M.; Ruseckas, J.
2016-05-01
A well-known fact in the financial markets is the so-called ‘inverse cubic law’ of the cumulative distributions of the long-range memory fluctuations of market indicators such as a number of events of trades, trading volume and the logarithmic price change. We propose the nonlinear stochastic differential equation (SDE) giving both the power-law behavior of the power spectral density and the long-range dependent inverse cubic law of the cumulative distribution. This is achieved using the suggestion that when the market evolves from calm to violent behavior there is a decrease of the delay time of multiplicative feedback of the system in comparison to the driving noise correlation time. This results in a transition from the Itô to the Stratonovich sense of the SDE and yields a long-range memory process.
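To make the mechanics concrete, the sketch below shows a generic Euler-Maruyama integration of a multiplicative-noise SDE in the Itô sense. The drift and diffusion coefficients are hypothetical placeholders for the example, not the specific nonlinear SDE proposed in the paper:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
    """Simulate dx = drift(x) dt + diffusion(x) dW (Ito sense)."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.standard_normal() * np.sqrt(dt)
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dW
    return x

rng = np.random.default_rng(1)
# Illustrative multiplicative-noise SDE (invented coefficients):
# dx = x (1 - x/100) dt + 0.5 x dW
path = euler_maruyama(lambda x: x * (1 - x / 100), lambda x: 0.5 * x,
                      x0=1.0, dt=1e-3, n_steps=10_000, rng=rng)
print(path[-1])
```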
Engineering bacteria to solve the Burnt Pancake Problem
Haynes, Karmella A; Broderick, Marian L; Brown, Adam D; Butner, Trevor L; Dickson, James O; Harden, W Lance; Heard, Lane H; Jessen, Eric L; Malloy, Kelly J; Ogden, Brad J; Rosemond, Sabriya; Simpson, Samantha; Zwack, Erin; Campbell, A Malcolm; Eckdahl, Todd T; Heyer, Laurie J; Poet, Jeffrey L
2008-01-01
Background We investigated the possibility of executing DNA-based computation in living cells by engineering Escherichia coli to address a classic mathematical puzzle called the Burnt Pancake Problem (BPP). The BPP is solved by sorting a stack of distinct objects (pancakes) into proper order and orientation using the minimum number of manipulations. Each manipulation reverses the order and orientation of one or more adjacent objects in the stack. We have designed a system that uses site-specific DNA recombination to mediate inversions of genetic elements that represent pancakes within plasmid DNA. Results Inversions (or "flips") of the DNA fragment pancakes are driven by the Salmonella typhimurium Hin/hix DNA recombinase system that we reconstituted as a collection of modular genetic elements for use in E. coli. Our system sorts DNA segments by inversions to produce different permutations of a promoter and a tetracycline resistance coding region; E. coli cells become antibiotic resistant when the segments are properly sorted. Hin recombinase can mediate all possible inversion operations on adjacent flippable DNA fragments. Mathematical modeling predicts that the system reaches equilibrium after very few flips, where equal numbers of permutations are randomly sorted and unsorted. Semiquantitative PCR analysis of in vivo flipping suggests that inversion products accumulate on a time scale of hours or days rather than minutes. Conclusion The Hin/hix system is a proof-of-concept demonstration of in vivo computation with the potential to be scaled up to accommodate larger and more challenging problems. Hin/hix may provide a flexible new tool for manipulating transgenic DNA in vivo. PMID:18492232
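The underlying mathematical puzzle is easy to state in code. The sketch below (not part of the paper) encodes a burnt-pancake stack as a signed tuple, with negative entries meaning burnt side up, and finds the minimum number of flips by breadth-first search:

```python
from collections import deque

def flip(stack, k):
    """Reverse the order and orientation of the top k pancakes."""
    return tuple(-p for p in reversed(stack[:k])) + stack[k:]

def min_flips(start):
    """BFS for the fewest flips that sort a burnt-pancake stack
    into (1, 2, ..., n) with every pancake burnt side down."""
    goal = tuple(range(1, len(start) + 1))
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        stack, d = queue.popleft()
        if stack == goal:
            return d
        for k in range(1, len(stack) + 1):
            nxt = flip(stack, k)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))

# A 3-pancake stack; -2 means pancake 2 is burnt side up.
print(min_flips((-2, 1, 3)))
```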
Introduction to the 30th volume of Inverse Problems
NASA Astrophysics Data System (ADS)
Louis, Alfred K.
2014-01-01
The field of inverse problems is a fast-developing domain of research originating from the practical demands of finding the cause when a result is observed. The woodpecker, searching for insects, is probing a tree using sound waves: the information searched for is whether there is an insect or not, hence a 0-1 decision. When the result has to contain more information, ad hoc solutions are not at hand and more sophisticated methods have to be developed. Right from its first appearance, the field of inverse problems has been characterized by an interdisciplinary nature: the interpretation of measured data, reinforced by mathematical models serving the analyzing questions of observability, stability and resolution, developing efficient, stable and accurate algorithms to gain as much information as possible from the input and to feedback to the questions of optimal measurement configuration. As is typical for a new area of research, facets of it are separated and studied independently. Hence, fields such as the theory of inverse scattering, tomography in general and regularization methods have developed. However, all aspects have to be reassembled to arrive at the best possible solution to the problem at hand. This development is reflected by the first and still leading journal in the field, Inverse Problems. Founded by pioneers Roy Pike from London and Pierre Sabatier from Montpellier, who enjoyably describes the journal's nascence in his book Rêves et Combats d'un Enseignant-Chercheur, Retour Inverse [1], the journal has developed successfully over the last few decades. Neither the Editors-in-Chief, formerly called Honorary Editors, nor the board or authors could have set the path to success alone. Their fruitful interplay, complemented by the efficient and highly competent publishing team at IOP Publishing, has been fundamental. As such it is my honor and pleasure to follow my renowned colleagues Pierre Sabatier, Mario Bertero, Frank Natterer, Alberto Grünbaum and Bill Symes in their big footsteps, and I consider it a privilege to thank all that have contributed to the success of the journal. In its 30 years of existence, the journal has evolved from a trimestral to monthly print publication, now paralleled by an electronic version that has led to publication speeds unheard of when the journal began. This timely publication is especially important for younger researchers, but equally for experienced ones, who in that respect still feel young. In addition, the scope has changed to focus more precisely on the core of inverse problems, characterized, for example, by data errors, incomplete information and so on. In the beginning, fields where questions were considered to lead to inverse problems were listed in the journal's scope to make it clear that the problems being discussed were inverse problems in character. With the development of the solution methods, we now see that inverse problems are fundamental to almost all areas of research. The journal now hosts a number of additional features. With Insights we provide a platform for authors to introduce themselves and their work group, and present their scientific results in a popular and non-specialist form. Insights are made freely available on the journal website to ensure that they are seen by a wider community, beyond the immediate readership of the journal. 
Special issues are devoted to fields that have matured in such a way that the readers of our journal can profit from their presentation when the time for writing text books has not yet come. In addition, the different approaches taken by different contributors to a special issue disclose the multiple aspects of that field. With Topical reviews we aim to present the new ideas and areas that are stimulating future research. We are thankful that highly acclaimed authors take the time to present the research at the forefront of their respective fields. It is always very enlightening to read these articles as they introduce challenging research domains in condensed form. The diversity of the different topics is especially impressive. The 25th anniversary of Inverse Problems was celebrated with a service to the community, the publication of an issue of topical reviews selected by board members, which presented the achievements and state-of-the-art of the field. The 30th birthday of the journal is now approaching and we found it appropriate to include in the celebration the scientific community that supports the journal by their submissions. A conference, IPTA 2014: Inverse Problems - From Theory to Application (http://ipta2014.iopconfs.org/home), will be held in the home town of our publisher, IOP Publishing, in Bristol on 26-28 August 2014. The conference brings together top researchers, both from academia and industry, and will look at the scientific future of the field. Presentations by keynote speakers, which summarize what the board considers to be new trends, are complemented by contributions submitted by specialists and younger researchers in several minisymposia. To build a bridge to the future generation of researchers, a scientist at the beginning of their career will be giving a lecture. Let me finish with cordial thanks to all of our authors, referees, the members of the Editorial Board and International Advisory Panel, and the publishing team. I wish all of you a successful and healthy New Year and hope to meet many of you in August in Bristol. References [1] Sabatier P C 2012 Rêves et Combats d'un Enseignant-Chercheur, Retour Inverse (Paris: L'Harmattan)
Confidence set inference with a prior quadratic bound
NASA Technical Reports Server (NTRS)
Backus, George E.
1989-01-01
In the uniqueness part of a geophysical inverse problem, the observer wants to predict all likely values of P unknown numerical properties z = (z_1, ..., z_P) of the earth from measurement of D other numerical properties y^0 = (y_1^0, ..., y_D^0), using full or partial knowledge of the statistical distribution of the random errors in y^0. The data space Y containing y^0 is D-dimensional, so when the model space X is infinite-dimensional the linear uniqueness problem usually is insoluble without prior information about the correct earth model x. If that information is a quadratic bound on x, Bayesian inference (BI) and stochastic inversion (SI) inject spurious structure into x, implied by neither the data nor the quadratic bound. Confidence set inference (CSI) provides an alternative inversion technique free of this objection. Confidence set inference is illustrated in the problem of estimating the geomagnetic field B at the core-mantle boundary (CMB) from components of B measured on or above the earth's surface.
NASA Astrophysics Data System (ADS)
Kreinovich, Vladik; Longpre, Luc; Koshelev, Misha
1998-09-01
Most practical applications of statistical methods are based on the implicit assumption that if an event has a very small probability, then it cannot occur. For example, the probability that a kettle placed on a cold stove would start boiling by itself is not 0, it is positive, but it is so small that physicists conclude that such an event is simply impossible. This assumption is difficult to formalize in traditional probability theory, because this theory only describes measures on sets and does not allow us to divide functions into 'random' and non-random ones. This distinction was made possible by the idea of algorithmic randomness, introduced by Kolmogorov and his student Martin-Löf in the 1960s. We show that this idea can also be used for inverse problems. In particular, we prove that for every probability measure, the corresponding set of random functions is compact, and, therefore, the corresponding restricted inverse problem is well-defined. The resulting technique turns out to be interestingly related to the qualitative aesthetic measure introduced by G. Birkhoff as order/complexity.
Scaling of plane-wave functions in statistically optimized near-field acoustic holography.
Hald, Jørgen
2014-11-01
Statistically Optimized Near-field Acoustic Holography (SONAH) is a Patch Holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point a multiplication must then be performed with two transfer vectors--one to get the sound pressure and the other to get the particle velocity. Considering SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane wave functions that takes the amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.
The Role of Synthetic Reconstruction Tests in Seismic Tomography
NASA Astrophysics Data System (ADS)
Rawlinson, N.; Spakman, W.
2015-12-01
Synthetic reconstruction tests are widely used in seismic tomography as a means for assessing the robustness of solutions produced by linear or iterative non-linear inversion schemes. The most common test is the so-called checkerboard resolution test, which uses an alternating pattern of high and low wavespeeds (or some other seismic property such as attenuation). However, checkerboard tests have a number of limitations, including that they (1) only provide indirect evidence of quantitative measures of reliability such as resolution and uncertainty; (2) give a potentially misleading impression of the range of scale-lengths that can be resolved; (3) don't give a true picture of the structural distortion or smearing caused by the data coverage; and (4) result in an inverse problem that is biased towards an accurate reconstruction. The widespread use of synthetic reconstruction tests in seismic tomography is likely to continue for some time yet, so it is important to implement best practice where possible. The goal here is to provide a general set of guidelines, derived from the underlying theory and illustrated by a series of numerical experiments, on their implementation in seismic tomography. In particular, we recommend (1) using a sparse distribution of spikes, rather than the more conventional tightly-spaced checkerboard; (2) using the identical data coverage (e.g. geometric rays) for the synthetic model that was computed for the observation-based model; (3) carrying out multiple tests using anomalies of different scale length; (4) exercising caution when analysing synthetic recovery tests that use anomaly patterns that closely mimic the observation-based model; (5) investigating the trade-off between data noise levels and the minimum wavelength of recovered structure; (6) where possible, test the extent to which preconditioning (e.g. identical parameterization for input and output models) influences the recovery of anomalies.
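Recommendation (1), sparse spikes instead of a tightly spaced checkerboard, is easy to visualize with synthetic test models; a small sketch with assumed grid sizes and anomaly dimensions:

```python
import numpy as np

def checkerboard(nx, ny, cell):
    """Alternating +/-1 anomaly pattern with square cells."""
    ix, iy = np.meshgrid(np.arange(nx) // cell, np.arange(ny) // cell,
                         indexing="ij")
    return np.where((ix + iy) % 2 == 0, 1.0, -1.0)

def sparse_spikes(nx, ny, spacing, width):
    """Isolated spikes on a coarse grid: the empty gaps between spikes
    make smearing by the data coverage directly visible."""
    model = np.zeros((nx, ny))
    for i in range(spacing // 2, nx, spacing):
        for j in range(spacing // 2, ny, spacing):
            model[i - width // 2:i + width // 2 + 1,
                  j - width // 2:j + width // 2 + 1] = 1.0
    return model

cb = checkerboard(60, 60, cell=10)
sp = sparse_spikes(60, 60, spacing=20, width=3)
print(cb.shape, int(sp.sum()))   # (60, 60) and 9 spikes of 9 cells each
```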
Policy Issues and the Drug Abuse Problem in America: Overview, Critique, and Recommendations.
ERIC Educational Resources Information Center
Johnston, Lloyd D.
The so-called "drug abuse problem" in America is really a constellation of separate but related problems; since a variety of drugs are illicitly used, and drug abuse leads to many derivative problems, both within and outside the United States. This monograph begins by assessing the current state of the drug abuse problem in America, and analyzing…
Measurements of Interaction Cross Sections for 19-27F Isotopes
NASA Astrophysics Data System (ADS)
Homma, Akira; Takechi, Maya; Ohtsubo, Takashi; Nishimura, Daiki; Fukuda, Mitsunori; Suzuki, Takeshi; Yamaguchi, Takayuki; Kuboki, Takamasa; Ozawa, Akira; Suzuki, Sinji; Ooishi, Hiroto; Moriguchi, Tetsuaki; Sumikawa, Takashi; Geissel, H.; Aoi, Nori; Chen, Rui-jiu; Fang, De-Qing; Fukuda, Naoki; Fukuoka, Shota; Furuki, Hisahiro; Inaba, Naruki; Ishibashi, Nobuyuki; Ito, Takeshi; Izumikawa, Takuji; Kameda, Daisuke; Kubo, Toshiyuki; Lantz, M.; Lee, C. S.; Ma, Yu-Gang; Mihara, Mototsugu; Momota, Satao; Nagae, Daisuke; Nishikiori, Ryo; Niwa, Takahiro; Ohnishi, Tetsuya; Okumura, Kimitake; Ogura, Toshiyuki; Nagashima, Masayuki; Sakurai, Hiroyoshi; Sato, Kanae; Shimbara, Yoshiriro; Suzuki, Hiroshi; Takeda, Hiroyuki; Takeuchi, Satoshi; Tanaka, Kenji; Uenishi, Hideaki; Winkler, M.; Yanagisawa, Yoshiyuki
Interaction cross sections (σI) and reaction cross sections (σR) are physical quantities that are strongly related to the nuclear size. In our previous study of σI for Ne isotopes, the deformation features of neutron-rich Ne isotopes in the so-called "island of inversion" region were successfully observed, and the formation of a deformed halo structure in 31Ne was indicated. In this study, σI for 19-27F, up to the vicinity of the island of inversion, have been measured at around 240A MeV using BigRIPS at RIBF, RIKEN. Our preliminary results are slightly larger than the A^(1/3) systematics, and some of the data could be explained by nuclear deformation.
NASA Astrophysics Data System (ADS)
Linde, N.; Vrugt, J. A.
2009-04-01
Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.
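For readers unfamiliar with the machinery, the sketch below shows a bare-bones random-walk Metropolis sampler on a toy linear "traveltime" posterior. DREAM_ZS is far more sophisticated (parallel chains, snooker updates, sampling from past states), so this is only the conceptual core, with invented dimensions and noise levels:

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis sampler (a much simpler relative of
    DREAM_ZS, shown for illustration only)."""
    x = np.asarray(x0, float)
    lp = log_post(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

rng = np.random.default_rng(2)
G = rng.standard_normal((20, 3))          # hypothetical linear forward model
m_true = np.array([1.0, -0.5, 2.0])
d = G @ m_true + 0.1 * rng.standard_normal(20)
log_post = lambda m: -0.5 * np.sum((G @ m - d)**2) / 0.1**2
chain = metropolis(log_post, np.zeros(3), 0.05, 20_000, rng)
print(chain[10_000:].mean(axis=0))        # posterior mean after burn-in
```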
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, R. Derek; Gunther, Jacob H.; Moon, Todd K.
2016-12-01
In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.
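The first, correlation step is essentially matched filtering. A one-dimensional caricature (hypothetical chirp, delays, and amplitudes) shows how cross-correlating the data with a model of the transmitted pulse localizes reflectors:

```python
import numpy as np

# Hypothetical transmitted pulse (linear FM chirp) and a noisy record
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
chirp = np.sin(2 * np.pi * (10 * t + 40 * t**2))

rng = np.random.default_rng(3)
received = np.zeros(3 * t.size)
for delay, amp in [(300, 1.0), (700, 0.4)]:       # two point reflectors
    received[delay:delay + t.size] += amp * chirp
received += 0.2 * rng.standard_normal(received.size)

# Step one of the ML estimator, in caricature: correlate the data with a
# model of the acquisition (matched filtering, back-projection-like step)
correlated = np.correlate(received, chirp, mode="valid")
print(np.argmax(np.abs(correlated)))              # ~300, strongest reflector
```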
Jolivet, Frédéric; Momey, Fabien; Denis, Loïc; Méès, Loïc; Faure, Nicolas; Grosjean, Nathalie; Pinston, Frédéric; Marié, Jean-Louis; Fournier, Corinne
2018-04-02
Reconstruction of phase objects is a central problem in digital holography, whose various applications include microscopy, biomedical imaging, and fluid mechanics. Starting from a single in-line hologram, there is no direct way to recover the phase of the diffracted wave in the hologram plane. The reconstruction of absorbing and phase objects therefore requires the inversion of the non-linear hologram formation model. We propose a regularized reconstruction method that includes several physically-grounded constraints such as bounds on transmittance values, maximum/minimum phase, spatial smoothness or the absence of any object in parts of the field of view. To solve the non-convex and non-smooth optimization problem induced by our modeling, a variable splitting strategy is applied and the closed-form solution of the sub-problem (the so-called proximal operator) is derived. The resulting algorithm is efficient and is shown to lead to quantitative phase estimation on reconstructions of accurate simulations of in-line holograms based on the Mie theory. As our approach is adaptable to several in-line digital holography configurations, we present and discuss the promising results of reconstructions from experimental in-line holograms obtained in two different applications: the tracking of an evaporating droplet (size ∼ 100μm) and the microscopic imaging of bacteria (size ∼ 1μm).
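Two of the physically-grounded constraints mentioned above have textbook closed-form proximal operators: bounds on transmittance reduce to a projection (clipping), and an l1 penalty to soft-thresholding. A generic sketch of these building blocks, not the paper's exact operator:

```python
import numpy as np

def prox_box(x, lo, hi):
    """Proximal operator of the indicator of the box [lo, hi]:
    the closed-form solution is a simple projection (clipping)."""
    return np.clip(x, lo, hi)

def prox_l1(x, tau):
    """Proximal operator of tau * ||x||_1 (soft-thresholding),
    a standard sparsity-promoting building block."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# e.g. enforce physically admissible transmittance values in [0, 1]
x = np.array([-0.3, 0.2, 1.4])
print(prox_box(x, 0.0, 1.0))   # [0.  0.2 1. ]
print(prox_l1(x, 0.1))         # [-0.2  0.1  1.3]
```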
Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
On a New Approach to Education about Ethics for Engineers at Meijou University
NASA Astrophysics Data System (ADS)
Fukaya, Minoru; Morimoto, Tsukasa; Kimura, Noritsugu
We propose a new approach to the education of so-called “engineering ethics”. This approach has two important elements in its teaching system: one is “problem-solving learning”, and the other is “discussion ability”. So far, engineering ethics education has started from the ethical standpoint, but we place the viewpoint of problem-solving learning at the educational base of engineering ethics. Because many problems have complicated structures, solving them requires discussion with each other. Problem-solving ability and discussion ability help engineers to solve the complex problems of their social everyday life. Therefore, Meijo University names engineering ethics “ethics for engineers”. At Meijou University about 1300 students take classes in both ethics for engineers and environmental ethics for one year.
NASA Astrophysics Data System (ADS)
Xu, J.; Heue, K.-P.; Coldewey-Egbers, M.; Romahn, F.; Doicu, A.; Loyola, D.
2018-04-01
Characterizing vertical distributions of ozone from nadir-viewing satellite measurements is known to be challenging, particularly for the ozone information in the troposphere. A novel retrieval algorithm, called the Full-Physics Inverse Learning Machine (FP-ILM), has been developed at DLR in order to estimate ozone profile shapes based on machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase to derive an inverse function from synthetic measurements, and an operational phase in which the inverse function is applied to real measurements. This paper extends the ability of the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results for total and tropical tropospheric ozone columns are compared with those of the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors, with their unprecedented spectral and spatial resolution and the corresponding large increases in the amount of data.
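Schematically, the two FP-ILM phases can be mimicked with any off-the-shelf classifier. The sketch below trains a random forest on synthetic "spectra" labelled by profile-shape class and then applies it to a new measurement; the data generator, class count, and classifier choice are all stand-in assumptions, since the real algorithm trains on radiative-transfer simulations with smart sampling:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# --- Training phase (synthetic data stand in for radiative-transfer output)
# Hypothetical setup: 500 synthetic "spectra" of 40 channels, each labelled
# with one of 5 profile-shape classes.
n_train, n_channels, n_classes = 500, 40, 5
labels = rng.integers(0, n_classes, n_train)
centers = rng.standard_normal((n_classes, n_channels))
spectra = centers[labels] + 0.3 * rng.standard_normal((n_train, n_channels))

inverse_operator = RandomForestClassifier(n_estimators=100, random_state=0)
inverse_operator.fit(spectra, labels)

# --- Operational phase: apply the trained inverse operator to a "measurement"
measurement = centers[2] + 0.3 * rng.standard_normal(n_channels)
print(inverse_operator.predict(measurement[None, :]))   # likely class 2
```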
Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.
2011-01-01
Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram.
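The idea of coupling consecutive reference space models through a temporal regularization term can be sketched in a deliberately simplified linear least-squares form, as follows. Dimensions, operators, and weights are invented for the example; the paper's implementation is finite-element-based and complex-valued:

```python
import numpy as np

def timelapse_inversion(G, data, L, alpha, beta):
    """Jointly invert T snapshots d_t = G m_t with spatial smoothing
    alpha * ||L m_t|| and temporal smoothing beta * ||m_t - m_{t-1}||
    (a linear least-squares caricature of 4-D inversion)."""
    T, n = len(data), G.shape[1]
    rows, rhs = [], []
    for t in range(T):
        row = np.zeros((G.shape[0], T * n))
        row[:, t*n:(t+1)*n] = G                   # data fit for snapshot t
        rows.append(row); rhs.append(data[t])
        reg = np.zeros((L.shape[0], T * n))
        reg[:, t*n:(t+1)*n] = alpha * L           # spatial regularization
        rows.append(reg); rhs.append(np.zeros(L.shape[0]))
        if t > 0:                                 # couple consecutive models
            dt = np.zeros((n, T * n))
            dt[:, t*n:(t+1)*n] = beta * np.eye(n)
            dt[:, (t-1)*n:t*n] = -beta * np.eye(n)
            rows.append(dt); rhs.append(np.zeros(n))
    A, b = np.vstack(rows), np.concatenate(rhs)
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m.reshape(T, n)

# Tiny example: 3 snapshots of a 4-parameter model drifting in time
rng = np.random.default_rng(5)
G = rng.standard_normal((10, 4))
models = [np.array([1.0, 1.0, 0.0, 0.0]) + 0.2 * t for t in range(3)]
data = [G @ m + 0.05 * rng.standard_normal(10) for m in models]
print(timelapse_inversion(G, data, np.eye(4), alpha=0.1, beta=1.0))
```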
ERIC Educational Resources Information Center
Scheiter, Katharina; Gerjets, Peter; Schuh, Julia
2010-01-01
In this paper the augmentation of worked examples with animations for teaching problem-solving skills in mathematics is advocated as an effective instructional method. First, in a cognitive task analysis different knowledge prerequisites are identified for solving mathematical word problems. Second, it is argued that so-called hybrid animations…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billinge, S.
2010-03-22
Diffraction techniques are making progress in tackling the difficult problem of solving the structures of nanoparticles and nanoscale materials. The great gift of x-ray crystallography has made us almost complacent in our ability to locate the three-dimensional coordinates of atoms in a crystal with a precision of around 10^-4 nm. However, the powerful methods of crystallography break down for structures in which order only extends over a few nanometers. In fact, as we near the one hundred year mark since the birth of crystallography, we face a resilient frontier in condensed matter physics: our inability to routinely and robustly determine the structure of complex nanostructured and amorphous materials. Knowing the structure and arrangement of atoms in a solid is so fundamental to understanding its properties that the topic routinely occupies the early chapters of every solid-state physics textbook. Yet what has become clear with the emergence of nanotechnology is that diffraction data alone may not be enough to uniquely solve the structure of nanomaterials. As part of a growing effort to incorporate the results of other techniques to constrain x-ray refinements, a method called 'complex modeling' offers a simple but elegant approach for combining information from spectroscopy with diffraction data to solve the structure of several amorphous and nanostructured materials. Crystallography just works, so we rarely question how and why this is so, yet understanding the physics of diffraction can be very helpful as we consider the nanostructure problem. The relationship between the electron density distribution in three dimensions (i.e., the crystal structure) and an x-ray diffraction pattern is well established: the measured intensity distribution in reciprocal space is the square of the Fourier transform of the autocorrelation function ⟨ρ(r)ρ(r + r′)⟩ of the electron density distribution ρ(r). The fact that we get the autocorrelation function (rather than just the density distribution) by Fourier transforming the measured intensity leaves us with a very tricky inverse problem: we have to extract the density from its autocorrelation function. The direct problem of predicting the diffraction intensity given a particular density distribution is trivial, but the inverse, unraveling from the intensity distribution the density that gives rise to it, is a highly nontrivial problem in global optimization. In crystallography, this challenging, nontrivial task is sometimes referred to as the 'phase problem.' The diffraction pattern is a wave-interference pattern, but we measure only the intensities (the squares of the waves), not the wave amplitudes. To get the amplitude, you take the square root of the intensity I, but in so doing you lose any knowledge of the phase of the wave φ, and half the information needed to reconstruct the density is lost. When solving such inverse problems, you hope you can start with a uniqueness theorem that reassures you that, under ideal conditions, there is only one solution: one density distribution that corresponds to the measured intensity. Then you have to establish that your data set contains sufficient information to constrain that unique solution.
This is a problem from information theory that originated with the Reverend Thomas Bayes' work in the 18th century and the work of Nyquist and Shannon in the 20th century, and describes the fact that the degrees of freedom in the model must not exceed the number of pieces of independent information in the data. Finally, you need an efficient algorithm for doing the reconstruction. This is exactly how crystallography works. The information is in the form of Bragg peak intensities and the degrees of freedom are the atomic coordinates. Crystal symmetry lets us confine the model to the contents of a unit cell, rather than all of the atoms in the crystal, keeping the degrees of freedom admirably small in number. A measurement yields a multitude of Bragg peak intensities, providing ample redundant intensity information to make up for the lost phases. Finally, there are highly efficient algorithms, such as 'direct methods,' that make excellent use of the available information and constraints to find the solution quickly from a horrendously large search space. The problem is often so overconstrained that we can cavalierly throw away lots of directional information. In particular, even though Bragg peaks are orientationally averaged to a 1D function in a powder diffraction measurement, we still can get a 3D structural solution. Now it becomes easy to understand the enormous challenge of solving nanostructures: the information content in the data is degraded while the complexity of the model is much greater.
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration is a method of oil exploration that uses seismic information: by inverting seismic information, useful information about the reservoir parameters can be obtained, so that exploration can be carried out effectively. Pre-stack data are characterised by a large volume and abundant information, and their inversion yields rich information about the reservoir parameters. Owing to the large amount of pre-stack seismic data, existing single-machine environments cannot meet the computational needs; thus, an efficient and fast method for solving the inversion problem of pre-stack seismic data is urgently needed. Optimising the elastic parameters with a genetic algorithm easily falls into a local optimum, which degrades the inversion, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula and improves the genetic operations of the algorithm; the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
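Gardner's empirical velocity-density relation, rho = a * Vp^b with the classic constants a = 0.31 and b = 0.25 (Vp in m/s, rho in g/cm^3), shows how such a population initialisation could look. The sketch below is an assumed illustration, including a hypothetical Vp/Vs ratio; it is not the authors' code:

```python
import numpy as np

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's empirical relation rho = a * Vp**b
    (Vp in m/s, rho in g/cm^3): a common way to tie density
    to P-velocity when seeding an elastic-parameter population."""
    return a * np.asarray(vp)**b

def init_population(n_individuals, vp_lo, vp_hi, rng):
    """Seed a GA population: Vp drawn uniformly, density from Gardner,
    Vs from an assumed fixed Vp/Vs ratio (illustrative choice)."""
    vp = rng.uniform(vp_lo, vp_hi, n_individuals)
    vs = vp / 1.8                       # assumed Vp/Vs ratio
    rho = gardner_density(vp)
    return np.column_stack([vp, vs, rho])

rng = np.random.default_rng(6)
print(init_population(5, 2000.0, 4000.0, rng))   # columns: Vp, Vs, rho
```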
Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition
NASA Astrophysics Data System (ADS)
Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen
2017-04-01
Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but there are still many challenges in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is very reliable. WMDFWI, in combination with an adaptive multi-step inversion strategy, obtains more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Numerical anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct the low-frequency information, lower the dominant frequency in the adjoint source, and resist noise.
Freyhult, Eva; Moulton, Vincent; Ardell, David H.
2006-01-01
Sequence logos are stacked bar graphs that generalize the notion of consensus sequence. They employ entropy statistics very effectively to display variation in a structural alignment of sequences of a common function, while emphasizing its over-represented features. Yet sequence logos cannot display features that distinguish functional subclasses within a structurally related superfamily nor do they display under-represented features. We introduce two extensions to address these needs: function logos and inverse logos. Function logos display subfunctions that are over-represented among sequences carrying a specific feature. Inverse logos generalize both sequence logos and function logos by displaying under-represented, rather than over-represented, features or functions in structural alignments. To make inverse logos, a compositional inverse is applied to the feature or function frequency distributions before logo construction, where a compositional inverse is a mathematical transform that makes common features or functions rare and vice versa. We applied these methods to a database of structurally aligned bacterial tDNAs to create highly condensed, birds-eye views of potentially all so-called identity determinants and antideterminants that confer specific amino acid charging or initiator function on tRNAs in bacteria. We recovered both known and a few potentially novel identity elements. Function logos and inverse logos are useful tools for exploratory bioinformatic analysis of structure–function relationships in sequence families and superfamilies. PMID:16473848
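A compositional inverse of the kind described, one that makes common features rare and vice versa, can be realized by renormalized reciprocals; a minimal sketch, where the pseudocount eps is an assumption added to guard against zero frequencies:

```python
import numpy as np

def compositional_inverse(p, eps=1e-9):
    """Map a frequency vector to its 'inverse' composition: take
    reciprocals and renormalize to sum to one, so common features
    become rare and rare features become common."""
    p = np.asarray(p, float) + eps   # guard against zero frequencies
    q = 1.0 / p
    return q / q.sum()

# One alignment column with base frequencies (A, C, G, U)
freq = np.array([0.70, 0.20, 0.05, 0.05])
print(compositional_inverse(freq))   # rare letters now dominate
```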
Regional inverse modeling for high reactive species with PYVAR-CHIMERE
NASA Astrophysics Data System (ADS)
Fortems-Cheiney, A.; Pison, I.; Dufour, G.; Broquet, G.; Costantino, L.
2017-12-01
The degradation of air quality is a worldwide environmental problem: according to the World Health Organization (WHO), 92% of the world's population breathed polluted air in 2016. A number of air pollutants associated with respiratory disease and shortened life expectancy play a particularly important role in global outdoor air pollution. In addition to threatening both human health and ecosystems, these gaseous air pollutants, including nitrogen oxides (NOx=NO+NO2), sulfur dioxide (SO2), ammonia (NH3), and volatile organic compounds (VOCs), can be precursors of ozone (O3) and Particulate Matter (PM). Without strong scientific backing to determine their different sources, the regulations needed to improve air quality will not be effective. To date, only chemistry-transport models (CTMs) are able to describe pollutant concentrations at any location in the world and their evolution in the atmosphere. Consequently, they have become essential tools for studying air quality. However, CTMs are hampered by incomplete information on gaseous precursors, and one of the main shortcomings in simulating gaseous pollutant budgets is the lack of high spatio-temporal variability in the emission estimates provided as model inputs. For all these reasons, an inverse system called PYVAR-CHIMERE has been developed, operating in synergy between a CTM and atmospheric observations, and adjusted for highly reactive species such as NO2, the species of interest here. We present the first results of this Bayesian variational inverse method for the quantification of NO2 emissions both over Europe (in March 2011) and over China (in January 2015), with a spatial resolution of 0.5° × 0.5° and a weekly temporal resolution, constrained by surface measurements and OMI NO2 satellite observations.
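Variational inverse systems of the PYVAR type minimize a Bayesian cost function; a generic form is sketched below (the exact control vector and operators in PYVAR-CHIMERE may differ):

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
+ \tfrac{1}{2}\,\big(H(\mathbf{x})-\mathbf{y}\big)^{\mathsf T}\mathbf{R}^{-1}\big(H(\mathbf{x})-\mathbf{y}\big),
```

where x is the vector of emission scaling factors, x_b the prior inventory, y the surface and satellite observations, H the CTM-based observation operator, and B and R the prior- and observation-error covariance matrices.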
Highly-optimized TWSM software package for seismic diffraction modeling adapted for GPU-cluster
NASA Astrophysics Data System (ADS)
Zyatkov, Nikolay; Ayzenberg, Alena; Aizenberg, Arkady
2015-04-01
Oil-producing companies are concerned with increasing the resolution of seismic data for complex oil-and-gas bearing deposits connected with salt domes, basalt traps, reefs, lenses, etc. Known methods of seismic wave theory define the shape of hydrocarbon accumulations with insufficient resolution, since they do not account for multiple diffractions explicitly. We elaborate an alternative seismic wave theory in terms of operators of propagation in layers and reflection-transmission at curved interfaces. An approximation of this theory is realized in the seismic frequency range as the Tip-Wave Superposition Method (TWSM). TWSM, based on the operator theory, allows evaluation of the wavefield in bounded domains/layers with geometrical shadow zones (in nature: salt domes, basalt traps, reefs, lenses, etc.), accounting for so-called cascade diffraction. Cascade diffraction includes edge waves from sharp edges, creeping waves near concave parts of interfaces, waves of the whispering galleries near convex parts of interfaces, etc. The basic algorithm of the TWSM package is based on multiplication of large-size matrices (up to hundreds of terabytes in size). We use advanced information technologies for effective realization of the numerical procedures of TWSM. In particular, we actively use NVIDIA CUDA technology and GPU accelerators, which significantly improve the performance of the TWSM software package and are important when applying it to direct and inverse problems. The accuracy, stability and efficiency of the algorithm are justified by numerical examples with curved interfaces. The TWSM package and its separate components can be used in different modeling tasks, such as planning of acquisition systems, physical interpretation of laboratory modeling, and modeling of individual waves of different types, as well as in some inverse tasks such as imaging in the case of laterally inhomogeneous overburden and AVO inversion.
Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch
2016-07-01
We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
Importance of a 3D forward modeling tool for surface wave analysis methods
NASA Astrophysics Data System (ADS)
Pageot, Damien; Le Feuvre, Mathieu; Donatienne, Leparoux; Philippe, Côte; Yann, Capdeville
2016-04-01
In recent years, seismic surface-wave analysis methods (SWM) have been widely developed and tested in the context of subsurface characterization, and have demonstrated their effectiveness for sounding and monitoring purposes, e.g., high-resolution tomography of the principal geological units of California or real-time monitoring of the Piton de la Fournaise volcano. Historically, these methods have mostly been developed under the assumption of a semi-infinite 1D layered medium without topography. The forward modeling is generally based on a Thomson-Haskell matrix modeling algorithm, and the inversion is driven by Monte-Carlo sampling. Given their efficiency, SWM have been transferred to other scales, including civil-engineering structures, in order to, e.g., determine the so-called Vs30 parameter or assess other critical constructional parameters in pavement engineering. However, at this scale, many structures often exhibit 3D surface variations which drastically limit the efficiency of SWM application. Indeed, even in the case of a homogeneous structure, 3D geometry can bias the dispersion diagram of Rayleigh waves, even producing discontinuous phase-velocity curves, which drastically impacts the 1D mean velocity model obtained from dispersion inversion. Taking advantage of accessible high-performance computing centers and advances in wave-propagation modeling algorithms, it is now possible to use a 3D elastic forward modeling algorithm instead of the Thomson-Haskell method in the SWM inversion process. We use a parallelized 3D elastic modeling code based on the spectral element method, which yields accurate synthetic data with very low numerical dispersion at a reasonable numerical cost. In this study, we choose dike embankments as an illustrative example. We first show that their longitudinal geometry may have a significant effect on dispersion diagrams of Rayleigh waves. Then, we demonstrate the necessity of 3D elastic modeling as a forward problem for the inversion of dispersion curves.
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
VASP- VARIABLE DIMENSION AUTOMATIC SYNTHESIS PROGRAM
NASA Technical Reports Server (NTRS)
White, J. S.
1994-01-01
VASP is a variable dimension Fortran version of the Automatic Synthesis Program, ASP. The program is used to implement Kalman filtering and control theory. Basically, it consists of 31 subprograms for solving most modern control problems in linear, time-variant (or time-invariant) control systems. These subprograms include operations of matrix algebra, computation of the exponential of a matrix and its convolution integral, and the solution of the matrix Riccati equation. The user calls these subprograms by means of a FORTRAN main program, and so can easily obtain solutions to most general problems of extremization of a quadratic functional of the state of the linear dynamical system. Particularly, these problems include the synthesis of the Kalman filter gains and the optimal feedback gains for minimization of a quadratic performance index. VASP, as an outgrowth of the Automatic Synthesis Program, has the following improvements: more versatile programming language; more convenient input/output format; some new subprograms which consolidate certain groups of statements that are often repeated; and variable dimensioning. The pertinent difference between the two programs is that VASP has variable dimensioning and more efficient storage. The documentation for the VASP program contains a VASP dictionary and example problems. The dictionary contains a description of each subroutine and instructions on its use. The example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and a pseudo-inverse of a matrix. This program is written in FORTRAN IV and has been implemented on the IBM 360. The VASP program was developed in 1971.
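The original FORTRAN IV routines are not reproduced here; a minimal modern sketch of the same core computation (solving the matrix Riccati equation and forming the optimal feedback gain for a quadratic performance index), assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant: x' = Ax + Bu; minimise J = ∫ (x'Qx + u'Ru) dt
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

P = solve_continuous_are(A, B, Q, R)   # solves the matrix Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -Kx
print(K)
```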
Communicating With the So-Called Disadvantaged -- Can We Find a Common Ground?
ERIC Educational Resources Information Center
Niemi, John A.
This paper focuses on some of the problems of culturally different groups and draws implications for the adult educator. These problems are basically problems of communication, caused by the apartness of these groups from the dominant society. The communication process is defined as involving an exchange of meaning…
Preliminary Validation of a New Clinical Tool for Identifying Problem Video Game Playing
ERIC Educational Resources Information Center
King, Daniel Luke; Delfabbro, Paul H.; Zajac, Ian T.
2011-01-01
Research has estimated that between 6 and 13% of individuals who play video games do so excessively. However, the methods and definitions used to identify "problem" video game players often vary considerably. This research presents preliminary validation data for a new measure of problematic video game play called the Problem Video Game…
NASA Astrophysics Data System (ADS)
Atzberger, C.
2013-12-01
The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. This contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results compared to the traditional pixel-based inversion. [Figure: principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-nIR feature space using PROSAIL. (A) spectral trajectory ('soil trajectory') obtained for one leaf angle (ALA) and one soil brightness (αsoil) as LAI varies between 0 and 10; (B) 'soil trajectories' for 5 soil-brightness values and three leaf angles; (C) the ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point; (D) object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding 3×3 window. The black dots (plus the rectangle = central pixel) represent the hypothetical positions of nine pixels within the window.] Assuming that over short distances (about 1 pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil so that the resulting 'soil trajectory' best fits the nine measured pixels. [Figure: ground-measured vs. retrieved LAI values for three crops; left: proposed object-based approach, right: pixel-based inversion.]
Conductivity of an inverse lyotropic lamellar phase under shear flow
NASA Astrophysics Data System (ADS)
Panizza, P.; Soubiran, L.; Coulon, C.; Roux, D.
2001-08-01
We report conductivity measurements on solutions of closed compact monodisperse multilamellar vesicles (the so-called ``onion texture'') formed by shearing an inverse lyotropic lamellar Lα phase. The conductivity measured in different directions as a function of the applied shear rate reveals a small anisotropy of the onion structure due to the existence of free oriented membranes. The results are analyzed in terms of a simple model that allows one to deduce the conductivity tensor of the Lα phase itself and the proportion of free oriented membranes. The variation of these two parameters is measured along a dilution line and discussed. The high value of the conductivity perpendicular to the layers with respect to that of solvent suggests the existence of a mechanism of ionic transport through the insulating solvent.
NASA Astrophysics Data System (ADS)
Takechi, M.; Suzuki, S.; Nishimura, D.; Fukuda, M.; Ohtsubo, T.; Nagashima, M.; Suzuki, T.; Yamaguchi, T.; Ozawa, A.; Moriguchi, T.; Ohishi, H.; Sumikama, T.; Geissel, H.; Ishihara, M.; Aoi, N.; Chen, Rui-Jiu; Fang, De-Qing; Fukuda, N.; Fukuoka, S.; Furuki, H.; Inabe, N.; Ishibashi, Y.; Itoh, T.; Izumikawa, T.; Kameda, D.; Kubo, T.; Lee, C. S.; Lantz, M.; Ma, Yu-Gang; Matsuta, K.; Mihara, M.; Momota, S.; Nagae, D.; Nishikiori, R.; Niwa, T.; Ohnishi, T.; Okumura, K.; Ogura, T.; Sakurai, H.; Sato, K.; Shimbara, Y.; Suzuki, H.; Takeda, H.; Takeuchi, S.; Tanaka, K.; Uenishi, H.; Winkler, M.; Yanagisawa, Y.; Watanabe, S.; Minomo, K.; Tagami, S.; Shimada, M.; Kimura, M.; Matsumoto, T.; Shimizu, Y. R.; Yahiro, M.
2014-03-01
Reaction cross sections (σR) for 24-38Mg on C targets at energies of around 240 MeV/nucleon have been measured precisely at RIBF, RIKEN, for the purpose of obtaining crucial information on the changes of nuclear structure in unstable nuclei, especially around the so-called "island of inversion" region. In the island of inversion, which includes neutron-rich Ne, Na, and Mg isotopes, the vanishing of the N = 20 magic number for neutrons has been discussed along with nuclear deformation. The present results suggest deformation features of Mg isotopes and show a large cross section for the weakly-bound nucleus 37Mg, which could be caused by neutron-halo formation.
Comorbidity of Conduct Problems and ADHD: Identification of "Fledgling Psychopaths".
ERIC Educational Resources Information Center
Gresham, Frank M.; Lane, Kathleen L.; Lambros, Katina M.
2000-01-01
This article reviews the characteristics of children who exhibit a behavior pattern characterized by hyperactivity-impulsivity-inattention coupled with conduct problems such as fighting, stealing, truancy, noncompliance, and arguing. Procedures for early identification of these so-called "fledgling psychopaths" are described and discussed.…
Dynamically consistent hydrography and absolute velocity in the eastern North Atlantic Ocean
NASA Technical Reports Server (NTRS)
Wunsch, Carl
1994-01-01
The problem of mapping a dynamically consistent hydrographic field and associated absolute geostrophic flow in the eastern North Atlantic between 24 deg and 36 deg N is related directly to the solution of the so-called thermocline equations. A nonlinear optimization problem involving Needler's P equation is solved to find the hydrography and resulting flow that minimizes the vertical mixing above about 1500 m in the ocean and is simultaneously consistent with the observations. A sharp minimum (at least in some dimensions) is found, apparently corresponding to a solution nearly conserving potential vorticity and with vertical eddy coefficient less than about 10(exp -5) sq m/s. Estimates of `residual' quantities such as eddy coefficients are extremely sensitive to slight modifications to the observed fields. Boundary conditions, vertical velocities, etc., are a product of the optimization and produce estimates differing quantitatively from prior ones relying directly upon observed hydrography. The results are generally insensitive to particular elements of the solution methodology, but many questions remain concerning the extent to which different synoptic sections can be asserted to represent the same ocean. The method can be regarded as a practical generalization of the beta spiral and geostrophic balance inverses for the estimate of absolute geostrophic flows. Numerous improvements to the methodology used in this preliminary attempt are possible.
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
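As an illustration of the goal (not the authors' exact maximum-likelihood estimator), a simplified sketch that enforces a condition-number bound by clipping the sample eigenvalues:

```python
import numpy as np

def cond_regularized_cov(X, kappa=30.0):
    """Shrink sample-covariance eigenvalues into [lmax/kappa, lmax] so the
    estimate is well-conditioned (condition number <= kappa)."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    lmax = vals.max()
    vals_clipped = np.clip(vals, lmax / kappa, lmax)
    return vecs @ np.diag(vals_clipped) @ vecs.T

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))   # "large p small n": p=50 > n=20, S singular
Shat = cond_regularized_cov(X)
print(np.linalg.cond(Shat))         # bounded by kappa, hence invertible
```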
Aspects of the inverse problem for the Toda chain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlowski, K. K., E-mail: karol.kozlowski@u-bourgogne.fr
We generalize Babelon's approach to equations in dual variables so as to be able to treat new types of operators which we build out of the sub-constituents of the model's monodromy matrix. Further, we also apply Sklyanin's recent monodromy matrix identities so as to obtain equations in dual variables for yet other operators. The schemes discussed in this paper appear to be universal and thus, in principle, applicable to many models solvable through the quantum separation of variables.
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
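A toy 1D illustration of the matrix-coding idea, with an orthonormal DCT standing in for the paper's lossy source coding of the convolution matrix (the basis, blur model, and threshold are illustrative assumptions):

```python
import numpy as np
from scipy.fft import dct, idct

n = 256
# Dense space-varying blur: Gaussian whose width grows slowly with position
x_pos = np.arange(n)
sigma = 2.0 + 4.0 * x_pos / n
A = np.exp(-0.5 * ((x_pos[None, :] - x_pos[:, None]) / sigma[:, None]) ** 2)
A /= A.sum(axis=1, keepdims=True)

# "Code" the operator: represent A in an orthonormal DCT basis and threshold
S = dct(dct(A, axis=0, norm='ortho'), axis=1, norm='ortho')  # S = T A T'
S[np.abs(S) < 1e-3 * np.abs(S).max()] = 0.0    # lossy coding -> sparse matrix
print(f"kept {100 * np.count_nonzero(S) / S.size:.1f}% of entries")

# Fast matvec: y = A x = T' S T x, applied with fast transforms
x = np.random.default_rng(0).standard_normal(n)
y_fast = idct(S @ dct(x, norm='ortho'), norm='ortho')
y_exact = A @ x
print(np.max(np.abs(y_fast - y_exact)))        # small approximation error
```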
NASA Astrophysics Data System (ADS)
Huhn, Stefan; Peeling, Derek; Burkart, Maximilian
2017-10-01
With the availability of die face design tools and incremental solver technologies to provide detailed forming feasibility results in a timely fashion, the use of inverse solver technologies and resulting process improvements during the product development process of stamped parts often is underestimated. This paper presents some applications of inverse technologies that are currently used in the automotive industry to streamline the product development process and greatly increase the quality of a developed process and the resulting product. The first focus is on the so-called target strain technology. Application examples will show how inverse forming analysis can be applied to support the process engineer during the development of a die face geometry for Class `A' panels. The drawing process is greatly affected by the die face design and the process designer has to ensure that the resulting drawn panel will meet specific requirements regarding surface quality and a minimum strain distribution to ensure dent resistance. The target strain technology provides almost immediate feedback to the process engineer during the die face design process if a specific change of the die face design will help to achieve these specific requirements or will be counterproductive. The paper will further show how an optimization of the material flow can be achieved through the use of a newly developed technology called Sculptured Die Face (SDF). The die face generation in SDF is more suited to be used in optimization loops than any other conventional die face design technology based on cross section design. A second focus in this paper is on the use of inverse solver technologies for secondary forming operations. The paper will show how the application of inverse technology can be used to accurately and quickly develop trim lines on simple as well as on complex support geometries.
Steven J. Ostro: Pioneer in Asteroid Lightcurve Inversion
NASA Astrophysics Data System (ADS)
Harris, Alan W.
2009-09-01
In 1906, Henry Norris Russell wrote a landmark paper (Astrophys. J. 24, 1-18, 1906) that set the field of lightcurve inversion back by more than three quarters of a century, until Steve Ostro and Robert Connolly published a paper on "convex profile inversion” (Icarus 57, 443-463, 1984). Russell's stifling contribution was innocent enough, and entirely correct: he showed that with "two cans of paint", one can decorate any arbitrarily shaped body in an infinite number of ways to yield any particular lightcurve, even, for example, a cigar shape that is brightest viewed end-on. This sufficed to discourage serious mathematical attack on the problem until Ostro & Connolly's landmark paper of 1984. They showed that if you have only "one can of paint", that is, in the absence of albedo variegation, the problem is tractable and one can make remarkable progress in lightcurve inversion to obtain shapes, or at least the "convex profile” of the real shape. As we now know, nature appears to have only one can of paint (per asteroid), that is, asteroids seem to paint themselves grey so that the uniform reflectivity assumption is quite excellent. Both radar and optical lightcurve inversion techniques are now quite mature, thanks to Steve's pioneering insights.
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed, so a regularization method is required to replace the original problem with a well-posed one; a solution strategy then amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) for estimating model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices, we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
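As a concrete instance of the regularization step described above (illustrative only; the study itself uses adjoint-based methods with ecological constraints rather than a simple penalty), a minimal Tikhonov sketch for a linear model h(x) = Hx:

```python
import numpy as np

def tikhonov(H, y, alpha):
    """Solve min_x ||Hx - y||^2 + alpha*||x||^2, a well-posed replacement
    for an ill-posed linear inverse problem."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)

rng = np.random.default_rng(2)
H = rng.standard_normal((50, 80))      # underdetermined: no unique solution
x_true = rng.standard_normal(80)
y = H @ x_true + 0.01 * rng.standard_normal(50)
x_hat = tikhonov(H, y, alpha=0.1)      # unique, stable estimate
print(np.linalg.norm(H @ x_hat - y))   # data fit of the regularized solution
```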
A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola
2018-04-01
This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth-distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets, the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as a main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass the subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters. The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs, to take full advantage of parallel computer architectures. In the case of a large number of data, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute forward solutions.
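A minimal fixed-dimension Metropolis-Hastings step of the kind used inside such samplers is sketched below (illustrative; the full rjMcMC adds trans-dimensional birth/death moves whose acceptance ratio includes the prior and proposal terms for the dimension change, and the toy forward model here stands in for a real MT response):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_likelihood(model, data, forward, sigma=0.05):
    """Gaussian misfit between observed data and the forward response."""
    resid = data - forward(model)
    return -0.5 * np.sum((resid / sigma) ** 2)

def mh_step(model, data, forward, step=0.1):
    """Perturb one layer's log-conductivity and accept/reject (MH rule)."""
    cand = model.copy()
    i = rng.integers(len(model))
    cand[i] += step * rng.standard_normal()
    log_a = log_likelihood(cand, data, forward) - log_likelihood(model, data, forward)
    return cand if np.log(rng.uniform()) < log_a else model

# Toy "forward model": smoothed response of layer log-conductivities
forward = lambda m: np.convolve(m, np.ones(3) / 3.0, mode='same')
true = np.array([-2.0, -1.0, -3.0, -2.5])
data = forward(true) + 0.05 * rng.standard_normal(4)
m = np.zeros(4)
for _ in range(5000):
    m = mh_step(m, data, forward)   # chain samples approximate the PPD
print(m.round(2))
```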
Path Planning For A Class Of Cutting Operations
NASA Astrophysics Data System (ADS)
Tavora, Jose
1989-03-01
Optimizing processing time in some contour-cutting operations requires solving the so-called no-load path problem. This problem is formulated and an approximate resolution method (based on heuristic search techniques) is described. Results for real-life instances (clothing layouts in the apparel industry) are presented and evaluated.
NASA Astrophysics Data System (ADS)
Bradshaw, John; Siegel, E.
2010-03-01
``Sciences''/SEANCES(!!!) rampant UNethics!!! WITNESS: Yau v Perelman Poincare-conj.-pf. [Naser, NewYorker(8/06)]; digits log- law Siegel[AMS Nat.Mtg.(02)-Abs.973-60-124] inversion to ONLY BEQS: Newcomb(1881)<<
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-04-07
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
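The workhorse of the fast first-order schemes mentioned above is the closed-form proximal operator of the two-level ℓ₁/ℓ₂ mixed norm, i.e., group soft-thresholding applied to each source's time course; a minimal sketch with illustrative names:

```python
import numpy as np

def prox_l21(X, alpha):
    """Proximal operator of alpha * sum_i ||X[i, :]||_2 (group soft-thresholding):
    each row (one source's time course) is shrunk toward zero; weak rows vanish
    entirely, which yields spatially focal, temporally smooth estimates."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - alpha / np.maximum(norms, 1e-12), 0.0)
    return X * scale

X = np.array([[3.0, 4.0],    # strong source: kept, shrunk
              [0.1, 0.2]])   # weak source: zeroed out
print(prox_l21(X, alpha=1.0))
```

Embedded in an accelerated scheme such as FISTA, this single operation is what makes the ℓ₁/ℓ₂ prior nearly as cheap to compute as the plain MNE solution.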
On improving the algorithm efficiency in the particle-particle force calculations
NASA Astrophysics Data System (ADS)
Kozynchenko, Alexander I.; Kozynchenko, Sergey A.
2016-09-01
The problem of calculating inter-particle forces in particle-particle (PP) simulation models takes an important place in scientific computing. Such simulation models are used in diverse scientific applications arising in astrophysics, plasma physics, particle accelerators, etc., where long-range forces are considered. Inverse-square laws, such as Coulomb's law of electrostatic forces and Newton's law of universal gravitation, are examples of laws pertaining to long-range forces. The standard naïve PP method outlined, for example, by Hockney and Eastwood [1] is straightforward, processing all pairs of particles in a double nested loop. The PP algorithm provides the best accuracy of all possible methods, but its computational complexity is O(Np^2), where Np is the total number of particles involved. The low efficiency of the PP algorithm is a challenging issue in cases where high accuracy is required. An example can be taken from charged-particle beam dynamics, where so-called macro-particles are used to compute the beam's own space charge (see, e.g., Humphries Jr. [2], Kozynchenko and Svistunov [3]).
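The naive PP double loop described above, for a Coulomb-like inverse-square law, looks as follows (a minimal sketch; units and the softening term eps are illustrative):

```python
import numpy as np

def pp_forces(pos, charge, eps=1e-3):
    """Naive particle-particle force sum: O(Np^2) over all pairs.
    Inverse-square (Coulomb-like) law with a small softening eps."""
    n = len(pos)
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[i] - pos[j]
            d = np.sqrt(r @ r) + eps
            F[i] += charge[i] * charge[j] * r / d**3
    return F

rng = np.random.default_rng(4)
pos = rng.uniform(-1, 1, size=(200, 3))
charge = np.ones(200)
F = pp_forces(pos, charge)   # exact pairwise forces, but quadratic cost
print(F.shape)
```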
Three-dimensional electrical impedance tomography based on the complete electrode model.
Vauhkonen, P J; Vauhkonen, M; Savolainen, T; Kaipio, J P
1999-09-01
In electrical impedance tomography an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. It is often assumed that the injected currents are confined to the two-dimensional (2-D) electrode plane and the reconstruction is based on 2-D assumptions. However, the currents spread out in three dimensions and, therefore, off-plane structures have significant effect on the reconstructed images. In this paper we propose a finite element-based method for the reconstruction of three-dimensional resistivity distributions. The proposed method is based on the so-called complete electrode model that takes into account the presence of the electrodes and the contact impedances. Both the forward and the inverse problems are discussed and results from static and dynamic (difference) reconstructions with real measurement data are given. It is shown that in phantom experiments with accurate finite element computations it is possible to obtain static images that are comparable with difference images that are reconstructed from the same object with the empty (saline filled) tank as a reference.
Self-Assembly of Octopus Nanoparticles into Pre-Programmed Finite Clusters
NASA Astrophysics Data System (ADS)
Halverson, Jonathan; Tkachenko, Alexei
2012-02-01
The precise control of the spatial arrangement of nanoparticles (NP) is often required to take full advantage of their novel optical and electronic properties. NPs have been shown to self-assemble into crystalline structures using either patchy surface regions or complementary DNA strands to direct the assembly. Due to a lack of specificity of the interactions these methods lead to only a limited number of structures. An emerging approach is to bind ssDNA at specific sites on the particle surface making so-called octopus NPs. Using octopus NPs we investigate the inverse problem of the self-assembly of finite clusters. That is, for a given target cluster (e.g., arranging the NPs on the vertices of a dodecahedron) what are the minimum number of complementary DNA strands needed for the robust self-assembly of the cluster from an initially homogeneous NP solution? Based on the results of Brownian dynamics simulations we have compiled a set of design rules for various target clusters including cubes, pyramids, dodecahedrons and truncated icosahedrons. Our approach leads to control over the kinetic pathway and has demonstrated nearly perfect yield of the target.
MToS: A Tree of Shapes for Multivariate Images.
Carlinet, Edwin; Géraud, Thierry
2015-12-01
The topographic map of a gray-level image, also called tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has been proved very useful to achieve many image processing and pattern recognition tasks. Its definition relies on the total ordering of pixel values, so this representation does not exist for color images, or more generally, multivariate images. Common workarounds, such as marginal processing, or imposing a total order on data, are not satisfactory and yield many problems. This paper presents a method to build a tree-based representation of multivariate images, which features marginally the same properties of the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values, but we only rely on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast invariant and self-dual representation of multivariate image is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.
Wavelength-dependent excess permittivity as indicator of kerosene in diesel oil.
Kanyathare, Boniphace; Peiponen, Kai-Erik
2018-04-20
Adulteration of diesel oil by kerosene is a serious problem because of air pollution resulting from car exhaust gases. The objective of this study was to develop a relatively simple optical measurement and data analysis method to screen low-adulterated diesel oils. For this purpose, we introduce the utilization of refractive index measurement with a refractometer, scanning of visible-near-infrared transmittance, transmittance data inversion using the singly subtractive Kramers-Kronig relation, and exploitation of so-called wavelength-dependent relative excess permittivity. It is shown for three different diesel oil grades, adulterated with kerosene, that the excess permittivity is a powerful measure for screening fake diesel oils. The excess relative permittivity of such binary mixtures also reveals hidden spectral fingerprints that are neither visible in dispersion data alone nor in spectral transmittance measurements alone. We believe that the excess permittivity data are useful in the case of screening adulteration of diesel oil by kerosene and can further be explored for practical sensing solutions, e.g., in quality inspection of diesel oils in refineries.
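For reference, one standard form of the singly subtractive Kramers-Kronig relation, which anchors the inversion with a single measured refractive-index value n(ω₀) to mitigate the finite spectral band of the transmittance data (the authors' exact implementation may differ):

```latex
n(\omega') - n(\omega_0) = \frac{2(\omega'^2 - \omega_0^2)}{\pi}\,
P\!\!\int_0^{\infty} \frac{\omega\, k(\omega)}
{(\omega^2 - \omega'^2)(\omega^2 - \omega_0^2)}\, d\omega,
```

where k is the extinction coefficient obtained from transmittance and P denotes the Cauchy principal value; the subtraction makes the integrand decay faster, so truncating it to the measured band introduces less error than the plain Kramers-Kronig relation.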
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems, depending on the a priori information required to obtain reliable solutions of inverse geophysical problems. In view of that classification, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error-free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem. By evaluating this nonuniqueness, the paper targets the improvement of refraction inversion algorithms and, as a result, the achievement of more realistic solutions. The nonuniqueness is examined initially by using a simple three-layer model, and the observations and conclusions of the three-layer study are then used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information; insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times, an observation that indicates smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, and finite data and model parameters. © Birkhäuser Verlag, Basel, 2005.
The Coronal Abundance Anomalies of M Dwarfs
NASA Astrophysics Data System (ADS)
Wood, Brian E.; Laming, J. Martin; Karovska, Margarita
2012-07-01
We analyze Chandra X-ray spectra of the M0 V+M0 V binary GJ 338. As quantified by X-ray surface flux, these are the most inactive M dwarfs ever observed with X-ray grating spectroscopy. We focus on measuring coronal abundances, in particular searching for evidence of abundance anomalies related to first ionization potential (FIP). In the solar corona and wind, low-FIP elements are overabundant, which is the so-called FIP effect. For other stars, particularly very active ones, an "inverse FIP effect" is often observed, with low-FIP elements being underabundant. For both members of the GJ 338 binary, we find evidence for a modest inverse FIP effect, consistent with expectations from a previously reported correlation between spectral type and FIP bias. This amounts to strong evidence that all M dwarfs should exhibit the inverse FIP effect phenomenon, not just the active ones. We take the first step toward modeling the inverse FIP phenomenon in M dwarfs, building on past work that has demonstrated that MHD waves coursing through coronal loops can lead to a ponderomotive force that fractionates elements in a manner consistent with the FIP effect. We demonstrate that in certain circumstances this model can also lead to an inverse FIP effect, pointing the way to more detailed modeling of M dwarf coronal abundances in the future.
Periodic Landau-Zener problem in long-range migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oksengendler, B. L.; Turaeva, N. N.
From studies of radiation effects in semiconductors at low temperatures, it is known that an interstitial atom migrates over a distance of up to 1000 Å (Watkins effect). The interpretation of this effect is based on the inversion of potential energy curves of an interstitial atom in semiconductors when it changes its charge. At low temperatures, a cascade of radiationless transitions can occur between the ground and excited states of a relocalized electron, which leads to the coherent tunneling of the interstitial atom through the lattice. The description of this effect using the scattering matrix S leads to the dispersion law and to an equation for the effective mass of such a quasiparticle, called an inversion.
Prada, Carlos F; Delprat, Alejandra; Ruiz, Alfredo
2011-02-01
The chromosomal relationships of the four martensis cluster species are among the most complex and intricate within the entire Drosophila repleta group, due to the so-called sharing of inversions. Here, we have revised these relationships using comparative mapping of bacterial artificial chromosome (BAC) clones on the salivary gland chromosomes. A physical map of chromosome 2 of Drosophila uniseta (one of the cluster members) was generated by in situ hybridization of 82 BAC clones from the physical map of the Drosophila buzzatii genome (an outgroup that represents the ancestral arrangement). By comparing the marker positions, we determined the number, order, and orientation of conserved chromosomal segments between chromosome 2 of D. buzzatii and D. uniseta. GRIMM software was used to infer that a minimum of five chromosomal inversions are necessary to transform the chromosome 2 of D. buzzatii into that of D. uniseta. Two of these inversions have been overlooked in previous cytological analyses. The five fixed inversions entail two breakpoint reuses because only nine syntenic segments and eight interruptions were observed. We tested for the presence of the five inversions fixed in D. uniseta in the other three species of the martensis cluster by in situ hybridization of eight breakpoint-bearing BAC clones. The results shed light on the chromosomal phylogeny of the martensis cluster, yet leave a number of questions open.
Bilinear Inverse Problems: Theory, Algorithms, and Applications
NASA Astrophysics Data System (ADS)
Ling, Shuyang
We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts: self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and proceeds in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, called renormalization and least-squares. The proposed methodology and inversion techniques are evaluated in a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the source retrieval after minimizing the representativity errors.
Zhou, Yu; Ren, Jie
2011-04-01
We put forward a new concept of a software oversampling mapping system for electrocardiogram (ECG) signals to assist research on the ECG inverse problem and to improve the generality of the mapping system and the quality of the mapped signals. We then developed a conceptual system based on a traditional ECG detecting circuit, LabVIEW and a DAQ card produced by National Instruments, and combined the newly developed oversampling method into the system. The results indicated that the system could map ECG signals accurately and that the quality of the signals was good. The improvement of hardware and enhancement of software make the system suitable for mapping in different situations. The primary development of the software for the oversampling mapping system was thus successful, and further research and development can make the system a powerful tool for studying the ECG inverse problem.
An inverse problem for a mathematical model of aquaponic agriculture
NASA Astrophysics Data System (ADS)
Bobak, Carly; Kunze, Herb
2017-01-01
Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist that study the system processes. In this paper, we present a system of ODEs that models the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
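Since the abstract does not reproduce the governing equations, the sketch below fits a hypothetical two-compartment fish-plant ODE system to manufactured data by plain least-squares trajectory matching with SciPy; the model, parameter names, and noise level are invented for illustration, and the collage-theorem approach used in the paper replaces this direct matching with minimization of the collage distance.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, z, r, k, a, b):
    F, P = z                          # fish and plant biomass (hypothetical model)
    return [r * F * (1 - F / k),      # logistic fish growth
            a * F - b * P]            # plants fed by fish waste, linear uptake

def simulate(theta, t_eval, z0):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), z0,
                    t_eval=t_eval, args=tuple(theta))
    return sol.y

t = np.linspace(0.0, 20.0, 40)
z0 = [1.0, 0.5]
true_theta = [0.4, 10.0, 0.3, 0.2]
data = simulate(true_theta, t, z0) \
       + 0.05 * np.random.default_rng(1).standard_normal((2, t.size))

# "Manufactured data" inversion: recover the parameters from noisy trajectories.
fit = least_squares(lambda th: (simulate(th, t, z0) - data).ravel(),
                    x0=[0.2, 5.0, 0.1, 0.1], bounds=(1e-6, np.inf))
print(fit.x)                          # recovered parameter estimates
```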
Upper-tropospheric inversion and easterly jet in the tropics
NASA Astrophysics Data System (ADS)
Fujiwara, M.; Xie, S.-P.; Shiotani, M.; Hashizume, H.; Hasebe, F.; Vömel, H.; Oltmans, S. J.; Watanabe, T.
2003-12-01
Shipboard radiosonde measurements revealed a persistent temperature inversion layer with a thickness of ~200 m at 12-13 km in a nonconvective region over the tropical eastern Pacific, along 2°N, in September 1999. Simultaneous relative humidity measurements indicated that the thin inversion layer was located at the top of a very wet layer with a thickness of 3-4 km, which was found to originate from the intertropical convergence zone (ITCZ) to the north. Radiative transfer calculations suggested that this upper tropospheric inversion (UTI) was produced and maintained by strong longwave cooling in this wet layer. A strong easterly jet stream was also observed at 12-13 km, centered around 4°-5°N. This easterly jet was in thermal wind balance, with meridional temperature gradients produced by the cloud and radiative processes in the ITCZ and the wet outflow. The jet, in turn, acted to spread inversions further downstream through the transport of radiatively active water vapor. This feedback mechanism may explain the omnipresence of temperature inversions and layering structures in trace gases in the tropical troposphere. Examination of high-resolution radiosonde data at other sites in the tropical Pacific indicates that similar UTIs often appear around 12-15 km. The UTI around 12-15 km may thus be characterized as one of the "climatological" inversions in the tropical troposphere, forming the lower boundary of the so-called tropical tropopause layer, where the tropospheric air is processed photochemically and microphysically before entering the stratosphere.
Merging information in geophysics: the triumvirate of geology, geophysics, and petrophysics
NASA Astrophysics Data System (ADS)
Revil, A.
2016-12-01
We know that geophysical inversion is non-unique and that many classical regularization techniques are unphysical. Despite this, we like to use them because of their simplicity and because geophysicists are often afraid to bias the inverse problem by introducing too much prior information (in a broad sense). It is also clear that geophysics is done on geological objects that are not random structures. Spending some time with a geologist in the field, before organizing a field geophysical campaign, is always an instructive experience. Finally, the measured properties are connected to physicochemical and textural parameters of the porous media and the interfaces between the various phases of a porous body. Some fundamental parameters may control the geophysical observations or their time variations. If we want to improve our geophysical tomograms, we need to be risk-takers and acknowledge, or rather embrace, the cross-fertilization arising from coupling geology, geophysics, and petrophysics. In this presentation, I will discuss various techniques to do so. They will include non-stationary geostatistical descriptors, facies deformation, cross-coupled petrophysical properties using petrophysical clustering, and image-guided inversion. I will show various applications to a number of relevant cases in hydrogeophysics. From these applications, it may become clear that there are many ways to address inverse or time-lapse inverse problems, and geophysicists have to be pragmatic regarding the methods used, depending on the degree of available prior information.
Tomographic Neutron Imaging using SIRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor, Jens; FINNEY, Charles E A; Toops, Todd J
2013-01-01
Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while metals are nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone-beam x-ray CT and other inverse problems.
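The abstract gives no details of PSIRT, so the following is a minimal sketch of the underlying SIRT iteration with a Tikhonov damping term added, assuming a dense non-negative system matrix A; in practice A is sparse and the relaxation factor lam would be tuned toward the near-optimal values mentioned above.

```python
import numpy as np

def sirt_tikhonov(A, b, alpha=0.01, lam=1.0, n_iter=200):
    """SIRT-style iteration for min ||Ax - b||^2 + alpha*||x||^2 (a sketch,
    not PSIRT itself). R and C are the usual inverse row/column sums."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Tikhonov term damps the update; lam is the relaxation factor.
        x = x + lam * C * (A.T @ (R * (b - A @ x)) - alpha * x)
    return x
```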
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter…
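A minimal sketch of that soft-thresholding iteration (ISTA-style), assuming the wavelet transform is supplied as an explicit orthonormal matrix W; the fixed step size is standard, while the paper's actual contribution, an automatic rule for choosing the parameter mu, is not reproduced here.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(K, y, W, mu, n_iter=300):
    """Sketch: minimize 0.5*||K W.T c - y||^2 + mu*||c||_1 over wavelet
    coefficients c, with W an orthonormal wavelet (analysis) matrix."""
    Kw = K @ W.T
    L = np.linalg.norm(Kw, 2) ** 2          # Lipschitz constant of the smooth part
    c = np.zeros(W.shape[0])
    for _ in range(n_iter):
        c = soft_threshold(c - Kw.T @ (Kw @ c - y) / L, mu / L)
    return W.T @ c                          # reconstruction in image space
```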
ERIC Educational Resources Information Center
Asghari, Amir
2012-01-01
This article is the story of a very non-standard, absolutely student-centered multivariable calculus course. The course advocates the so-called problem method in which the problems used are a bridge between what the learners know and what they are about to know. The main feature of the course is a unique conceptual story that runs through the…
Applying Global Workspace Theory to the Frame Problem
ERIC Educational Resources Information Center
Shanahan, Murray; Baars, Bernard
2005-01-01
The subject of this article is the frame problem, as conceived by certain cognitive scientists and philosophers of mind, notably Fodor for whom it stands as a fundamental obstacle to progress in cognitive science. The challenge is to explain the capacity of so-called informationally unencapsulated cognitive processes to deal effectively with…
Faith Matters: Race/Ethnicity, Religion and Substance Use
ERIC Educational Resources Information Center
Wallace, John M., Jr.; Myers, Valerie L.; Osai, Esohe R.
2004-01-01
As a result of stereotypes and limited research, many people perceive substance use, abuse, and dependence as problems resulting from the use of so-called "street drugs" like crack and heroin, used primarily by poor black and Hispanic populations. In reality, America's substance use problem encompasses not only these illegal drugs, but also the…
Two-Dimensional Crystallography Introduced by the Sprinkler Watering Problem
ERIC Educational Resources Information Center
De Toro, Jose A.; Calvo, Gabriel F.; Muniz, Pablo
2012-01-01
The problem of optimizing the number of circular sprinklers watering large fields is used to introduce, from a purely elementary geometrical perspective, some basic concepts in crystallography and to comment on a few size effects in condensed matter physics. We examine square and hexagonal lattices to build a function describing the so-called dry…
New Bernstein type inequalities for polynomials on ellipses
NASA Technical Reports Server (NTRS)
Freund, Roland; Fischer, Bernd
1990-01-01
New and sharp estimates are derived for the growth in the complex plane of polynomials known to have a curved majorant on a given ellipse. These so-called Bernstein type inequalities are closely connected with certain constrained Chebyshev approximation problems on ellipses. Also presented are some new results for approximation problems of this type.
Inverse statistical physics of protein sequences: a key issues review.
Cocco, Simona; Feinauer, Christoph; Figliuzzi, Matteo; Monasson, Rémi; Weigt, Martin
2018-03-01
In the course of evolution, proteins undergo important changes in their amino acid sequences, while their three-dimensional folded structure and their biological function remain remarkably conserved. Thanks to modern sequencing techniques, sequence data accumulate at an unprecedented pace. This provides large sets of so-called homologous, i.e. evolutionarily related, protein sequences, to which methods of inverse statistical physics can be applied. Using sequence data as the basis for the inference of Boltzmann distributions from samples of microscopic configurations or observables, it is possible to extract information about evolutionary constraints and thus protein function and structure. Here we give an overview of some biologically important questions, and how statistical-mechanics inspired modeling approaches can help to answer them. Finally, we discuss some open questions, which we expect to be addressed over the next years.
Facts about Child Care. NCJW Center for the Child Fact Sheet Number 3.
ERIC Educational Resources Information Center
National Council of Jewish Women, New York, NY. Center for the Child.
Some may believe that most married women do not really need to work; that nonmaternal care is bad for children; that the government is already spending a lot on child care; that the so-called child care crisis is not society's problem, but the parents' problem; and that interventions by the federal government will solve the child care problem.…
Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.
Liu, X; Zhai, Z
2007-12-01
Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.
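For intuition, the backward location probability has a closed form in the simplest textbook setting; the sketch below evaluates it for an assumed 1D advection-diffusion channel (the velocity, diffusivity, and geometry are invented, and the paper's multizone and CFD formulations are far more general).

```python
import numpy as np

def backward_location_pdf(x, x_det, tau, u, D):
    """Adjoint (backward) location probability for 1D advection-diffusion:
    how likely an instantaneous unit release at position x, a time tau before
    the detector hit at x_det, is to explain that hit. Textbook special case."""
    spread = 4.0 * D * tau
    return np.exp(-((x_det - x - u * tau) ** 2) / spread) / np.sqrt(np.pi * spread)

x = np.linspace(0.0, 10.0, 201)
pdf = backward_location_pdf(x, x_det=8.0, tau=30.0, u=0.1, D=0.05)
pdf /= pdf.sum() * (x[1] - x[0])    # renormalize over the truncated domain
```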
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.; Lamara, S.
2016-02-01
We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since kernels are stored on disk, it can be repeated many times for different regularization parameters without the need to solve the forward problem, making the approach accessible to Occam's method. Changes in the choice of misfit functional, weighting of data and selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion) which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern FORTRAN using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method by a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.
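As a toy version of point (5), reusing stored kernels to test many regularization weights without re-running the forward solver, consider the sketch below; the kernel matrix K, the residual vector dd, and the plain Tikhonov term are stand-ins for ASKI's actual kernel storage and regularization options, and complex frequency-domain values would be handled by stacking real and imaginary parts.

```python
import numpy as np

def model_update(K, dd, lambdas):
    """Try several Tikhonov weights using a stored sensitivity-kernel matrix K
    (rows: frequency/receiver samples, columns: model cells) and data residuals
    dd, with no further forward solves needed."""
    n = K.shape[1]
    updates = {}
    for lam in lambdas:
        A = np.vstack([K, lam * np.eye(n)])      # regularization-augmented system
        b = np.concatenate([dd, np.zeros(n)])
        updates[lam], *_ = np.linalg.lstsq(A, b, rcond=None)
    return updates
```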
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel
2015-04-01
We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are held completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) are supported, in both Cartesian and spherical frameworks. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski . Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be accessible conveniently. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications at different scales and geometries.
NASA Astrophysics Data System (ADS)
Avdeev, Maxim V.; Proshin, Yurii N.
2017-10-01
We theoretically study the proximity effect in thin-film layered ferromagnet (F)-superconductor (S) heterostructures in the F1F2S design. We consider the boundary value problem for the Usadel-like equations in the so-called 'dirty' limit. The 'latent' superconducting pairing interaction in the F layers is taken into account. The focus is on a recipe for experimentally preparing the state with so-called solitary superconductivity. We also propose and discuss a model of a superconducting spin valve based on F1F2S trilayers in the solitary superconductivity regime.
Comparison between different adsorption-desorption kinetics schemes in two dimensional lattice gas
NASA Astrophysics Data System (ADS)
Huespe, V. J.; Belardinelli, R. E.; Pereyra, V. D.; Manzi, S. J.
2017-12-01
Monte Carlo simulation is used to study adsorption-desorption kinetics in the framework of the kinetic lattice-gas model. Three schemes of the so-called hard dynamics and five schemes of the so-called soft dynamics were used for this purpose. It is observed that for the hard dynamics schemes, the equilibrium and non-equilibrium observables, such as adsorption isotherms, sticking coefficients, and thermal desorption spectra, show normal, physically sustainable behavior, while for the soft dynamics schemes, with the exception of transition state theory, the equilibrium and non-equilibrium observables exhibit several problems.
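As a schematic of the kind of simulation being compared, here is a minimal 2D lattice-gas adsorption/desorption Monte Carlo loop with a Metropolis ('hard') acceptance rule; the Hamiltonian, the parameter values, and the Glauber comment are illustrative rather than the paper's exact schemes.

```python
import numpy as np

def mc_adsorption(L=32, eps=-0.3, mu=-0.1, T=1.0, sweeps=200, seed=0):
    """Toy 2D lattice-gas adsorption/desorption with a Metropolis ('hard')
    acceptance rule; H = eps*sum_<ij> n_i n_j - mu*sum_i n_i, k_B = 1."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((L, L), dtype=int)
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        nn = (occ[(i + 1) % L, j] + occ[(i - 1) % L, j]
              + occ[i, (j + 1) % L] + occ[i, (j - 1) % L])
        dn = 1 - 2 * occ[i, j]              # +1: adsorb, -1: desorb
        dE = dn * (eps * nn - mu)           # energy change of the flip
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            occ[i, j] += dn
        # A 'soft' (e.g. Glauber-type) scheme would instead accept with
        # probability 1/(1 + exp(dE/T)); the paper compares such choices.
    return occ.mean()                        # coverage: one isotherm point
```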
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution
NASA Astrophysics Data System (ADS)
Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa
2018-03-01
A problem often faced by industries that manage and distribute vegetables is how to distribute them so that their quality is maintained properly. The issues encountered include optimal route selection and short travel time, i.e. the Traveling Salesman Problem (TSP). These problems can be modeled as a Vehicle Routing Problem (VRP) solved with a genetic algorithm using rank-based selection, order-based crossover, and order-based mutation on the selected chromosomes. This study is limited to 20 market points, 2 warehouse points (multi-compartment) and 5 vehicles. For one distribution run, a vehicle can serve at most 4 market points from 1 particular warehouse, and each vehicle can accommodate a capacity of only 100 kg.
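As a generic illustration of an order-based crossover on permutation chromosomes (a common operator in genetic-algorithm treatments of VRP/TSP, though not necessarily the authors' exact variant):

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX): copy a slice from parent 1, then fill the
    remaining positions with the missing genes in parent-2 order, so the
    child is always a valid permutation (i.e. a valid route)."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

# Example: cross two random routes over 8 hypothetical market points.
child = order_crossover(list(range(8)), random.sample(range(8), 8))
```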
Numerical reconstruction of tsunami source using combined seismic, satellite and DART data
NASA Astrophysics Data System (ADS)
Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor
2014-05-01
Recent tsunamis, for instance in Japan (2011), in Sumatra (2004), and at the Indian coast (2004), showed that a system producing exact and timely information about tsunamis is of vital importance. Numerical simulation is an effective instrument for providing such information. Bottom relief characteristics and the initial perturbation data (a tsunami source) are required for the direct simulation of tsunamis. The seismic data about the source are usually obtained within a few tens of minutes after an event has occurred (the velocity of seismic waves being about five hundred kilometres per minute, while the velocity of tsunami waves is less than twelve kilometres per minute). The difference in the arrival times of seismic and tsunami waves can be used when operationally refining the tsunami source parameters and modelling the expected tsunami wave height on the shore. The most suitable physical models for tsunami simulation are based on the shallow water equations. The problem of identifying the parameters of a tsunami source using additional measurements of a passing wave is called the inverse tsunami problem. We investigate three different inverse problems of determining a tsunami source using three different types of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements, satellite wave-form images and seismic data. These problems are severely ill-posed. We apply regularization techniques to control the degree of ill-posedness, such as Fourier expansion, truncated singular value decomposition and numerical regularization. The algorithm for selecting the truncation level of the singular values of the inverse problem operator, consistent with the error level in the measured data, is described and analyzed. In the numerical experiments we used gradient methods (Landweber iteration and the conjugate gradient method) for solving the inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function. To calculate the gradient of the misfit function, the adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed. We show that using a combination of the three different types of data allows one to increase the stability and efficiency of the tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Informap software development department, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions under wave run-ups and earthquakes. Special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.
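A minimal sketch of the truncated SVD regularization and the truncation-selection rule described above, for a generic discretized forward operator G; recomputing the SVD inside the loop is wasteful but keeps the sketch short, and the discrepancy-style stopping test is one reading of matching the truncation level to the error in the measured data.

```python
import numpy as np

def tsvd_solve(G, d, k):
    """Regularized solution of G m = d keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

def choose_truncation(G, d, noise_level):
    """Smallest truncation level whose residual falls to the data-error level
    (a discrepancy-principle reading of the selection rule described above)."""
    for k in range(1, min(G.shape) + 1):
        m = tsvd_solve(G, d, k)
        if np.linalg.norm(G @ m - d) <= noise_level:
            break
    return k, m
```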
On the inversion-indel distance
2013-01-01
Background The inversion distance, that is the distance between two unichromosomal genomes with the same content allowing only inversions of DNA segments, can be computed thanks to a pioneering approach of Hannenhalli and Pevzner in 1995. In 2000, El-Mabrouk extended the inversion model to allow the comparison of unichromosomal genomes with unequal contents, thus allowing insertions and deletions of DNA segments besides inversions. However, an exact algorithm was presented only for the case in which we have insertions alone and no deletions (or vice versa), while a heuristic was provided for the symmetric case, which allows both insertions and deletions and is called the inversion-indel distance. In 2005, Yancopoulos, Attie and Friedberg started a new branch of research by introducing the generic double cut and join (DCJ) operation, which can represent several genome rearrangements (including inversions). Among others, the DCJ model gave rise to two important results. First, it has been shown that the inversion distance can be computed in a simpler way with the help of the DCJ operation. Second, the DCJ operation originated the DCJ-indel distance, which allows the comparison of genomes with unequal contents, considering DCJ, insertions and deletions, and can be computed in linear time. Results In the present work we put these two results together to solve an open problem, showing that, when the graph that represents the relation between the two compared genomes has no bad components, the inversion-indel distance is equal to the DCJ-indel distance. We also give a lower and an upper bound for the inversion-indel distance in the presence of bad components. PMID:24564182
Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm
NASA Astrophysics Data System (ADS)
Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun
2017-12-01
At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global search. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
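A bare-bones sketch of the frog-leaping search loop; the memeplex assignment, leap rules, and random re-initialization fallback follow the standard SFLA recipe, and in the paper's setting the objective f would be the dispersion-curve misfit rather than the toy function used below.

```python
import numpy as np

def sfla(f, lo, hi, n_frogs=30, n_plex=5, iters=200, rng=None):
    """Minimal shuffled frog-leaping sketch for minimizing f over a box."""
    rng = rng or np.random.default_rng(0)
    frogs = rng.uniform(lo, hi, size=(n_frogs, lo.size))
    for _ in range(iters):
        frogs = frogs[np.argsort([f(x) for x in frogs])]   # sort: best first
        for m in range(n_plex):
            plex = frogs[m::n_plex]            # strided memeplex (view into frogs)
            worst = plex[-1].copy()
            cand = np.clip(worst + rng.random() * (plex[0] - worst), lo, hi)
            if f(cand) >= f(worst):            # local-best leap failed:
                cand = np.clip(worst + rng.random() * (frogs[0] - worst), lo, hi)
                if f(cand) >= f(worst):        # global-best leap failed too:
                    cand = rng.uniform(lo, hi) # re-initialize the frog
            plex[-1] = cand
    return frogs[0]

# Toy usage: minimize a 4-D sphere function over [-5, 5]^4.
best = sfla(lambda v: float(np.sum(v ** 2)), np.full(4, -5.0), np.full(4, 5.0))
```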
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic-data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a roughly tenfold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
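A compact sketch of the reconstruction step, assuming scikit-learn is available: the tower footprint matrix H, the orthonormal wavelet matrix W, and the boolean mask used to confine emissions to a region are all stand-ins, and the mask is a much cruder device than the compressive-sensing boundary treatment the authors describe.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_emission_estimate(H, y, W, alpha=1e-3, mask=None):
    """Sketch: y = H f + noise with f = W.T w and w sparse (wavelet weights).
    An l1 (Lasso) fit performs the data-driven sparsification of w."""
    A = H @ W.T                                # sensitivity w.r.t. wavelet weights
    if mask is not None:
        A = A[:, mask]                         # keep wavelets inside the region
    fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(A, y)
    w = np.zeros(W.shape[0])
    w[mask if mask is not None else slice(None)] = fit.coef_
    return W.T @ w                             # emission field on the grid
```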
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Understanding and Managing Menopause | NIH MedlinePlus the Magazine
... of life," is different for each woman. For example, hot flashes and sleep problems may trouble your ... menopause. So can some types of operations. For example, surgery to remove your uterus (called a hysterectomy) ...
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
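For orientation, here is about the simplest density-to-potential inversion one can write for a toy 1D system: a naive fixed-point update of the potential, not the PDE-constrained or variational schemes of the tutorial, and with convergence behavior that depends delicately on the step size alpha.

```python
import numpy as np

def invert_ks_1d(n_target, x, n_occ=1, alpha=5.0, n_iter=2000):
    """Adjust v on a grid until the density of the n_occ lowest doubly occupied
    orbitals of -0.5*d^2/dx^2 + v matches n_target (Dirichlet boundaries).
    n_target must integrate to 2*n_occ electrons."""
    h = x[1] - x[0]
    off = np.ones(x.size - 1)
    T = (np.diag(np.ones(x.size)) - 0.5 * (np.diag(off, 1) + np.diag(off, -1))) / h**2
    v = np.zeros(x.size)
    for _ in range(n_iter):
        _, psi = np.linalg.eigh(T + np.diag(v))    # solve the forward problem
        psi /= np.sqrt(h)                          # grid normalization
        n = 2.0 * (psi[:, :n_occ] ** 2).sum(axis=1)  # closed-shell density
        v += alpha * (n - n_target)                # raise v where density is high
    return v
```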
Isometric deformations of planar quadrilaterals with constant index
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaputryaeva, E S
We consider isometric deformations (motions) of polygons (the so-called carpenter's rule problem) in the case of self-intersecting polygons with the additional condition that the index of the polygon is preserved by the motion. We provide general information about isometric deformations of planar polygons and give a complete solution of the carpenter's problem for quadrilaterals. Bibliography: 17 titles.
Quantum Zeno Effect in the Measurement Problem
NASA Technical Reports Server (NTRS)
Namiki, Mikio; Pasaczio, Saverio
1996-01-01
Critically analyzing the so-called quantum Zeno effect in the measurement problem, we show that observation of this effect does not necessarily mean experimental evidence for the naive notion of wave-function collapse by measurement (the simple projection rule). We also examine what kind of limitation the uncertainty relation and others impose on the observation of the quantum Zeno effect.
Scientific Paradigms and Falsification: Kuhn, Popper, and Problems in Education Research
ERIC Educational Resources Information Center
Hyslop-Margison, Emery James
2010-01-01
By examining the respective contributions of Karl Popper and Thomas Kuhn to the philosophy of science, the author highlights some prevailing problems with the methods of so-called scientific research in education. The author enumerates a number of reasons why such research, in spite of its limited tangible return, continues to gain…
Revisiting Data Related to the Age of Onset and Developmental Course of Female Conduct Problems
ERIC Educational Resources Information Center
Brennan, Lauretta M.; Shaw, Daniel S.
2013-01-01
Children who exhibit persistently elevated levels of conduct problems (CP) from early childhood, so-called early-starters, are known to be at increased risk for continued CP throughout middle childhood, adolescence, and adulthood. Theoretical and empirical work has focused on this subgroup of children characterized by similar risk factors, an…
Problem Solving in the Borderland between Mathematics and Physics
ERIC Educational Resources Information Center
Jensen, Jens Højgaard; Niss, Martin; Jankvist, Uffe Thomas
2017-01-01
The article addresses the problématique of where mathematization is taught in the educational system, and who teaches it. Mathematization is usually not a part of mathematics programs at the upper secondary level, but we argue that physics teaching has something to offer in this respect, if it focuses on solving so-called unformalized problems,…
"Needle and Stick" Save the World: Sustainable Development and the Universal Child
ERIC Educational Resources Information Center
Dahlbeck, Johan; De Lucia Dahlbeck, Moa
2012-01-01
This text deals with a problem concerning processes of the productive power of knowledge. We draw on the so-called poststructural theories challenging the classical image of thought--as hinged upon a representational logic identifying entities in a rigid sense--when formulating a problem concerning the gap between knowledge and the object of…
FOREWORD: Imaging from coupled physics Imaging from coupled physics
NASA Astrophysics Data System (ADS)
Arridge, S. R.; Scherzer, O.
2012-08-01
Due to the increased demand for tomographic imaging in applied sciences, such as medicine, biology and nondestructive testing, the field has expanded enormously in the past few decades. The common task of tomography is to image the interior of three-dimensional objects from indirect measurement data. In practical realizations, the specimen to be investigated is exposed to probing fields. A variety of these, such as acoustic, electromagnetic or thermal radiation, amongst others, have been advocated in the literature. In all cases, the field is measured after interaction with internal mechanisms of attenuation and/or scattering and images are reconstructed using inverse problems techniques, representing spatial maps of the parameters of these perturbation mechanisms. In the majority of these imaging modalities, either the useful contrast is of low resolution, or high resolution images are obtained with limited contrast or quantitative discriminatory ability. In the last decade, an alternative phenomenon has become of increasing interest, although its origins can be traced much further back; see Widlak and Scherzer [1], Kuchment and Steinhauer [2], and Seo et al [3] in this issue for references to this historical context. Rather than using the same physical field for probing and measurement, with a contrast caused by perturbation, these methods exploit the generation of a secondary physical field which can be measured in addition to, or without, the often dominating effect of the primary probe field. These techniques are variously called 'hybrid imaging' or 'multimodality imaging'. However, in this article and special section we suggest the term 'imaging from coupled physics' (ICP) to more clearly distinguish this methodology from those that simply measure several types of data simultaneously. The key idea is that contrast induced by one type of radiation is read by another kind, so that both high resolution and high contrast are obtained simultaneously. As with all new imaging techniques, the discovery of physical principles which can be exploited to yield information about internal physical parameters has led, hand in hand, to the development of new mathematical methods for solving the corresponding inverse problems. In many cases, the coupled physics imaging problems are expected to be much better posed than conventional tomographical imaging problems. Still, at the current state of research, there exist a variety of open mathematical questions regarding uniqueness, existence and stability. In this special section we have invited contributions from many of the leading researchers in the mathematics, physics and engineering of these techniques to survey and to elaborate on these novel methodologies, and to present recent research directions. Historically, one of the best studied strongly ill-posed problems in the mathematical literature is the Calderón problem occurring in conductivity imaging, and one of the first examples of ICP is the use of magnetic resonance imaging (MRI) to detect internal current distributions. This topic, known as current density imaging (CDI) or magnetic resonance electrical impedance tomography (MREIT), and its related technique of magnetic resonance electrical property tomography (MREPT), is reviewed by Widlak and Scherzer [1], and also by Seo et al [3], where experimental studies are documented.
Mathematically, several of the ICP problems can be analyzed in terms of the 'p-Laplacian', which raises interesting research questions about non-linear partial differential equations. One approach for analyzing and for the solution of the CDI problem, using characteristics of the 1-Laplacian, is discussed by Tamasan and Veras [4]. Moreover, Moradifam et al [5] present a novel iterative algorithm based on Bregman splitting for solving the CDI problem. Probably the most active research areas in ICP are related to acoustic detection, because most of these techniques rely on the photoacoustic effect wherein absorption of an ultrashort pulse of light, having propagated by multiple scattering some distance into a diffusing medium, generates a source of acoustic waves that are propagated with hyperbolic stability to a surface detector. A complementary problem is that of 'acousto-optics' which uses focussed acoustic waves as the primary field to induce perturbations in optical or electrical properties, which are thus spatially localized. Similar physical principles apply to implement ultrasound modulated electrical impedance tomography (UMEIT). These topics are included in the review of Widlak and Scherzer [1], and Kuchment and Steinhauer [2] offer a general analysis of their structure in terms of pseudo-differential operators. 'Acousto-electrical' imaging is analyzed as a particular case by Ammari et al [6]. In the paper by Tarvainen et al [7], the photo-acoustic problem is studied with respect to different models of the light propagation step. In the paper by Monard and Bal [8], a more general problem for the reconstruction of an anisotropic diffusion parameter from power density measurements is considered; here, issues of uniqueness with respect to the number of measurements are of great importance. A distinctive, and highly important, example of ICP is that of elastography, in which the primary field is low-frequency ultrasound giving rise to mechanical displacement that reveals information on the local elasticity tensor. As in all the methods discussed in this section, this contrast mechanism is measured internally, with a secondary technique, which in this case can be either MRI or ultrasound. McLaughlin et al [9] give a comprehensive analysis of this problem. Our intention for this special section was to provide both an overview and a snapshot of current work in this exciting area. The increasing interest, and the involvement of cross-disciplinary groups of scientists, will continue to lead to the rapid expansion and important new results in this novel area of imaging science. References [1] Widlak T and Scherzer O 2012 Inverse Problems 28 084008 [2] Kuchment P and Steinhauer D 2012 Inverse Problems 28 084007 [3] Seo J K, Kim D-H, Lee J, Kwon O I, Sajib S Z K and Woo E J 2012 Inverse Problems 28 084002 [4] Tamasan A and Veras J 2012 Inverse Problems 28 084006 [5] Moradifam A, Nachman A and Timonov A 2012 Inverse Problems 28 084003 [6] Ammari H, Garnier J and Jing W 2012 Inverse Problems 28 084005 [7] Tarvainen T, Cox B T, Kaipio J P and Arridge S R 2012 Inverse Problems 28 084009 [8] Monard F and Bal G 2012 Inverse Problems 28 084001 [9] McLaughlin J, Oberai A and Yoon J R 2012 Inverse Problems 28 084004
NASA Astrophysics Data System (ADS)
Vinson, Benjamin R.; Chiang, Eugene
2018-03-01
The behaviour of an interior test particle in the secular three-body problem has been studied extensively. A well-known feature is the Lidov-Kozai resonance in which the test particle's argument of periastron librates about ±90° and large oscillations in eccentricity and inclination are possible. Less explored is the inverse problem: the dynamics of an exterior test particle and an interior perturber. We survey numerically the inverse secular problem, expanding the potential to hexadecapolar order and correcting an error in the published expansion. Four secular resonances are uncovered that persist in full N-body treatments (in what follows, ϖ and Ω are the longitudes of periapse and of ascending node, ω is the argument of periapse, and subscripts 1 and 2 refer to the inner perturber and the outer test particle): (i) an orbit-flipping quadrupole resonance requiring a non-zero perturber eccentricity e1, in which Ω2 - ϖ1 librates about ±90°; (ii) a hexadecapolar resonance (the `inverse Kozai' resonance) for perturbers that are circular or nearly so and inclined by I ≃ 63°/117°, in which ω2 librates about ±90° and which can vary the particle eccentricity by Δe2 ≃ 0.2 and lead to orbit crossing; (iii) an octopole `apse-aligned' resonance at I ≃ 46°/107° wherein ϖ2 - ϖ1 librates about 0° and Δe2 grows with e1; and (iv) an octopole resonance at I ≃ 73°/134° wherein ϖ2 + ϖ1 - 2Ω2 librates about 0° and Δe2 can be as large as 0.3 for small but non-zero e1. Qualitatively, the more eccentric the perturber, the more the particle's eccentricity and inclination vary; also, more polar orbits are more chaotic. Our solutions to the inverse problem have potential application to the Kuiper belt and debris discs, circumbinary planets, and hierarchical stellar systems.
NASA Astrophysics Data System (ADS)
Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.
2014-12-01
One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to judge, qualitatively and quantitatively, how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data of a recent large earthquake and comparing the results with deterministically derived kinematic source models provided by other research groups.
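For orientation, here is a bare-bones adaptive Metropolis sampler, the 'AM' ingredient of DRAM, applied to a user-supplied log-posterior; the delayed-rejection stage and everything QUESO layers on top are omitted, and the adaptation schedule is illustrative.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n=20000, s0=0.1, rng=None):
    """Bare-bones adaptive Metropolis: the proposal covariance is adapted
    from the chain history. A sketch, not QUESO itself."""
    rng = rng or np.random.default_rng(0)
    d = x0.size
    chain = np.empty((n, d))
    chain[0] = x0
    lp = log_post(x0)
    cov = s0 ** 2 * np.eye(d)
    for i in range(1, n):
        prop = rng.multivariate_normal(chain[i - 1], cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
            chain[i], lp = prop, lp_prop
        else:
            chain[i] = chain[i - 1]
        if i > 500:                                  # adapt after a burn-in period
            cov = (2.38 ** 2 / d) * np.cov(chain[:i].T) + 1e-8 * np.eye(d)
    return chain
```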
NASA Astrophysics Data System (ADS)
Fernández-López, Sheila; Carrera, Jesús; Ledo, Juanjo; Queralt, Pilar; Luquot, Linda; Martínez, Laura; Bellmunt, Fabián
2016-04-01
Seawater intrusion in aquifers is a complex phenomenon that can be characterized with the help of electrical resistivity tomography (ERT) because of the low resistivity of seawater, which underlies the freshwater floating on top. The problem is complex because of the need for joint inversion of electrical and hydraulic (density-dependent flow) data. Here we present an adjoint-state algorithm to treat electrical data. This method is a common technique to obtain the derivatives of an objective function, which depends on the potentials, with respect to model parameters. Its main advantages are its simplicity in stationary problems and its reduced computational cost compared with other methodologies. The relationship between the concentration of chlorides and the resistivity values of the field is well known. These resistivities are in turn related to the potentials measured using ERT. Taking this into account, it is possible to delineate the different resistivity zones from the field data on the potential distribution by solving the inverse problem. The studied zone is situated in Argentona (Baix Maresme, Catalonia), where the chloride concentrations measured in some wells of the zone are too high. The adjoint-state method will be used to invert the measured data using a new finite element code written in C++ within an open-source framework called Kratos. Finally, the information obtained numerically with our code will be checked against the information obtained with other codes.
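The core of the adjoint-state trick can be stated in a few lines for a generic discretized system K(m)u = f with observation operator Q; the sketch below (plain dense NumPy, with dK_dm a list of parameter derivatives of K) shows why a single extra linear solve yields the gradient with respect to all parameters at once, which is the cost reduction mentioned above.

```python
import numpy as np

def adjoint_gradient(K, dK_dm, f, Q, d_obs):
    """Adjoint-state gradient of J(m) = 0.5*||Q u - d_obs||^2 with K(m) u = f.
    One forward solve plus one adjoint solve gives dJ/dm for every parameter,
    instead of one extra forward solve per parameter."""
    u = np.linalg.solve(K, f)                             # forward: potentials
    lam = np.linalg.solve(K.T, Q.T @ (Q @ u - d_obs))     # adjoint state
    return np.array([-lam @ (dKi @ u) for dKi in dK_dm])  # dJ/dm_i
```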
On the Definition of Mass in Mechanics: Why Is It so Difficult?
ERIC Educational Resources Information Center
Coelho, Ricardo Lopes
2012-01-01
In spite of the concerted efforts of physicists, philosophers, mathematicians, and logicians, no final clarification of the concept of mass has been reached. So concludes Jammer in his book on the history of the concept. The Nobel laureate Wilczek called our attention to the problem in his papers on the concepts of the fundamental equation of…
Recurrent Neural Network for Computing the Drazin Inverse.
Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin
2015-11-01
This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The network is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and examples of application to practical engineering problems are discussed to show the efficacy of the proposed neural network.
Conformational space annealing scheme in the inverse design of functional materials
NASA Astrophysics Data System (ADS)
Kim, Sunghyun; Lee, In-Ho; Lee, Jooyoung; Oh, Young Jun; Chang, Kee Joo
2015-03-01
Recently, the so-called inverse method has drawn much attention, in which specific electronic properties are first prescribed and target materials are subsequently searched for. In this work, we develop a new scheme for the inverse design of functional materials, in which the conformational space annealing (CSA) algorithm for global optimization is combined with first-principles density functional calculations. To implement the CSA, we need a series of ingredients: (i) an objective function to minimize, (ii) a 'distance' measure between two conformations, (iii) a local enthalpy minimizer for a given conformation, (iv) ways to combine two parent conformations to generate a daughter one, (v) a special conformation update scheme, and (vi) an annealing method along the 'distance' parameter axis. We show results of applications to searching for Si crystals with direct band gaps and for the lowest-enthalpy phase of boron at finite pressure, and discuss the efficiency of the present scheme. This work is supported by the National Research Foundation of Korea (NRF) under Grant No. NRF-2005-0093845 and by Samsung Science and Technology Foundation under Grant No. SSTFBA1401-08.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Alexandrov, B.
2014-12-01
The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. Source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with a k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones records a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify the sources as barometric pressure and water-supply pumping effects and estimate their impacts. We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
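A compact sketch of the NMFk idea using scikit-learn: run NMF repeatedly for each candidate number of sources r, cluster the normalized source signatures, and judge r by cluster reproducibility. X must be non-negative, and the inertia-based score is a crude stand-in for the clustering statistics the actual NMFk uses.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def nmfk(X, r_max=6, n_runs=10):
    """Sketch: pick the number of sources r whose NMF solutions are most
    reproducible across random restarts (tightest k-means clusters)."""
    scores = {}
    for r in range(1, r_max + 1):
        W_all = []
        for seed in range(n_runs):
            model = NMF(n_components=r, init='random', random_state=seed,
                        max_iter=500)
            W = model.fit_transform(X)                 # mixed signals -> sources
            W_all.append(W / (np.linalg.norm(W, axis=0, keepdims=True) + 1e-12))
        S = np.hstack(W_all).T                         # all candidate signatures
        km = KMeans(n_clusters=r, n_init=10, random_state=0).fit(S)
        scores[r] = -km.inertia_                       # tighter clusters = better
    return max(scores, key=scores.get)
```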
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
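The unregularized core of such an iteration can be sketched as a multiplicative update that preserves non-negativity (the same fixed point as the EM/Richardson-Lucy family); the penalties and one-step-late correction discussed in the abstract are omitted here, and the grids and scaling are illustrative.

```python
# Multiplicative minimum I-divergence iteration for a toy inverse blackbody
# problem: recover the area-temperature distribution a(T) from the power
# spectrum g = K a, with K given by Planck's law (noise-free data here).
import numpy as np

h, k = 6.626e-34, 1.381e-23
T = np.linspace(100.0, 1000.0, 200)                 # temperature grid (K)
nu = np.linspace(1e12, 6e13, 300)                   # frequency grid (Hz)
K = (nu[:, None] ** 3) / (np.exp(h * nu[:, None] / (k * T[None, :])) - 1.0)
K /= K.max()                                        # Planck kernel (scaled)

a_true = np.exp(-0.5 * ((T - 400.0) / 60.0) ** 2)   # true distribution
g = K @ a_true                                      # measured power spectrum

a = np.ones_like(T)                                 # non-negative initial estimate
for _ in range(500):
    ratio = g / (K @ a + 1e-30)
    a *= (K.T @ ratio) / (K.T @ np.ones_like(g))    # multiplicative update

print("relative L2 error:", np.linalg.norm(a - a_true) / np.linalg.norm(a_true))
```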
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on the routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through its mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of a single computation, due to the randomness of the genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
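A compact, purely illustrative sketch of the hybrid-encoding idea, with a toy three-parameter misfit standing in for the geophysical forward model: crossover acts on the binary code, mutation on a decimal digit. All constants and helper names are assumptions.

```python
# Hybrid-encoding GA sketch: binary multi-point crossover, decimal mutation.
import random

random.seed(0)
NDIGITS, NBITS = 4, 14               # genes are integers in [0, 9999]
TARGET = [3141, 2718, 1618]          # "true model" of the toy misfit

def fitness(ind):                    # toy objective: negative misfit
    return -sum((g - t) ** 2 for g, t in zip(ind, TARGET))

def binary_crossover(a, b):          # multi-point crossover in the binary code
    cut1, cut2 = sorted(random.sample(range(1, NBITS), 2))
    mask = ((1 << cut2) - 1) ^ ((1 << cut1) - 1)
    return [((x & ~mask) | (y & mask)) % 10 ** NDIGITS for x, y in zip(a, b)]

def decimal_mutation(ind, rate=0.1): # mutate one random decimal digit
    out = []
    for gene in ind:
        if random.random() < rate:
            digits = list(f"{gene:0{NDIGITS}d}")
            digits[random.randrange(NDIGITS)] = str(random.randrange(10))
            gene = int("".join(digits))
        out.append(gene)
    return out

pop = [[random.randrange(10 ** NDIGITS) for _ in range(3)] for _ in range(60)]
for generation in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:30]               # keep the fitter half
    children = [decimal_mutation(binary_crossover(random.choice(parents),
                                                  random.choice(parents)))
                for _ in range(30)]
    pop = parents + children

print("best individual:", max(pop, key=fitness))
# Averaging the best models over repeated runs, as the abstract suggests,
# smooths out the stochastic scatter of a single GA trial.
```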
SeisFlows: Flexible waveform inversion software
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Borisov, Dmitry; Lefebvre, Matthieu; Tromp, Jeroen
2018-06-01
SeisFlows is an open source Python package that provides a customizable waveform inversion workflow and framework for research in oil and gas exploration, earthquake tomography, medical imaging, and other areas. New methods can be rapidly prototyped in SeisFlows by inheriting from default inversion or migration classes, and code can be tested on 2D examples before application to more expensive 3D problems. Wave simulations must be performed using an external software package such as SPECFEM3D. The ability to interface with external solvers lends flexibility, and the choice of SPECFEM3D as a default option provides optional GPU acceleration and other useful capabilities. Through support for massively parallel solvers and interfaces for high-performance computing (HPC) systems, inversions with thousands of seismic traces and billions of model parameters can be performed. So far, SeisFlows has run on clusters managed by the Department of Defense, Chevron Corp., Total S.A., Princeton University, and the University of Alaska, Fairbanks.
Joint inversion of acoustic and resistivity data for the estimation of gas hydrate concentration
Lee, Myung W.
2002-01-01
Downhole log measurements, such as acoustic or electrical resistivity logs, are frequently used to estimate in situ gas hydrate concentrations in the pore space of sedimentary rocks. Usually the gas hydrate concentration is estimated separately from each log measurement. However, the measurements are related to each other through the gas hydrate concentration, so the concentration can be estimated by jointly inverting the available logs. Because the magnitudes of acoustic slowness and resistivity values differ by more than an order of magnitude, a least-squares method weighted by the inverse of the observed values is attempted. Estimating the resistivity of connate water and the gas hydrate concentration simultaneously is problematic, because the resistivity of connate water is independent of the acoustic measurements. To overcome this problem, a coupling constant is introduced into the Jacobian matrix. When different logs are used to estimate gas hydrate concentration, a joint inversion of the different measurements is preferable to averaging the individual inversion results.
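The weighting and iteration described above can be illustrated with a one-parameter toy: two synthetic "logs" of very different magnitude are jointly fit by Gauss-Newton with residuals weighted by the inverse of the observed values. The forward relations below are simplified stand-ins, not Lee's equations.

```python
# Toy joint inversion of two log types with inverse-of-observation weighting.
import numpy as np

def forward(s):                          # s: gas hydrate saturation (0..1)
    slowness = 200.0 - 120.0 * s         # acoustic slowness (toy linear model)
    resistivity = 2.0 / (1.0 - s) ** 2   # Archie-like resistivity (toy)
    return np.array([slowness, resistivity])

s_true = 0.4
d_obs = forward(s_true)
W = np.diag(1.0 / d_obs)                 # inverse-of-observation weighting

s = 0.1                                  # initial guess; Gauss-Newton iterations
for _ in range(20):
    r = d_obs - forward(s)
    eps = 1e-6                           # numerical Jacobian d(forward)/ds
    J = ((forward(s + eps) - forward(s - eps)) / (2 * eps)).reshape(-1, 1)
    ds, *_ = np.linalg.lstsq(W @ J, W @ r, rcond=None)
    s += float(ds[0])

print(f"estimated saturation: {s:.4f} (true {s_true})")
```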
Fraass, Benedick A.; Steers, Jennifer M.; Matuszak, Martha M.; McShan, Daniel L.
2012-01-01
Purpose: Inverse planned intensity modulated radiation therapy (IMRT) has helped many centers implement highly conformal treatment planning with beamlet-based techniques. The many comparisons between IMRT and 3D conformal (3DCRT) plans, however, have been limited because most 3DCRT plans are forward-planned while IMRT plans utilize inverse planning, meaning both optimization and delivery techniques are different. This work avoids that problem by comparing 3D plans generated with a unique inverse planning method for 3DCRT called inverse-optimized 3D (IO-3D) conformal planning. Since IO-3D and the beamlet IMRT to which it is compared use the same optimization techniques, cost functions, and plan evaluation tools, direct comparisons between IMRT and simple, optimized IO-3D plans are possible. Though IO-3D has some similarity to direct aperture optimization (DAO), since it directly optimizes the apertures used, IO-3D is specifically designed for 3DCRT fields (i.e., 1-2 apertures per beam) rather than starting with IMRT-like modulation and then optimizing aperture shapes. The two algorithms are very different in design, implementation, and use. The goals of this work include using IO-3D to evaluate how close simple but optimized IO-3D plans come to unconstrained beamlet IMRT, showing that optimization, rather than modulation, may be the most important aspect of IMRT (for some sites). Methods: The IO-3D dose calculation and optimization functionality is integrated in the in-house 3D planning/optimization system. New features include random point dose calculation distributions, costlet and cost function capabilities, fast dose volume histogram (DVH) and plan evaluation tools, optimization search strategies designed for IO-3D, and an improved, reimplemented edge/octree calculation algorithm. The IO-3D optimization, in distinction to DAO, is designed to optimize 3D conformal plans (one to two segments per beam) and optimizes MLC segment shapes and weights with various user-controllable search strategies which optimize plans without beamlet or pencil beam approximations. IO-3D allows comparisons of beamlet, multisegment, and conformal plans optimized using the same cost functions, dose points, and plan evaluation metrics, so quantitative comparisons are straightforward. Here, comparisons of IO-3D and beamlet IMRT techniques are presented for breast, brain, liver, and lung plans. Results: IO-3D achieves high quality results comparable to beamlet IMRT for many situations. Though the IO-3D plans have many fewer degrees of freedom for the optimization, this work finds that IO-3D plans with only one to two segments per beam are dosimetrically equivalent (or nearly so) to the beamlet IMRT plans for several sites. IO-3D also reduces plan complexity significantly. Here, monitor units per fraction (MU/Fx) for IO-3D plans were 22%-68% less than those for the 1 cm × 1 cm beamlet IMRT plans and 72%-84% less than those for the 0.5 cm × 0.5 cm beamlet IMRT plans. Conclusions: The unique IO-3D algorithm illustrates that inverse planning can achieve high quality 3D conformal plans equivalent (or nearly so) to unconstrained beamlet IMRT plans, for many sites. IO-3D thus provides the potential to optimize flat or few-segment 3DCRT plans, creating less complex optimized plans which are efficient and simple to deliver. The less complex IO-3D plans have operational advantages for scenarios including adaptive replanning, cases with interfraction and intrafraction motion, and pediatric patients. PMID:22755717
NASA Astrophysics Data System (ADS)
Pedesseau, Laurent; Jouanna, Paul
2004-12-01
The SASP (semianalytical stochastic perturbations) method is an original mixed macro-nano approach dedicated to the mass equilibrium of multispecies phases, periphases, and interphases. This general method, applied here to the reflexive relation Ck⇔μk between the concentrations Ck and the chemical potentials μk of k species within a fluid in equilibrium, leads to the distribution of the particles at the atomic scale. The macro aspects of the method, based on analytical Taylor expansions of the chemical potentials, are intimately mixed with the nano aspects of molecular mechanics computations on stochastically perturbed states. This numerical approach, directly linked to definitions, is universal in comparison with current approaches (DLVO Derjaguin-Landau-Verwey-Overbeek, grand canonical Monte Carlo, etc.), without any restriction on the number of species, concentrations, or boundary conditions. The determination of the relation Ck⇔μk in fact involves two problems: a direct problem Ck⇒μk and an inverse problem μk⇒Ck. Validation of the method is demonstrated in case studies A and B, which treat, respectively, a direct problem and an inverse problem within a free saturated gypsum solution. The flexibility of the method is illustrated in case study C, dealing with an inverse problem within a solution interphase confined between two (120) gypsum faces and remaining in connection with a reference solution. This last inverse problem leads to the mass equilibrium of ions and water molecules within a 3 Å thick gypsum interface. The major unexpected observation is the repulsion of SO4^2- ions towards the reference solution and the attraction of Ca^2+ ions from the reference solution, the concentration being 50 times higher within the interphase than in the free solution. The SASP method is today the unique approach able to tackle the simulation of the number and distribution of ions plus water molecules in such extremely confined conditions. This result is of prime importance for all coupled chemical-mechanical problems dealing with interfaces, and more generally for a wide variety of applications such as phase changes, osmotic equilibrium, and surface energy in complex chemical-physics situations.
The inverse problem to the evaluation of magnetic fields
NASA Astrophysics Data System (ADS)
Caspi, S.; Helm, M.; Laslett, L. J.; Brady, V.
1992-12-01
In the design of superconducting magnet elements, such as may be required to guide and focus ions in a particle accelerator, one frequently premises some particular current distribution and then proceeds to compute the consequent magnetic field through use of the laws of Biot and Savart or of Ampere. When working in this manner one may, of course, need to revise the postulated current distribution frequently before arriving at a resulting magnetic field of acceptable field quality. It therefore is of interest to consider an alternative ('inverse') procedure in which one specifies a desired character for the field required in the region interior to the winding and undertakes then to evaluate the current distribution on the specified winding surface that would provide this desired field. We note that in undertaking such an inverse procedure we would wish, on practical grounds, to avoid the use of any 'double-layer' distributions of current on the winding surface or interface, but would not demand that no fields be generated in the exterior region, so that in this respect the goal differs in detail from that discussed by other authors, in analogy to the distribution sought in electrostatics by the so-called Green's equivalent stratum.
New approach to wireless data communication in a propagation environment
NASA Astrophysics Data System (ADS)
Hunek, Wojciech P.; Majewski, Paweł
2017-10-01
This paper presents a new idea for perfect signal reconstruction in multivariable wireless communication systems with different numbers of transmitting and receiving antennas. The proposed approach is based on the polynomial matrix S-inverse associated with Smith factorization. Crucially, the above-mentioned inverse implements so-called degrees of freedom. A simulation study confirms that the degrees of freedom allow the negative impact of the propagation environment to be minimized, increasing the robustness of the whole signal reconstruction process. The parasitic drawbacks in the form of dynamic ISI and ICI effects can thus be eliminated in a framework described by polynomial calculus. Therefore, the new method not only reduces cost but, more importantly, potentially provides systems with lower energy consumption than classical ones. In order to show the potential of the new approach, simulation studies were performed with the authors' simulator based on the well-known OFDM technique.
Inhomogeneity and velocity fields effects on scattering polarization in solar prominences
NASA Astrophysics Data System (ADS)
Milić, I.; Faurobert, M.
2015-10-01
One of the methods for diagnosing vector magnetic fields in solar prominences is the so-called "inversion" of observed polarized spectral lines. This inversion usually assumes a fairly simple generative model, and in this contribution we aim to study the possible systematic errors introduced by this assumption. Using a two-dimensional toy model of a prominence, we first demonstrate the importance of multidimensional radiative transfer and horizontal inhomogeneities. These are able to induce a significant level of polarization in Stokes U without the need for a magnetic field. We then compute the emergent Stokes spectrum from a prominence pervaded by a vector magnetic field and use a simple, one-dimensional model to interpret these synthetic observations. We find that the inferred values of the magnetic field vector generally differ from the original ones. Most importantly, the magnetic field might appear more inclined than it really is.
NASA Technical Reports Server (NTRS)
Melbourne, William G.
1986-01-01
In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system, incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case, and also in the fully sampled case unless the colored noise statistics are taken into account. The latter approach requires a least-squares weighting matrix derived from the inversion of a non-diagonal covariance matrix for the differenced measurement errors, instead of the inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double-differenced regression system that yields equivalent estimation results but, for certain cases, features a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
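The estimation issue can be illustrated with generalized least squares on a toy differenced system: whitening with the Cholesky factor of the non-diagonal covariance is equivalent to weighting by its inverse, which ordinary least squares omits. All matrices below are invented for illustration.

```python
# OLS vs GLS on a regression with colored (differenced) measurement errors.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 3
A = rng.normal(size=(n, p))                       # regression (design) matrix
x_true = np.array([1.0, -2.0, 0.5])

# Differencing adjacent white-noise errors produces a banded covariance.
D = np.eye(n) - np.eye(n, k=1)
C = D @ D.T                                       # covariance of differenced errors
e = np.linalg.cholesky(C) @ rng.normal(size=n)
y = A @ x_true + 0.1 * e

x_ols = np.linalg.lstsq(A, y, rcond=None)[0]      # ignores the color: suboptimal

L = np.linalg.cholesky(C)                         # GLS by whitening with L
Aw = np.linalg.solve(L, A)
yw = np.linalg.solve(L, y)
x_gls = np.linalg.lstsq(Aw, yw, rcond=None)[0]    # implicitly weights by C^-1

print("OLS estimate:", x_ols)
print("GLS estimate:", x_gls)
```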
Dynamic inverse models in human-cyber-physical systems
NASA Astrophysics Data System (ADS)
Robinson, Ryan M.; Scobee, Dexter R. R.; Burden, Samuel A.; Sastry, S. Shankar
2016-05-01
Human interaction with the physical world is increasingly mediated by automation. This interaction is characterized by dynamic coupling between robotic (i.e. cyber) and neuromechanical (i.e. human) decision-making agents. Guaranteeing performance of such human-cyber-physical systems will require predictive mathematical models of this dynamic coupling. Toward this end, we propose a rapprochement between robotics and neuromechanics premised on the existence of internal forward and inverse models in the human agent. We hypothesize that, in tele-robotic applications of interest, a human operator learns to invert automation dynamics, directly translating from desired task to required control input. By formulating the model inversion problem in the context of a tracking task for a nonlinear control system in control-affine form, we derive criteria for exponential tracking and show that the resulting dynamic inverse model generally renders a portion of the physical system state (i.e., the internal dynamics) unobservable from the human operator's perspective. Under stability conditions, we show that the human can achieve exponential tracking without formulating an estimate of the system's state so long as they possess an accurate model of the system's dynamics. These theoretical results are illustrated using a planar quadrotor example. We then demonstrate that the automation can intervene to improve performance of the tracking task by solving an optimal control problem. Performance is guaranteed to improve under the assumption that the human learns and inverts the dynamic model of the altered system. We conclude with a discussion of practical limitations that may hinder exact dynamic model inversion.
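A scalar toy version of the dynamic inversion described above, assuming a control-affine plant x' = f(x) + g(x)u with g bounded away from zero; the "inverse model" maps the desired trajectory directly to the input, and the resulting tracking error decays exponentially.

```python
# Dynamic inversion (exact model) for a scalar control-affine tracking task.
import numpy as np

def f(x):
    return -np.sin(x)            # drift term (toy)

def g(x):
    return 2.0 + np.cos(x)       # input gain, bounded away from zero

dt, k = 1e-3, 5.0                # time step and tracking gain
x, err = 0.0, None
for t in np.arange(0.0, 5.0, dt):
    yd, yd_dot = np.sin(t), np.cos(t)        # desired task trajectory
    v = yd_dot + k * (yd - x)                # exponentially stable error dynamics
    u = (v - f(x)) / g(x)                    # inverse model: task -> control input
    x += dt * (f(x) + g(x) * u)              # integrate the plant
    err = abs(yd - x)

print("final tracking error:", err)          # shrinks with larger gain k
```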
NASA Astrophysics Data System (ADS)
Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan
2017-03-01
Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5 % of the true values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, Eldad
2014-03-17
The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahn, T.; Rolfes, R.; Jonkman, J.
A significant number of wind turbines installed today have reached their designed service life of 20 years, and the number will rise continuously. Most of these turbines promise a more economical performance if they operate for more than 20 years. To assess continued operation, we have to analyze the load-bearing capacity of the support structure with respect to site-specific conditions. Such an analysis requires the comparison of the loads used for the design of the support structure with the actual loads experienced. This publication presents the application of a so-called inverse load calculation to a 5-MW wind turbine support structure. The inverse load calculation determines external loads derived from a mechanical description of the support structure and from measured structural responses. Using numerical simulations with the software FAST, we investigated the influence of wind-turbine-specific effects, such as the wind turbine control or the dynamic interaction between the loads and the support structure, on the presented inverse load calculation procedure. FAST is used to study the inverse calculation of simultaneously acting wind and wave loads, which has not been carried out until now. Furthermore, the application of the inverse load calculation procedure to a real 5-MW wind turbine support structure is demonstrated. In terms of this practical application, setting up the mechanical system for the support structure using measurement data is discussed. The paper presents results for defined load cases and assesses the accuracy of the inversely derived dynamic loads for both the simulations and the practical application.
The Use of Original Sources and Its Potential Relation to the Recruitment Problem
ERIC Educational Resources Information Center
Jankvist, Uffe Thomas
2014-01-01
Based on a study about using original sources with Danish upper secondary students, the paper addresses the potential outcome of such an approach in regard to the so-called recruitment problem to the mathematical sciences. 24 students were exposed to questionnaire questions and 16 of these to follow-up interviews, which form the basis for both a…
Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrari, Giorgio, E-mail: giorgio.ferrari@uni-bielefeld.de; Riedel, Frank, E-mail: frank.riedel@uni-bielefeld.de; Steg, Jan-Henrik, E-mail: jsteg@uni-bielefeld.de
In this paper we study continuous-time stochastic control problems with both monotone and classical controls, motivated by the so-called public good contribution problem: the problem of n economic agents aiming to maximize their expected utility by allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner's optimal policy, characterize it by necessary and sufficient stochastic Kuhn-Tucker conditions, and provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first-order conditions prove to be very useful for studying the Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally provide a detailed analysis of the so-called free rider effect.
Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard
2014-09-03
Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
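A hedged sketch of the extension approach described above, assuming scipy is available: time becomes a third coordinate scaled by a factor c, and IDW runs in the extended space with a k-d tree limiting each query to its nearest samples. The scale factor, grids and fake PM2.5 field are illustrative.

```python
# Spatiotemporal IDW in the extended (x, y, c*t) space with a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
n = 5000
pts = np.column_stack([rng.uniform(0, 100, n),     # x (km)
                       rng.uniform(0, 100, n),     # y (km)
                       rng.uniform(0, 365, n)])    # t (days)
vals = 10 + 0.05 * pts[:, 0] + 3 * np.sin(2 * np.pi * pts[:, 2] / 365)  # fake PM2.5

c = 100.0 / 365.0                     # factor balancing temporal vs spatial distance
ext = pts * np.array([1.0, 1.0, c])   # extended space-time coordinates
tree = cKDTree(ext)

def idw(query_xyt, k=12, power=2.0):
    q = np.asarray(query_xyt, float) * np.array([1.0, 1.0, c])
    dist, idx = tree.query(q, k=k)    # k nearest space-time neighbours
    dist = np.maximum(dist, 1e-9)     # avoid division by zero at sample points
    w = dist ** (-power)
    return np.sum(w * vals[idx]) / np.sum(w)

print("PM2.5 estimate at (x=50, y=50, day=180):", idw([50.0, 50.0, 180.0]))
```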
Dynamic gamma knife radiosurgery
NASA Astrophysics Data System (ADS)
Luan, Shuang; Swanson, Nathan; Chen, Zhe; Ma, Lijun
2009-03-01
Gamma knife has been the treatment of choice for various brain tumors and functional disorders. Current gamma knife radiosurgery is planned in a 'ball-packing' approach and delivered in a 'step-and-shoot' manner, i.e. it aims to 'pack' the different sized spherical high-dose volumes (called 'shots') into a tumor volume. We have developed a dynamic scheme for gamma knife radiosurgery based on the concept of 'dose-painting' to take advantage of the new robotic patient positioning system on the latest Gamma Knife C™ and Perfexion™ units. In our scheme, the spherical high dose volume created by the gamma knife unit will be viewed as a 3D spherical 'paintbrush', and treatment planning reduces to finding the best route of this 'paintbrush' to 'paint' a 3D tumor volume. Under our dose-painting concept, gamma knife radiosurgery becomes dynamic, where the patient moves continuously under the robotic positioning system. We have implemented a fully automatic dynamic gamma knife radiosurgery treatment planning system, where the inverse planning problem is solved as a traveling salesman problem combined with constrained least-square optimizations. We have also carried out experimental studies of dynamic gamma knife radiosurgery and showed the following. (1) Dynamic gamma knife radiosurgery is ideally suited for fully automatic inverse planning, where high quality radiosurgery plans can be obtained in minutes of computation. (2) Dynamic radiosurgery plans are more conformal than step-and-shoot plans and can maintain a steep dose gradient (around 13% per mm) between the target tumor volume and the surrounding critical structures. (3) It is possible to prescribe multiple isodose lines with dynamic gamma knife radiosurgery, so that the treatment can cover the periphery of the target volume while escalating the dose for high tumor burden regions. (4) With dynamic gamma knife radiosurgery, one can obtain a family of plans representing a tradeoff between the delivery time and the dose distributions, thus giving the clinician one more dimension of flexibility of choosing a plan based on the clinical situations.
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion, including non-approximated physics and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte Carlo style methods that are computationally intractable for most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently solved (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully probabilistic global tomography model of the Earth's crust and mantle; and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality, and bias. This provides far greater confidence in the results, and in decisions made on their basis.
Robinson, Katherine M; Ninowski, Jerilyn E
2003-12-01
Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
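In consensus (ADMM) form, the alternation described above can be sketched on a two-subset toy problem; the quadratic component misfits and the penalty parameter are illustrative, not the authors' seismic objective functions.

```python
# Consensus decomposition of min f1(m) + f2(m) via an augmented Lagrangian:
# alternate separate subproblem solves with multiplier updates toward a
# common model z, and compare with the directly solved joint problem.
import numpy as np

rng = np.random.default_rng(4)
p = 4
m_true = rng.normal(size=p)
A1, A2 = rng.normal(size=(30, p)), rng.normal(size=(40, p))
d1, d2 = A1 @ m_true, A2 @ m_true              # two data subsets

rho = 1.0
z = np.zeros(p)                                # common (consensus) model
u1, u2 = np.zeros(p), np.zeros(p)              # scaled Lagrange multipliers
for _ in range(200):
    # separate component problems: min ||A m - d||^2 + (rho/2)||m - z + u||^2
    m1 = np.linalg.solve(A1.T @ A1 + rho * np.eye(p), A1.T @ d1 + rho * (z - u1))
    m2 = np.linalg.solve(A2.T @ A2 + rho * np.eye(p), A2.T @ d2 + rho * (z - u2))
    z = 0.5 * (m1 + u1 + m2 + u2)              # merge step toward a common model
    u1 += m1 - z                               # multiplier updates steer submodels
    u2 += m2 - z

direct = np.linalg.lstsq(np.vstack([A1, A2]),
                         np.concatenate([d1, d2]), rcond=None)[0]
print("consensus matches joint solution:", np.allclose(z, direct, atol=1e-6))
```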
Exponential Formulae and Effective Operations
NASA Technical Reports Server (NTRS)
Mielnik, Bogdan; Fernandez, David J. C.
1996-01-01
One of the standard methods to predict the phenomenon of squeezing consists in splitting the unitary evolution operator into a product of simpler operations. The technique, while mathematically general, is not so simple in applications and leaves some pragmatic problems open. We report an extended class of exponential formulae which yield a quicker insight into the laboratory details for a class of squeezing operations and, moreover, can alternatively be used to programme different types of operations, such as: (1) the free evolution inversion; and (2) the soft simulation of sharp kicks (so that all abstract results involving kicks of the oscillator potential become realistic laboratory prescriptions).
Mathematical Problems in Imaging in Random Media
2015-01-15
... of matrix Γ in [1], in the context of intensity-based imaging of remote sources in random waveguides ... (i.e., for which j we have Z > ε^{-2} S_j), and filters them out. It images by time reversing the received wave, weighting the modes based on their coherence ... (transport-based inversion), so we regularize to obtain (|ξ̂_1|²/β_1, ..., |ξ̂_N|²/β_N)^T ≈ Σ_{j=1}^{J} e^{|Λ_j|Z} (u_j^T B Q^{-1} M) u_j (31), for J chosen so that ...
An EGO-like optimization framework for sensor placement optimization in modal analysis
NASA Astrophysics Data System (ADS)
Morlier, Joseph; Basile, Aniello; Chiplunkar, Ankit; Charlotte, Miguel
2018-07-01
In aircraft design, ground/flight vibration tests are conducted to extract the aircraft's modal parameters (natural frequencies, damping ratios and mode shapes), also known as the modal basis. The main problem in aircraft modal identification is the large number of sensors needed, which increases operational time and costs. The goal of this paper is to minimize the number of sensors by optimizing their locations in order to reconstruct a truncated modal basis of N mode shapes with a high level of accuracy in the reconstruction. There are several methods to solve sensor placement optimization (SPO) problems, but for this case an original approach has been established, based on an iterative process for mode shape reconstruction through an adaptive Kriging metamodeling approach, the so-called efficient global optimization (EGO)-SPO. The main idea in this publication is to solve an optimization problem where the sensor locations are the variables and the objective function is defined by maximizing the trace of the so-called AutoMAC criterion. The results on a 2D wing demonstrate a reduction in sensors of 30% using our EGO-SPO strategy.
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report describes the current state of development of methods for calculating optimal orbital transfers with large numbers of burns. The first method reported is the homotopy-motivated, so-called direction correction method. So far this method has been partially tested with one solver; the final step has yet to be implemented. The second is the patched transfer method. This method is rooted in some simplifying approximations made to the original optimal control problem. The transfer is broken up into single-burn segments, each single burn solved as a predictor step and the whole problem then solved with a corrector step.
RNAfbinv: an interactive Java application for fragment-based design of RNA sequences.
Weinbrand, Lina; Avihoo, Assaf; Barash, Danny
2013-11-15
In RNA design problems, it is plausible to assume that the user would be interested in preserving a particular RNA secondary structure motif, or fragment, for biological reasons. The preservation could be in structure or sequence, or both. Thus, the inverse RNA folding problem could benefit from considering fragment constraints. We have developed a new interactive Java application, called RNA fragment-based inverse (RNAfbinv), that allows users to insert an RNA secondary structure in dot-bracket notation. It then performs sequence design that conforms to the shape of the input secondary structure, the specified thermodynamic stability, the specified mutational robustness and the user-selected fragment after shape decomposition. In this shape-based design approach, specific RNA structural motifs with known biological functions are strictly enforced, while others can possess more flexibility in their structure in favor of preserving physical attributes and additional constraints. RNAfbinv is freely available for download on the web at http://www.cs.bgu.ac.il/~RNAexinv/RNAfbinv. The site contains a help file with an explanation regarding the exact use.
On Some Troubles with the Metaphysics of Fermionic Compositions
NASA Astrophysics Data System (ADS)
Bigaj, Tomasz
2016-09-01
In this paper I discuss some metaphysical consequences of an unorthodox approach to the problem of the identity and individuality of "indistinguishable" quantum particles. This approach is based on the assumption that the only admissible way of individuating separate components of a given system is with the help of the permutation-invariant qualitative properties of the total system. Such a method of individuation, when applied to fermionic compositions occupying so-called GMW-nonentangled states, yields highly implausible consequences regarding the number of distinct components of a given composite system. I specify the problem (which I call the problem of fermionic inflation) in detail, and I consider several strategies of solving it. The preferred solution of the problem is based on the premise that spatial location should play a privileged role in identifying and making reference to quantum-mechanical systems.
High-speed GPU-based finite element simulations for NDT
NASA Astrophysics Data System (ADS)
Huthwaite, P.; Shi, F.; Van Pamel, A.; Lowe, M. J. S.
2015-03-01
The finite element method solved with explicit time increments is a general approach which can be applied to many ultrasound problems. It is widely used as a powerful tool within NDE for developing and testing inspection techniques, and can also be used in inversion processes. However, the solution technique is computationally intensive, requiring many calculations to be performed for each simulation, so traditionally speed has been an issue. For maximum speed, an implementation of the method, called Pogo [Huthwaite, J. Comp. Phys. 2014, doi: 10.1016/j.jcp.2013.10.017], has been developed to run on graphics cards, exploiting the highly parallelisable nature of the algorithm. Pogo typically demonstrates speed improvements of 60-90x over commercial CPU alternatives. Pogo is applied to three NDE examples, where the speed improvements are important: guided wave tomography, where a full 3D simulation must be run for each source transducer and every different defect size; scattering from rough cracks, where many simulations need to be run to build up a statistical model of the behaviour; and ultrasound propagation within coarse-grained materials where the mesh must be highly refined and many different cases run.
On regularizing the MCTDH equations of motion
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Wang, Haobin
2018-03-01
The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
Stress-Intensity Factors along Three-Dimensional Elliptical Crack Fronts
DOT National Transportation Integrated Search
1998-05-01
The objective of the present investigation is to determine the mode I stress-intensity factors along two symmetric surface cracks emanating from a centrally located hole in a rectangular plate (the so-called Round Robin Problem) using the domain inte...
Evaluation of concrete cover by surface wave technique: Identification procedure
NASA Astrophysics Data System (ADS)
Piwakowski, Bogdan; Kaczmarek, Mariusz; Safinowski, Paweł
2012-05-01
Concrete cover degradation is induced by aggressive agents in the environment, such as moisture, chemicals or temperature variations. Due to degradation, a thin surface layer (a few millimeters thick) usually has a porosity slightly higher than that of the deeper, sound material. The non-destructive evaluation of concrete cover is vital to monitor the integrity of concrete structures and prevent their irreversible damage. In this paper the methodology of the classical technique used for ground structure recovery, called Multichannel Analysis of Surface Waves, is discussed as an NDT tool in the civil engineering domain for characterizing the concrete cover. In order to obtain the velocity as a function of depth, the dispersion of surface waves is used as an input for solving the inverse problem. The paper describes the inversion procedure and provides a practical example of the use of the developed system.
Children's Understanding of the Arithmetic Concepts of Inversion and Associativity
ERIC Educational Resources Information Center
Robinson, Katherine M.; Ninowski, Jerilyn E.; Gray, Melissa L.
2006-01-01
Previous studies have shown that even preschoolers can solve inversion problems of the form a + b - b by using the knowledge that addition and subtraction are inverse operations. In this study, a new type of inversion problem of the form d x e [divided by] e was also examined. Grade 6 and 8 students solved inversion problems of both types as well…
Validation Studies of the Accuracy of Various SO2 Gas Retrievals in the Thermal InfraRed (8-14 μm)
NASA Astrophysics Data System (ADS)
Gabrieli, A.; Wright, R.; Lucey, P. G.; Porter, J. N.; Honniball, C.; Garbeil, H.; Wood, M.
2016-12-01
Quantifying hazardous SO2 in the atmosphere and in volcanic plumes is important for public health and volcanic eruption prediction. Remote sensing measurements of the spectral radiance of plumes contain information on the abundance of SO2. However, in order to convert such measurements into SO2 path-concentrations, reliable inversion algorithms are needed. Various techniques can be employed to derive SO2 path-concentrations. The first approach employs a Partial Least Squares Regression model trained using MODTRAN5 simulations for a variety of plume and atmospheric conditions. Radiances at many spectral wavelengths (8-14 μm) were used in the algorithm. The second algorithm uses measurements inside and outside the SO2 plume. Measurements in the plume-free region (background sky) make it possible to remove background atmospheric conditions and any instrumental effects. After atmospheric and instrumental effects are removed, MODTRAN5 is used to fit the SO2 spectral feature and obtain SO2 path-concentrations. The two inversion algorithms described above can be compared with the inversion algorithm for SO2 retrievals developed by Prata and Bernardo (2014). Their approach employs three wavelengths to characterize the plume temperature, the atmospheric background, and the SO2 path-concentration. The accuracy of these various techniques requires further investigation in terms of the effects of different atmospheric background conditions. Validating these inversion algorithms is challenging because ground-truth measurements are very difficult to obtain. However, if the three separate inversion algorithms provide similar SO2 path-concentrations for actual measurements with various background conditions, then this increases confidence in the results. Measurements of sky radiance when looking through SO2-filled gas cells were collected with a Thermal Hyperspectral Imager (THI) under various atmospheric background conditions. These data were processed using the three inversion approaches, which were tested for convergence on the known SO2 gas cell path-concentrations. For this study, the inversion algorithms were modified to account for the gas cell configuration. Results from these studies will be presented, as well as results from SO2 gas plume measurements at Kīlauea volcano, Hawai'i.
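A hedged sketch of the first approach, with a crude synthetic absorption model standing in for the MODTRAN5 training set; scikit-learn's PLSRegression maps 8-14 μm radiances to path-concentration. All spectral shapes and numbers are invented for illustration.

```python
# PLS regression from simulated thermal-IR spectra to SO2 path-concentration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
wl = np.linspace(8.0, 14.0, 120)                    # wavelengths (micrometres)
feature = np.exp(-0.5 * ((wl - 8.6) / 0.25) ** 2)   # fake SO2 feature near 8.6 um

n_train = 400
conc = rng.uniform(0.0, 2000.0, n_train)            # SO2 path-concentration (ppm m)
sky = 1.0 + 0.02 * (wl - 11.0) ** 2                 # smooth background radiance (toy)
X = sky[None, :] * np.exp(-1e-4 * conc[:, None] * feature[None, :])
X += 0.002 * rng.normal(size=X.shape)               # instrument noise

pls = PLSRegression(n_components=6).fit(X, conc)    # train on "simulated" spectra

true_c = 750.0
spec = sky * np.exp(-1e-4 * true_c * feature) + 0.002 * rng.normal(size=wl.size)
print("retrieved path-concentration:", pls.predict(spec[None, :]).ravel()[0])
```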
Designing a supply chain of ready-mix concrete using Voronoi diagrams
NASA Astrophysics Data System (ADS)
Kozniewski, E.; Orlowski, M.; Orlowski, Z.
2017-10-01
Voronoi diagrams are used to solve scientific and practical problems in many fields. In this paper Voronoi diagrams have been applied to logistic problems in construction, more specifically in the design of the ready-mix concrete supply chain. Apart from the Voronoi diagram, the so-called time-distance circle (circle of range), which in metric space terminology is simply a sphere, appears useful. It was introduced to solve the problem of supplying concrete-related goods.
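A small illustration of the geometric idea, assuming scipy is available: plants induce a Voronoi partition of demand points, and a "time-distance circle" bounds what each plant can serve within the allowed delivery time. Locations, speeds and times are invented.

```python
# Voronoi partition of ready-mix plants plus a time-distance (range) check.
import numpy as np
from scipy.spatial import Voronoi, cKDTree

plants = np.array([[10.0, 10.0], [40.0, 15.0], [25.0, 40.0], [45.0, 40.0]])
vor = Voronoi(plants)                        # Voronoi partition of the region
print("Voronoi vertices:\n", vor.vertices)

sites = np.random.default_rng(6).uniform(0.0, 50.0, (20, 2))   # construction sites
nearest = cKDTree(plants).query(sites)[1]    # nearest plant = Voronoi cell owner

speed_kmh, max_minutes = 40.0, 90.0          # truck speed and allowed delivery time
radius = speed_kmh * max_minutes / 60.0      # radius of the time-distance circle
dist = np.linalg.norm(sites - plants[nearest], axis=1)
for i, (p, d) in enumerate(zip(nearest, dist)):
    status = "within" if d <= radius else "OUTSIDE"
    print(f"site {i:2d} -> plant {p}, {d:5.1f} km, {status} the circle of range")
```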
A New Paradigm for Satellite Retrieval of Hydrologic Variables: The CDRD Methodology
NASA Astrophysics Data System (ADS)
Smith, E. A.; Mugnai, A.; Tripoli, G. J.
2009-09-01
Historically, retrieval of thermodynamically active geophysical variables in the atmosphere (e.g., temperature, moisture, precipitation) involved some type of inversion scheme - embedded within the retrieval algorithm - to transform radiometric observations (a vector) into the desired geophysical parameter(s) (either a scalar or a vector). Inversion is fundamentally a mathematical operation involving some type of integral-differential radiative transfer equation - often resisting a straightforward algebraic solution - in which the integral side of the equation (typically the right-hand side) contains the desired geophysical vector, while the left-hand side contains the radiative measurement vector, often free of operators. Inversion was considered more desirable than forward modeling because the forward model solution had to be selected from a generally unmanageable set of parameter-observation relationships. However, in the classical inversion problem for retrieval of temperature using multiple radiative frequencies along the wing of an absorption band (or line) of a well-mixed radiatively active gas, in either the infrared or microwave spectrum, the inversion equation to be solved is a Fredholm integral equation of the first kind - a type of transform problem with an infinite number of solutions. This meant that special treatment of the transform process was required in order to obtain a single solution. Inversion became the method of choice for retrieval in the 1950s because it appealed to mathematical elegance, and because the numerical approaches used to solve the problems (typically some type of relaxation or perturbation scheme) were computationally fast in an age when computer speeds were slow. Like many solution schemes, inversion has lingered on regardless of the fact that computer speeds have increased by many orders of magnitude and forward modeling itself has become far more elegant in combination with Bayesian averaging procedures, given that the a priori probabilities of occurrence in the true environment of the parameter(s) in question can be approximated (or are actually known). In this presentation, the theory of the more modern retrieval approach using a combination of cloud, radiation and other specialized forward models in conjunction with Bayesian weighted averaging will be reviewed in light of a brief history of inversion. The application of the theory will be cast in the framework of what we call the Cloud-Dynamics-Radiation-Database (CDRD) methodology - which we now use for the retrieval of precipitation from spaceborne passive microwave radiometers. In a companion presentation, we will specifically describe the CDRD methodology and present results for its application within the Mediterranean basin.
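The Bayesian weighted-averaging retrieval itself reduces to a few lines once a simulated database exists; below is a synthetic sketch in which a posterior-weighted average of database rain rates is formed from Gaussian misfit weights. The toy forward operator, priors and noise levels are assumptions, not the CDRD system.

```python
# Bayesian database retrieval: posterior-weighted average over simulated pairs.
import numpy as np

rng = np.random.default_rng(7)
n_db, n_chan = 20000, 5
rain = rng.gamma(shape=2.0, scale=2.0, size=n_db)        # a priori rain rates (mm/h)
H = rng.uniform(0.5, 1.5, (n_chan, 1))                   # toy forward operator
Tb_db = (H @ rain[None, :]).T + rng.normal(0, 1.0, (n_db, n_chan))  # simulated Tb

obs = H[:, 0] * 6.0 + rng.normal(0, 1.0, n_chan)         # observation (true rain 6)
S_inv = np.eye(n_chan) / 1.0**2                          # inverse obs-error covariance

r2 = np.einsum("ij,jk,ik->i", Tb_db - obs, S_inv, Tb_db - obs)
w = np.exp(-0.5 * (r2 - r2.min()))                       # Bayesian weights
rain_hat = np.sum(w * rain) / np.sum(w)                  # posterior-mean estimate
print(f"retrieved rain rate: {rain_hat:.2f} mm/h (truth 6.00)")
```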
Improved Dot Diffusion For Image Halftoning
1999-01-01
The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. The method was recently improved...by optimization of the so-called class matrix, so that the resulting halftones are comparable to error-diffused halftones. In this paper we will...first review the dot diffusion method. Previously, 8 × 8 class matrices were used for the dot diffusion method. A problem with this size of class matrix is
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
The progressive realization of the consequences of nonuniqueness implies an evolution of both the methods and the centers of interest in inverse problems. This evolution is schematically described, together with the various mathematical methods used. A comparative description is given of inverse methods in scientific research, with examples taken from mathematics, quantum and classical physics, seismology, transport theory, radiative transfer, electromagnetic scattering, electrocardiology, etc. It is hoped that this paper will pave the way for an interdisciplinary study of inverse problems.
Some variance reduction methods for numerical stochastic homogenization
Blanc, X.; Le Bris, C.; Legoll, F.
2016-01-01
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
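One such technique, antithetic variates, fits in a few lines: each random configuration is paired with its mirrored counterpart before averaging. The scalar integrand below merely stands in for a corrector-problem output; the variance gain relies on the integrand's monotonicity in the random field.

```python
# Antithetic-variates variance reduction for a Monte Carlo mean estimate.
import numpy as np

rng = np.random.default_rng(8)

def effective_coeff(u):               # stand-in for solving a corrector problem
    return np.exp(u).mean(axis=-1)    # monotone in the random field u

n, dim = 2000, 10
u = rng.uniform(-1, 1, (n, dim))      # n random configurations of the medium

plain = effective_coeff(u)                               # standard Monte Carlo
anti = 0.5 * (effective_coeff(u) + effective_coeff(-u))  # antithetic pairing

print("plain MC variance     :", plain.var())
print("antithetic MC variance:", anti.var())             # markedly smaller
```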
Kinoform design with an optimal-rotation-angle method.
Bengtsson, J
1994-10-10
Kinoforms (i.e., computer-generated phase holograms) are designed with a new algorithm, the optimal-rotation-angle method, in the paraxial domain. This is a direct Fourier method (i.e., no inverse transform is performed) in which the height of the kinoform relief at each discrete point is chosen so that the diffraction efficiency is increased. The optimal-rotation-angle algorithm has a straightforward geometrical interpretation. It yields excellent results close to, or better than, those obtained with other state-of-the-art methods. The optimal-rotation-angle algorithm can easily be modified to take different constraints into account; as an example, phase-swing-restricted kinoforms, which distribute the light into a number of equally bright spots (so-called fan-outs), were designed. The phase-swing restriction lowers the efficiency, but the uniformity can still be made almost perfect.
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, intended for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
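The control-variate mechanism behind the second IRUQ algorithm can be sketched in a few lines. Here F is an arbitrary stand-in for an expensive model and g a reduced model with a known mean; both are hypothetical choices for illustration only.

```python
# Hedged sketch of the control-variate idea: a cheap reduced model g
# approximates the model F, E[g] is known (here analytically), and g
# is used to reduce the variance of the MC estimate of E[F].
import numpy as np

rng = np.random.default_rng(2)
F = lambda x: np.sin(x) + 0.05 * x**3   # stand-in "expensive" model
g = lambda x: x                         # reduced model; E[g] = 0 for x~N(0,1)

x = rng.standard_normal(20_000)
fx, gx = F(x), g(x)

# Near-optimal control-variate weight estimated from the same samples.
beta = np.cov(fx, gx)[0, 1] / gx.var()
cv = fx - beta * (gx - 0.0)             # 0.0 = known E[g]

print("plain MC var of mean:       %.3e" % (fx.var() / x.size))
print("control-variate var of mean: %.3e" % (cv.var() / x.size))
```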
NASA Astrophysics Data System (ADS)
Audebert, M.; Clément, R.; Touze-Foltz, N.; Günther, T.; Moreau, S.; Duquennoi, C.
2014-12-01
Leachate recirculation is a key process in municipal waste landfills operated as bioreactors. To quantify the water content and to assess the leachate injection system, in-situ methods such as electrical resistivity tomography (ERT) are required to obtain spatially distributed information. This geophysical method relies on an inversion process, which presents two major problems for delimiting the infiltration area. First, it is difficult for ERT users to choose an appropriate inversion parameter set: it may not be sufficient to interpret only the optimum model (i.e. the model with the chosen regularisation strength), because it is not necessarily the model that best represents the physical process studied. Second, it is difficult to delineate the infiltration front from resistivity models because of the smoothness of the inversion results. This paper proposes a new methodology called MICS (multiple inversions and clustering strategy), which allows ERT users to improve the delimitation of the infiltration area when monitoring leachate injection. The MICS methodology is based on (i) a multiple-inversion step, in which the inversion parameter values are varied to take a wide range of resistivity models into account, and (ii) a clustering strategy to improve the delineation of the infiltration front. MICS was assessed on two types of data: a numerical assessment allowed us to optimise and test MICS for different infiltration area sizes, contrasts and shapes, and MICS was then applied to a field data set gathered during leachate recirculation in a bioreactor.
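The two MICS steps, multiple inversions followed by clustering, can be mimicked on synthetic data. In the sketch below, smooth_inversion is a hypothetical stand-in for one regularised ERT inversion, and a two-cluster k-means plays the role of the clustering strategy; nothing here reproduces the paper's actual codes.

```python
# Toy illustration of the MICS idea: stack many inversion results per cell,
# then cluster cells into "infiltrated" vs "background".
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)
n_cells, n_inv = 400, 20

# Synthetic truth: low-resistivity infiltration zone in a resistive host.
true = np.where(np.arange(n_cells) < 120, 10.0, 100.0)

def smooth_inversion(lam):
    """Stand-in for one ERT inversion with regularisation strength lam."""
    blur = np.convolve(true, np.ones(lam) / lam, mode="same")
    return blur + 5.0 * rng.standard_normal(n_cells)

models = np.column_stack([smooth_inversion(lam) for lam in range(3, 3 + n_inv)])

# Each cell is described by its resistivities across all inversions;
# two-cluster k-means then sharpens the smeared infiltration boundary.
features = np.log10(np.abs(models) + 1e-6)
labels = kmeans2(features, 2, minit="points")[1]
print("cells flagged in cluster 0:", int(np.sum(labels == 0)))
```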
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
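In its simplest linear-Gaussian form, the analytical Bayesian solution exploited above is a closed-form posterior. The sketch below uses synthetic Green's functions and covariances (not the paper's parameterisation) to show how the posterior mean and covariance of slip follow from one linear solve.

```python
# Linear-Gaussian Bayesian inversion sketch: d = G m + noise, Gaussian prior
# on slip m. G, data, and covariances are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_data, n_patch = 60, 20

G = rng.standard_normal((n_data, n_patch))               # static Green's functions
m_true = np.maximum(rng.standard_normal(n_patch), 0.0)   # slip (m)
d = G @ m_true + 0.05 * rng.standard_normal(n_data)      # quasi-static GPS offsets

Cd_inv = np.eye(n_data) / 0.05**2    # data precision
Cm_inv = np.eye(n_patch) / 1.0**2    # prior precision

# Analytical Gaussian posterior: mean and covariance in closed form.
A = G.T @ Cd_inv @ G + Cm_inv
m_post = np.linalg.solve(A, G.T @ Cd_inv @ d)
C_post = np.linalg.inv(A)

print("posterior mean slip (first 5):", np.round(m_post[:5], 3))
print("posterior std (first 5):     ", np.round(np.sqrt(np.diag(C_post))[:5], 3))
```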
NASA Astrophysics Data System (ADS)
Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang
2009-02-01
We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
Generating Fractal Patterns by Using p-Circle Inversion
NASA Astrophysics Data System (ADS)
Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič
2015-10-01
In this paper, we introduce the p-circle inversion, which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as the Sierpinski triangle, the Koch curve, the dragon curve and the Fibonacci fractal, among others, thereby obtaining new fractal patterns. Moreover, we generalize the method called circle inversion fractal by means of the p-circle inversion.
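One plausible reading of the p-circle inversion, mapping a point along the ray from the centre so that the product of p-norm distances to the centre equals r², can be coded directly; this is an illustrative interpretation, not the paper's implementation.

```python
# p-circle inversion sketch: p = 2 recovers classical circle inversion,
# p = 1 a taxicab version.
import numpy as np

def p_circle_inversion(x, c=np.zeros(2), r=1.0, p=2.0):
    v = np.asarray(x, dtype=float) - c
    dist = np.sum(np.abs(v) ** p) ** (1.0 / p)   # p-norm distance to centre
    return c + v * (r ** 2) / dist ** 2          # same ray, reciprocal distance

print(p_circle_inversion([2.0, 0.0]))            # -> [0.5, 0.] for p = 2
print(p_circle_inversion([1.0, 1.0], p=1.0))     # taxicab inversion of (1, 1)
```

A quick check of the second call: the taxicab distance of (1, 1) from the origin is 2, the image is (0.25, 0.25) with taxicab distance 0.5, and 2 × 0.5 = r² = 1 as required.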
Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions
Liu, Weidong; Luo, Xi
2014-01-01
This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
Three Dimensional Inverse Synthetic Aperture Radar Imaging
1995-12-01
unfortunately produces a blurred image. To correct this problem, a deblurring filter must be applied to the data. It is preferred in some applications to... when the pulse is an impulse in time. So in order to get a high degree of downrange resolution directly, it would be necessary to transmit the entire... bandwidth of frequencies simultaneously, such as in an Impulse Radar. This would prove to be extremely difficult if not impossible. Luckily, the same
Inverse heat transfer problem in digital temperature control in plate fin and tube heat exchangers
NASA Astrophysics Data System (ADS)
Taler, Dawid; Sury, Adam
2011-12-01
This paper addresses a steady-state inverse heat transfer problem for plate-fin-and-tube heat exchangers. The objective of the process control is to adjust the number of fan revolutions per minute so that the water temperature at the heat exchanger outlet equals a preset value. Two control techniques were developed: the first is based on the presented mathematical model of the heat exchanger, while the second is a digital proportional-integral-derivative (PID) control. The first procedure is very stable; the digital PID controller becomes unstable if the water volumetric flow rate changes significantly. The developed techniques were implemented in a digital control system for the water exit temperature in a plate-fin-and-tube heat exchanger. The measured exit temperature of the water was very close to the set value when the first method was used. The experiments showed that the PID controller also works well but frequently becomes unstable.
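The second technique, a digital PID loop, can be sketched against a toy first-order plant; the gains, time constant and fan-to-temperature response below are illustrative assumptions, not the paper's heat exchanger model.

```python
# Minimal digital PID sketch driving a toy first-order plant toward a
# water-temperature setpoint. All constants are illustrative.
import numpy as np

def simulate_pid(kp=2.0, ki=0.5, kd=0.1, setpoint=40.0, dt=1.0, steps=400):
    temp, integral, prev_err = 20.0, 0.0, setpoint - 20.0
    history = []
    for _ in range(steps):
        err = setpoint - temp
        integral = np.clip(integral + err * dt, -200.0, 200.0)  # anti-windup
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # fan-speed command
        u = np.clip(u, 0.0, 100.0)                  # actuator saturation
        # Toy plant: first-order lag (tau = 20 s) toward a u-dependent
        # equilibrium temperature of 20 + 0.3 * u degrees C.
        temp += dt / 20.0 * (20.0 + 0.3 * u - temp)
        prev_err = err
        history.append(temp)
    return np.array(history)

print("final temperature: %.2f C" % simulate_pid()[-1])
```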
A stochastic vortex structure method for interacting particles in turbulent shear flows
NASA Astrophysics Data System (ADS)
Dizaji, Farzad F.; Marshall, Jeffrey S.; Grant, John R.
2018-01-01
In a recent study, we have proposed a new synthetic turbulence method based on stochastic vortex structures (SVSs), and we have demonstrated that this method can accurately predict particle transport, collision, and agglomeration in homogeneous, isotropic turbulence in comparison to direct numerical simulation results. The current paper extends the SVS method to non-homogeneous, anisotropic turbulence. The key element of this extension is a new inversion procedure, by which the vortex initial orientation can be set so as to generate a prescribed Reynolds stress field. After validating this inversion procedure for simple problems, we apply the SVS method to the problem of interacting particle transport by a turbulent planar jet. Measures of the turbulent flow and of particle dispersion, clustering, and collision obtained by the new SVS simulations are shown to compare well with direct numerical simulation results. The influence of different numerical parameters, such as number of vortices and vortex lifetime, on the accuracy of the SVS predictions is also examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, James R.; Love, Edward; Robinson, Allen C.
We review the edge element formulation for describing the kinematics of hyperelastic solids. This approach is used to frame the problem of remapping the inverse deformation gradient for Arbitrary Lagrangian-Eulerian (ALE) simulations of solid dynamics. For hyperelastic materials, the stress state is completely determined by the deformation gradient, so remapping this quantity effectively updates the stress state of the material. A method, inspired by the constrained transport remap in electromagnetics, is reviewed, according to which the zero-curl constraint on the inverse deformation gradient is implicitly satisfied. Open issues related to the accuracy of this approach are identified. An optimization-based approach is implemented to enforce positivity of the determinant of the deformation gradient. The efficacy of this approach is illustrated with numerical examples.
[Acceptance and Commitment Therapy: Theoretical background and practice].
Eisenbeck, Nikolett; Schlosser, Károly Kornél; Szondy, Máté; Szabó-Bartha, Anett
Acceptance and Commitment Therapy (ACT) is one of the modern, so-called third-wave behavioural therapies, and among them the most successful, both in the number of practising therapists and in the volume of supporting research. ACT's theoretical and philosophical background is described explicitly, and its therapeutic interventions were developed according to this philosophy. Its psychopathological model is based on the idea that it is mainly a person's efforts to regulate their own thoughts and feelings that lead to psychological problems. That is, the source of human suffering and various psychological problems is so-called psychological inflexibility: attempts to control private events instead of living a life based on personal values and long-term goals. Therefore, clinical work in ACT focuses on the acceptance and defusion of unwanted inner experiences and on the development of a meaningful life. The present article aims to provide a comprehensive description of ACT in Hungarian: its theoretical background, clinical techniques, and efficacy. At the end of the article, the state of ACT in Hungary is also briefly discussed.
Probabilistic numerical methods for PDE-constrained Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark
2017-06-01
This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters
2017-03-07
Final Technical Report (with SF 298) for Dr. Erin E. Hackett's ONR grant entitled "Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters", covering the period Dec 2012 - Dec 2016. The report describes research results related to the development and implementation of an inverse problem approach for deducing marine atmospheric boundary layer parameters.
NASA Astrophysics Data System (ADS)
Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group
2003-04-01
This contribution reports preliminary results of the first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured in the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the particular processing steps and the program parameters are described. Characteristic data of the observation profiles (shot points, geophone points, total profile lengths, sampling, sensors and record lengths) are given. The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. This process consists of several steps. The first step is the creation of a starting velocity field, for which the calculated arrival times are modelled by the finite-difference method. The next step is the minimization of the differences between the measured and modelled arrival times until the deviation is small. The equivalence problem was reduced by including a priori information in the starting velocity field, consisting of the depth to the pre-Tertiary basement, estimates of the overlying sedimentary velocities from well logging and/or other seismic velocity data, etc. After checking the reciprocal times, the picks were corrected; the final result of this processing is a reliable set of travel-time curves consistent with the reciprocal times. Picking of the travel-time curves and enhancement of the signal-to-noise ratio of the seismograms were carried out with the PROMAX program system. The tomographic inversion was carried out by a so-called 3D/2D procedure that takes 3D wave propagation into account: a corridor along the profile, containing the outlying shot points and geophone points, was defined, and 3D processing was carried out within this corridor. The preliminary results indicate seismically anomalous zones within the crust and the uppermost part of the upper mantle in the area comprising the Western Carpathians, the North European platform, the Pannonian basin and the Bohemian Massif.
Cycling chair: a novel vehicle for the lower limbs disabled
NASA Astrophysics Data System (ADS)
Takahashi, Takayuki; Nishiyyama, Yuuki; Ozawa, Yukiko; Nakano, Eiji; Handa, Yasunobu
2005-12-01
The goal of our research is to develop a practical vehicle for the lower-limb disabled to improve their mobility and health. The most significant mechanical characteristic of the proposed vehicle is that it is driven by the lower limbs of the disabled themselves; we call it the Cycling Chair. Disuse of the lower limbs leads to many secondary health issues, the most serious being deterioration of whole-body circulation, which causes the so-called disuse syndrome. The proposed Cycling Chair addresses these problems through its leg-driven mechanism. In this paper, the mechanism of the Cycling Chair and the way paraplegics drive the chair are discussed. Some experimental results are also presented.
Dieckhöfer, K; Vogel, T
1974-01-01
Following a synopsis of the main bibliography and a number of our own cases, it can be pointed out that so-called poriomania is accompanied by punishable acts in about one-third of all cases. An analysis of the different delicts yielded a clear preponderance of larceny of money, fraud and embezzlement, compared with desertion and absence without official leave. Only 20% of all cases with punishable acts were denounced. In former times, up to the First World War, about 74% of all criminal cases in connection with poriomania were exonerated on the erroneous assumption that the behaviour was caused by epilepsy. About one-third of all persons with poriomania are feeble-minded. There is a high inclination to fraud, pseudologia, abuse of alcohol and prostitution; an increased inclination to suicide (15%) is also remarkable. A familial accumulation of poriomania does not justify the supposition of an endogenous factor. Therefore, criminal acts in connection with poriomania cannot be exonerated. The personality of persons with poriomania is characterised by unsteadiness, instability and velleity.
NASA Astrophysics Data System (ADS)
Sumlin, Benjamin J.; Heinson, William R.; Chakrabarty, Rajan K.
2018-01-01
The complex refractive index m = n + ik of a particle is an intrinsic property which cannot be directly measured; it must be inferred from its extrinsic properties such as the scattering and absorption cross-sections. Bohren and Huffman called this approach "describing the dragon from its tracks", since the inversion of Lorenz-Mie theory equations is intractable without the use of computers. This article describes PyMieScatt, an open-source module for Python that contains functionality for solving the inverse problem for complex m using extensive optical and physical properties as input, and calculating regions where valid solutions may exist within the error bounds of laboratory measurements. Additionally, the module has comprehensive capabilities for studying homogeneous and coated single spheres, as well as ensembles of homogeneous spheres with user-defined size distributions, making it a complete tool for studying the optical behavior of spherical particles.
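A minimal forward Lorenz-Mie call with PyMieScatt is shown below; the refractive index, wavelength and size are arbitrary example values, and the module's inversion entry points, which search complex-m space against measured cross-sections, should be checked against its documentation.

```python
# Forward Lorenz-Mie sketch using PyMieScatt (pip install PyMieScatt).
import PyMieScatt as ps

m = 1.55 + 0.01j      # complex refractive index n + ik (example value)
wavelength = 532.0    # nm
diameter = 300.0      # nm

# MieQ returns the efficiencies for a homogeneous sphere.
qext, qsca, qabs, g, qpr, qback, qratio = ps.MieQ(m, wavelength, diameter)
print(f"Qext={qext:.4f}  Qsca={qsca:.4f}  Qabs={qabs:.4f}  g={g:.4f}")
```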
Detection of explosives, nerve agents, and illicit substances by zero-energy electron attachment
NASA Technical Reports Server (NTRS)
Chutjian, A.; Darrach, M. R.
2000-01-01
The Reversal Electron Attachment Detection (READ) method, developed at JPL/Caltech, has been used to detect a variety of substances which have electron-attachment resonances at low and intermediate electron energies. In the case of zero-energy resonances, the cross section (hence attachment probability and instrument sensitivity) is mediated by the so-called s-wave phenomenon, in which the cross sections vary as the inverse of the electron velocity. Hence this is, in the limit of zero electron energy or velocity, one of the rare cases in atomic and molecular physics where one carries out detection via infinite cross sections.
Methodological Problems Encountered in the Review of Research in Science Teaching
ERIC Educational Resources Information Center
Lawlor, E. P.; Lawlor, F. X.
1972-01-01
Describes the difficulties encountered in selecting material to be included in the reviews of science education research in the "Curtis Series" published by the Columbia Teachers' College Press. Presents evidence outlining the weaknesses of using a "jury" to determine so-called superior research. (AL)
Lexical Connection: Semiterm Grammatical Patterns in Spanish
ERIC Educational Resources Information Center
Ferrero, Carmen Lopez
2012-01-01
The aim of this article is to describe the grammatical patterns of a set of nouns frequently used in Spanish specialized discourse: the so-called "semiterms". The following nouns were selected for the study: "problema" ("problem"), "resultado" ("result"), "motivo" ("motive/reason"), "razon" ("reason"), and "consecuencia" ("consequence"). Apart from…
Introduction: Conceptions of Grammaticalization and Their Problems.
ERIC Educational Resources Information Center
Campbell, Lyle; Janda, Richard
2001-01-01
Introduces the articles in this issue of "Language Sciences," which are dedicated to taking stock of both grammaticalization and so-called "grammaticalization theory." This introduction sets the stage for other papers by surveying the large range of definitions of grammaticalization in the literature and placing them in…
NASA Astrophysics Data System (ADS)
Aucejo, M.; Totaro, N.; Guyader, J.-L.
2010-08-01
In noise control, identification of the source velocity field remains a major open problem. Consequently, methods such as nearfield acoustical holography (NAH), principal source projection, the inverse frequency response function and hybrid NAH have been developed. However, these methods require free-field conditions that are often difficult to achieve in practice. This article presents an alternative method, known as inverse patch transfer functions (iPTF), designed to identify source velocities and developed in the framework of the European SILENCE project. The method is based on the definition of a virtual cavity, the double measurement of the pressure and particle velocity fields on the aperture surfaces of this volume, which are divided into elementary areas called patches, and the inversion of impedance matrices numerically computed from a modal basis obtained by FEM. Theoretically, the method is applicable to sources with complex 3D geometries, and measurements can be carried out in a non-anechoic environment, even in the presence of other stationary sources outside the virtual cavity. In the present paper, the theoretical background of the iPTF method is described and the results (numerical and experimental) for a source with simple geometry (two baffled pistons driven in antiphase) are presented and discussed.
Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.
2017-01-01
We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional controlled-source electromagnetic (CSEM) inverse problem. In this code, special emphasis has been put on representing the operations by block matrices for the conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the wide accessibility of shared-memory multi-core machines. We demonstrate how the coarseness of the modeling grid relative to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
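The core linear-algebra step, a Gauss-Newton update solved with conjugate gradients using only Jacobian-vector products, can be sketched as below; in the code described above those products would come from forward and adjoint calls rather than an explicit matrix, which is a placeholder here.

```python
# Gauss-Newton step via matrix-free conjugate gradients on the normal
# equations (J^T J + lam I) dm = J^T r. J and r are synthetic stand-ins.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(7)
n_d, n_m = 200, 80
J = rng.standard_normal((n_d, n_m))   # Jacobian at the current model
r = rng.standard_normal(n_d)          # data residual
lam = 1.0                             # regularisation weight

# Only matrix-vector products are exposed, as an adjoint code would provide.
normal_op = LinearOperator((n_m, n_m), matvec=lambda v: J.T @ (J @ v) + lam * v)
dm, info = cg(normal_op, J.T @ r)

print("CG converged:", info == 0, " |dm| = %.3f" % np.linalg.norm(dm))
```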
NASA Astrophysics Data System (ADS)
Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.
2018-01-01
To reduce the ambiguity related to nonlinearities in the resistivity model-data relationship, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models that are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information: the posterior model covariance matrix, marginal probability density functions and an ensemble of acceptable models. This gives new insight into how well the model parameters are constrained and makes it possible to assess trade-offs between them, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors who employed different inversion methods, so as to provide a good basis for performance comparison. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and correlate remarkably well with the available borehole litho-log and known geology for the field dataset. The NA method has proven useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of layered resistivity structures.
An Inverse Neural Controller Based on the Applicability Domain of RBF Network Models
Alexandridis, Alex; Stogiannos, Marios; Papaioannou, Nikolaos; Zois, Elias; Sarimveis, Haralambos
2018-01-01
This paper presents a novel methodology of generic nature for controlling nonlinear systems, using inverse radial basis function neural network models, which may combine diverse data originating from various sources. The algorithm starts by applying the particle swarm optimization-based non-symmetric variant of the fuzzy means (PSO-NSFM) algorithm so that an approximation of the inverse system dynamics is obtained. PSO-NSFM offers models of high accuracy combined with small network structures. Next, the applicability domain concept is suitably tailored and embedded into the proposed control structure in order to ensure that extrapolation is avoided in the controller predictions. Finally, an error correction term, estimating the error produced by the unmodeled dynamics and/or unmeasured external disturbances, is included in the control scheme to increase robustness. The resulting controller guarantees bounded input-bounded state (BIBS) stability for the closed loop system when the open loop system is BIBS stable. The proposed methodology is evaluated on two different control problems, namely, the control of an experimental armature-controlled direct current (DC) motor and the stabilization of a highly nonlinear simulated inverted pendulum. For each one of these problems, appropriate case studies are tested, in which a conventional neural controller employing inverse models and a PID controller are also applied. The results reveal the ability of the proposed control scheme to handle and manipulate diverse data through a data fusion approach and illustrate the superiority of the method in terms of faster and less oscillatory responses. PMID:29361781
A Forward Glimpse into Inverse Problems through a Geology Example
ERIC Educational Resources Information Center
Winkel, Brian J.
2012-01-01
This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)
Functional electronic inversion layers at ferroelectric domain walls
NASA Astrophysics Data System (ADS)
Mundy, J. A.; Schaab, J.; Kumagai, Y.; Cano, A.; Stengel, M.; Krug, I. P.; Gottlob, D. M.; Doğanay, H.; Holtz, M. E.; Held, R.; Yan, Z.; Bourret, E.; Schneider, C. M.; Schlom, D. G.; Muller, D. A.; Ramesh, R.; Spaldin, N. A.; Meier, D.
2017-06-01
Ferroelectric domain walls hold great promise as functional two-dimensional materials because of their unusual electronic properties. Particularly intriguing are the so-called charged walls, where a polarity mismatch causes local, diverging electrostatic potentials requiring charge compensation and hence a change in the electronic structure. These walls can exhibit significantly enhanced conductivity and serve as a circuit path. The development of all-domain-wall devices, however, also requires walls with controllable output to emulate electronic nano-components such as diodes and transistors. Here we demonstrate electric-field control of the electronic transport at ferroelectric domain walls. We reversibly switch from resistive to conductive behaviour at charged walls in semiconducting ErMnO3. We relate the transition to the formation, and eventual activation, of an inversion layer that acts as the channel for the charge transport. The findings provide new insight into domain-wall physics in ferroelectrics and foreshadow the possibility of designing elementary digital devices for all-domain-wall circuitry.
Inverse design of centrifugal compressor vaned diffusers in inlet shear flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zangeneh, M.
1996-04-01
A three-dimensional inverse design method, in which the blade (or vane) geometry is designed for specified distributions of circulation and blade thickness, is applied to the design of centrifugal compressor vaned diffusers. Two generic diffusers are designed, one with uniform inlet flow (equivalent to a conventional design) and the other with a sheared inlet flow. The inlet shear flow effects are modeled in the design method by using the so-called "Secondary Flow Approximation", in which the Bernoulli surfaces are convected by the tangentially mean inviscid flow field. The difference between the vane geometries of the uniform-inlet-flow and nonuniform-inlet-flow diffusers is found to be most significant from 50 percent chord to the trailing edge region. The flows through both diffusers are computed by using Denton's three-dimensional inviscid Euler solver and Dawes' three-dimensional Navier-Stokes solver under sheared inflow conditions. The predictions indicate improved pressure recovery and internal flow field for the diffuser designed for sheared inlet flow conditions.
Inverse Leidenfrost effect: self-propelling drops on a bath
NASA Astrophysics Data System (ADS)
Gauthier, Anais; van der Meer, Devaraj; Lohse, Detlef; Physics of Fluids Team
2017-11-01
When deposited on a very hot solid, volatile drops can levitate over a cushion of vapor, in the so-called Leidenfrost state. This phenomenon can also be observed on a hot bath, and, similarly to the solid case, the drops are very mobile due to the absence of contact with the substrate that sustains them. We discuss here a situation of "inverse Leidenfrost effect", where room-temperature drops levitate on a liquid nitrogen pool; the vapor is generated here by the bath sustaining the relatively hot drop. We show that the drops' movement is not random: the liquid crosses the bath in straight lines, a pattern disrupted only by elastic bouncing off the edges. In addition, the drops are initially self-propelled: first at rest, they accelerate for a few seconds and reach velocities of the order of a few cm/s, before slowing down. We investigate experimentally the parameters that affect their successive acceleration and deceleration, such as the size and nature of the drops, and we discuss the origin of this pattern.
Type II shell evolution in A = 70 isobars from the N ≥ 40 island of inversion
NASA Astrophysics Data System (ADS)
Morales, A. I.; Benzoni, G.; Watanabe, H.; Tsunoda, Y.; Otsuka, T.; Nishimura, S.; Browne, F.; Daido, R.; Doornenbal, P.; Fang, Y.; Lorusso, G.; Patel, Z.; Rice, S.; Sinclair, L.; Söderström, P.-A.; Sumikama, T.; Wu, J.; Xu, Z. Y.; Yagi, A.; Yokoyama, R.; Baba, H.; Avigo, R.; Bello Garrote, F. L.; Blasi, N.; Bracco, A.; Camera, F.; Ceruti, S.; Crespi, F. C. L.; de Angelis, G.; Delattre, M.-C.; Dombradi, Zs.; Gottardo, A.; Isobe, T.; Kojouharov, I.; Kurz, N.; Kuti, I.; Matsui, K.; Melon, B.; Mengoni, D.; Miyazaki, T.; Modamio-Hoybjor, V.; Momiyama, S.; Napoli, D. R.; Niikura, M.; Orlandi, R.; Sakurai, H.; Sahin, E.; Sohler, D.; Schaffner, H.; Taniuchi, R.; Taprogge, J.; Vajta, Zs.; Valiente-Dobón, J. J.; Wieland, O.; Yalcinkaya, M.
2017-02-01
The level structures of 70Co and 70Ni, populated from the β decay of 70Fe, have been investigated using β-delayed γ-ray spectroscopy following in-flight fission of a 238U beam. The experimental results are compared to Monte-Carlo Shell-Model calculations including the pf +g9/2 +d5/2 orbitals. The strong population of a (1+) state at 274 keV in 70Co is at variance with the expected excitation energy of ∼1 MeV from near spherical single-particle estimates. This observation indicates a dominance of prolate-deformed intruder configurations in the low-lying levels, which coexist with the normal near spherical states. It is shown that the β decay of the neutron-rich A = 70 isobars from the new island of inversion to the Z = 28 closed-shell regime progresses in accordance with a newly reported type of shell evolution, the so-called Type II, which involves many particle-hole excitations across energy gaps.
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Geppert, Bogna; Tezyk, Artur; Florek, Ewa; Zaba, Czesław
2010-01-01
Cannabis sativa var. indica (marihuana) is nowadays one of the most common psychoactive plant drugs on the illegal market in Poland. The frequency with which so-called marihuana substitutes are secured as evidential material has reportedly grown rapidly over the last few years. The substitutes occurring on the market are of natural or synthetic origin, for example plant materials of different species with Cannabis-like action, or plant materials with no psychoactive properties to which so-called synthetic cannabinoids have been added. The review presents recent developments in the drug market and current problems of forensic toxicology, using marihuana as an example.
NASA Astrophysics Data System (ADS)
Moresi, L.; May, D.; Peachey, T.; Enticott, C.; Abramson, D.; Robinson, T.
2004-12-01
Can you teach intuition? Obviously we think this is possible (though it's still just a hunch). People undoubtedly develop intuition for non-linear systems through painstaking repetition of complex tasks, until they have sufficient feedback to begin to "see" the emergent behaviour. The better the exploration of the system can be exposed, the quicker an intuitive understanding can develop. We have spent some time considering how to incorporate the intuitive knowledge of field geologists into the mechanical modeling of geological processes. Our solution has been to allow expert geologists to steer (via a GUI) a genetic-algorithm inversion of a mechanical forward model towards "structures" or patterns which are plausible in nature. The expert knowledge is then captured by analysis of the individual model parameters which are constrained by the steering (and by analysis of those which are unconstrained). The same system can also be used in reverse, to expose the influence of individual parameters to the non-expert who is trying to learn what makes a good match between model and observation. The "distance" between models preferred by experts and those preferred by an individual can be shown graphically to provide feedback. The examples we choose are from numerical models of extensional basins. We will first give each person some background information on the scientific problem from the poster and then let them loose on the numerical modeling tools with specific tasks to achieve. This will be an experiment in progress; we will later analyse how people use the GUI and whether there is really any significant difference between so-called experts and self-styled novices.
Control of pyrite addition in coal liquefaction process
Schmid, Bruce K.; Junkin, James E.
1982-12-21
Pyrite addition to a coal liquefaction process (22, 26) is controlled (118) in inverse proportion to the calcium content of the feed coal to maximize the C5-900 °F (482 °C) liquid yield per unit weight of pyrite added (110). The pyrite addition is controlled in this manner so as to minimize the amount of pyrite used and thus reduce the pyrite contribution to the slurry pumping load and the disposal problems connected with pyrite-produced slag.
NASA Astrophysics Data System (ADS)
Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within a sensing region. So far, ECT has primarily been used to image non-conductive media: if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conductive path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples with conventional ECT systems by investigating the two main aspects of image reconstruction algorithms, namely the forward problem and the inverse problem. For the forward problem, two different methods of modelling the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms for reconstructing high-contrast images are examined. The first two, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using the level set algorithm to find the boundary of the metal.
On Responsibility of Scientists
NASA Astrophysics Data System (ADS)
Burdyuzha, Vladimir
The situation of the modern world is analysed. It is untenable for our civilization that at least half of the world's scientists are engaged in research intended to solve military problems. A civilization cannot be called reasonable so long as it spends a huge portion of national incomes on armaments. For the resolution of our global problems, an International Scientific Center (a brain trust of the planet) must be created, whose status should be defined and sealed by the UN.
Modeling and Simulation of Avionics Systems and Command, Control and Communications Systems
1980-01-01
analytical and operational talent into a cohesive study group. This group becomes our critical mass for innovative analysis. For command and control problems... that focusing small integrated groups on specific aspects of a command and control problem succeeds best. For example, Air Force Studies and Analyses... phase so-called "study groups" should define "tactical requirement papers". These study groups will be supported by operational analyses and by
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming-outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, in which the mismatch between computed and measured outgoing data is minimized with the same initial data and the RTE as constraint. The memory and computational costs this requires, however, are typically prohibitive, especially in high-dimensional spaces. Smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch; it requires minimal memory and computation and advances quickly, and therefore serves the purpose well. In this paper we formulate the problem, in both the nonlinear and the linearized settings, apply the SGD algorithm and analyze its convergence performance.
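A minimal SGD sketch for a linearised inverse problem captures the online flavour: each iteration touches one randomly selected measurement, just as the method above selects incoming-outgoing data pairs. The matrix A below is a synthetic placeholder for the linearised forward map, not an RTE discretisation.

```python
# SGD (Kaczmarz-like) iteration for a linear inverse problem A x = y,
# using one random measurement per step and a decaying step size.
import numpy as np

rng = np.random.default_rng(5)
n_meas, n_param = 500, 50

A = rng.standard_normal((n_meas, n_param)) / np.sqrt(n_param)
x_true = rng.random(n_param)
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

x = np.zeros(n_param)
for k in range(20_000):
    i = rng.integers(n_meas)                      # one random measurement
    resid = A[i] @ x - y[i]
    x -= (1.0 / (1.0 + k / 2000.0)) * resid * A[i]  # decaying step size

print("relative error: %.3f"
      % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```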
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
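The simplest instance of the estimation problems listed above, recovering a temperature from an ensemble average, can be written in a few lines. The two-level system and the observed value below are hypothetical stand-ins for the article's sequential Monte Carlo machinery.

```python
# Toy inverse-statistical-mechanics sketch: match the observed mean energy
# of a two-level system to recover the inverse temperature beta.
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0])          # two-level energies (arbitrary units)

def mean_energy(beta):
    """Canonical ensemble average <E>(beta)."""
    w = np.exp(-beta * E)
    return float((E * w).sum() / w.sum())

observed = 0.25                   # hypothetical ensemble measurement
beta_hat = brentq(lambda b: mean_energy(b) - observed, 1e-6, 50.0)
print("estimated inverse temperature: %.4f" % beta_hat)   # ln(3) ~ 1.0986
```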
The University and Education about Law
ERIC Educational Resources Information Center
Odegaard, Charles E.
1975-01-01
Arguing that law is too large and too important a subject to be left to the law school, the author calls for changes, including an end to isolationist tendencies of the law school, so that the university can address itself to the problem of justice, its definition and implementation in society. (JT)
Hidden in the Middle: Culture, Value and Reward in Bioinformatics
ERIC Educational Resources Information Center
Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul
2016-01-01
Bioinformatics, the so-called shotgun marriage between biology and computer science, is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…
A Decomposition Approach for Shipboard Manpower Scheduling
2009-01-01
generalizes the bin-packing problem with no conflicts (BPP), which is known to be NP-hard (Garey and Johnson 1979). Hence our focus is to obtain a lower... to the BPP; while the so-called constrained packing lower bound also takes conflict constraints into account. Their computational study indicates
Disability in Relation to Different Peer-Victimization Groups and Psychosomatic Problems
ERIC Educational Resources Information Center
Beckman, Linda; Stenbeck, Magnus; Hagquist, Curt
2016-01-01
The purpose of this study was to examine the associations between disability, victims, perpetrators, and so-called "bully-victims" (those reporting being both a victim and a perpetrator) of traditional, cyber, or combined victimization or perpetration, and psychosomatic health among adolescents. The authors analyzed cross-sectional data…
Writing the New West: A Critical Review
ERIC Educational Resources Information Center
Robbins, Paul; Meehan, Katharine; Gosnell, Hannah; Gilbertz, Susan J.
2009-01-01
A vast and growing interdisciplinary research effort has focused on the rise of the so-called New West, purportedly the product of regional socioeconomic, political, and ecological upheavals in states like Montana and Colorado. Reviewing the growing research on this problem in sociology, economics, geography, and conservation science, this article…
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
An inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt; Caudill, Lester F., Jr.
1994-01-01
This paper examines uniqueness and stability results for an inverse problem in thermal imaging. The goal is to identify an unknown boundary of an object by applying a heat flux and measuring the induced temperature on the boundary of the sample. The problem is studied both in the case in which one has data at every point on the boundary of the region and the case in which only finitely many measurements are available. An inversion procedure is developed and used to study the stability of the inverse problem for various experimental configurations.
Inverse problems in quantum chemistry
NASA Astrophysics Data System (ADS)
Karwowski, Jacek
Inverse problems constitute a branch of applied mathematics with well-developed methodology and formalism. A broad family of tasks met in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches met in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.
Relationship between Norm-internalization and Cooperation in N-person Prisoners' Dilemma Games
NASA Astrophysics Data System (ADS)
Matsumoto, Mitsutaka
In this paper, I discuss problems of "order in social situations" using a computer simulation of the iterated N-person prisoners' dilemma game. It has been claimed that, in the case of the 2-person prisoners' dilemma, repetition of games and the reciprocal use of the "tit-for-tat" strategy promote the possibility of cooperation. However, in N-person prisoners' dilemmas with N greater than 2, this logic does not work effectively; the most essential difficulty is the so-called "sanctioning problem". In this paper, I first discuss the sanctioning problems introduced by Axelrod and Keohane in 1986. Based on the model formalized by Axelrod, I propose a new model that adds a mechanism of payoff change to Axelrod's model; I call this mechanism norm-internalization and the model the "norm-internalization game". Second, using the model, I investigate the relationship between agents' norm-internalization (payoff alteration) and the possibility of cooperation. The results of the computer simulations indicate that an unequal distribution of the cooperating norm and a uniform distribution of the sanctioning norm are more effective in establishing cooperation. I discuss the mathematical features and the implications of these results for social science.
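The dilemma structure underlying the sanctioning problem can be verified directly: in a simple N-person public-goods payoff, defection dominates cooperation for every number of cooperating others. The payoff constants below are arbitrary illustrations, not the paper's parameters.

```python
# Toy N-person prisoners' dilemma check: defection strictly dominates.
def payoff(cooperate, n_other_coop, N=10, b=5.0, c=3.0):
    """Shared benefit b * (fraction of cooperators), minus cost c if you
    cooperate yourself."""
    n_coop = n_other_coop + (1 if cooperate else 0)
    return b * n_coop / N - (c if cooperate else 0.0)

for k in range(10):                       # k = number of cooperating others
    gain = payoff(False, k) - payoff(True, k)
    assert gain > 0                       # defecting always pays more

print("defection strictly dominates cooperation for every cooperator count")
```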
Application of a stochastic inverse to the geophysical inverse problem
NASA Technical Reports Server (NTRS)
Jordan, T. H.; Minster, J. B.
1972-01-01
The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator, as a member of a one-parameter family of smoothing operators, is derived.
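A compact numerical contrast between the Moore-Penrose generalized inverse and a Franklin-type stochastic inverse for an underdetermined system is sketched below; the covariances are illustrative choices, not geophysically derived ones.

```python
# Pseudoinverse vs stochastic (Franklin-type) inverse for an
# underdetermined linear system y = A x + noise.
import numpy as np

rng = np.random.default_rng(6)
n_data, n_model = 10, 40

A = rng.standard_normal((n_data, n_model))
x_true = rng.standard_normal(n_model)
y = A @ x_true + 0.1 * rng.standard_normal(n_data)

Cx = np.eye(n_model)            # prior model covariance (illustrative)
Cn = 0.1**2 * np.eye(n_data)    # noise covariance (illustrative)

x_pinv = np.linalg.pinv(A) @ y                              # Moore-Penrose
x_stoch = Cx @ A.T @ np.linalg.solve(A @ Cx @ A.T + Cn, y)  # stochastic inverse

print("pinv data misfit:       %.3f" % np.linalg.norm(A @ x_pinv - y))
print("stochastic data misfit: %.3f" % np.linalg.norm(A @ x_stoch - y))
```

The stochastic inverse deliberately leaves a noise-level data misfit, trading fit for stability, which is exactly the tradeoff the Backus-Gilbert curve formalizes.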
Analysis of space telescope data collection system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
An analysis of the expected performance of the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed, and a mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.
Clinical knowledge-based inverse treatment planning
NASA Astrophysics Data System (ADS)
Yang, Yong; Xing, Lei
2004-11-01
Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.
An evolutive real-time source inversion based on a linear inverse formulation
NASA Astrophysics Data System (ADS)
Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.
2016-12-01
Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground motion prediction and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, chosen to keep the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate when adding new data by assuming rupture causality. This formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction, which lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later as attributes of the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
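Since the forward problem stays linear in the slip rate, each seismogram is a sum of per-cell convolutions with stored Green's functions. A toy sketch of that structure (illustrative shapes and names only; Python with NumPy assumed):

    import numpy as np

    nt, ncells = 200, 16
    rng = np.random.default_rng(1)
    green = rng.normal(size=(ncells, nt))    # precomputed Green's functions, one per fault cell
    sliprate = np.zeros((ncells, nt))
    sliprate[5, 20:40] = 1.0                 # hypothetical slip-rate pulse on one cell

    # linear forward model: the seismogram is the sum of per-cell convolutions
    seismogram = sum(np.convolve(green[i], sliprate[i])[:nt] for i in range(ncells))

Because this map is linear, new data samples simply extend the system without changing the relationship between parameters and seismograms, which is what makes the progressive build-up possible.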
Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.
Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan
2016-07-01
This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.
Hybrid modeling of spatial continuity for application to numerical inverse problems
Friedel, Michael J.; Iwashita, Fabio
2013-01-01
A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
NASA Astrophysics Data System (ADS)
Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro
2005-01-01
The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai, from 16-21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002 and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both the theoretical interest and practical requirements. In particular, in Asian countries, one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized with the chairperson Professor Li Tat-tsien (Fudan University), in order to provide forums for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, we hosted a total of 92 participants at the second conference and arranged various talks which ranged from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate the uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems. Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in atmospheric sciences and oceanography. Last but not least is our gratitude. As editors we would like to express our sincere thanks to all the plenary and invited speakers, the members of the International Scientific Committee and the Advisory Board for the success of the conference, which has given rise to this present volume of selected papers. We would also like to thank Mr Wang Yanbo, Miss Wan Xiqiong and the graduate students at Fudan University for their effective work to make this conference a success. The conference was financially supported by the NSF of China, the Mathematical Center of the Ministry of Education of China, E-Institutes of Shanghai Municipal Education Commission (No E03004) and Fudan University, Grant 15340027 from the Japan Society for the Promotion of Science, and Grant 15654015 from the Ministry of Education, Culture, Sports, Science and Technology.
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Bastani, Mehrdad; Donohue, Shane; Persson, Lena; Aspmo Pfaffhuber, Andreas; Reiser, Fabienne; Ren, Zhengyong
2013-05-01
In many coastal areas of North America and Scandinavia, post-glacial clay sediments have emerged above sea level due to isostatic uplift. These clays are often destabilised by fresh water leaching and transformed to so-called quick clays, as at the investigated area at Smørgrav, Norway. Slight mechanical disturbances of these materials may trigger landslides. Since the leaching increases the electrical resistivity of quick clay as compared to normal marine clay, the application of electromagnetic (EM) methods is of particular interest in the study of quick clay structures. For the first time, single and joint inversions of direct-current resistivity (DCR), radiomagnetotelluric (RMT) and controlled-source audiomagnetotelluric (CSAMT) data were applied to delineate a zone of quick clay. The resulting 2-D models of electrical resistivity correlate very well with previously published data from a ground conductivity meter and resistivity logs from two resistivity cone penetration tests (RCPT) into marine clay and quick clay. The RCPT log into the central part of the quick clay indicates that the electrical resistivity of the quick clay structure lies between 10 and 80 Ω m. In combination with the 2-D inversion models, it becomes possible to delineate the vertical and horizontal extent of the quick clay zone. As compared to the inversions of single data sets, the joint inversion model exhibits sharper resistivity contrasts and its resistivity values are more characteristic of the expected geology. In our preferred joint inversion model, there is a clear demarcation between dry soil, marine clay, quick clay and bedrock, which consists of alum shale and limestone.
Refining the Magnitude of the Shallow Slip Deficit
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, X.; Sandwell, D. T.; Milliner, C. W. D.
2014-12-01
Geodetic inversions for slip versus depth for several major (Mw > 7) strike-slip earthquakes (e.g. 1992 Landers, 1999 Hector Mine, 2010 El Mayor-Cucapah) show a 10% to 40% reduction in slip near the surface (depth < 2 km) compared to the slip at deeper depths (5 to 8 km). This has been called the shallow slip deficit (SSD). The large magnitude of this deficit has been an enigma, since it cannot be explained by shallow creep during the interseismic period or by triggered slip from nearby earthquakes. One potential explanation for the SSD is that the previous geodetic inversions used incomplete data that do not extend close to the fault, so the shallow portions of the slip models were poorly resolved and generally underestimated. In this study we improve the geodetic inversion, especially at shallow depth, by: 1) refining the InSAR processing with non-boxcar phase filtering, model-dependent range corrections, and more complete phase unwrapping by SNAPHU using a correlation mask and allowing a phase discontinuity along the rupture; 2) including near-fault offset data from optical imagery and SAR azimuth offsets; 3) using more detailed fault geometry; 4) and using additional campaign GPS data. With these improved observations, the slip inversion has significantly increased resolution at shallow depth. For the Landers rupture the SSD is reduced from 45% to 16%. Similarly, for the Hector Mine rupture the SSD is reduced from 15% to 5%. We are assembling all the relevant co-seismic data for the El Mayor-Cucapah earthquake and will report the inversion result with its SSD at the meeting.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosmanis, Ansis
2011-02-15
I introduce a continuous-time quantum walk on graphs called the quantum snake walk, the basis states of which are fixed-length paths (snakes) in the underlying graph. First, I analyze the quantum snake walk on the line, and I show that, even though most states stay localized throughout the evolution, there are specific states that most likely move on the line as wave packets with momentum inversely proportional to the length of the snake. Next, I discuss how an algorithm based on the quantum snake walk might potentially be able to solve an extended version of the glued trees problem, which asks to find a path connecting both roots of the glued trees graph. To the best of my knowledge, no efficient quantum algorithm solving this problem is known yet.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement, often called 'noise'. Because the inverse problem is ill-posed, the identified force is sensitive to this noise. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations in the presence of ill-posedness. The illustrated results show that TGSVD has advantages such as higher precision, better adaptability and noise immunity compared with the TDM. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving identification accuracy and solving the ill-posed problem when the method is used to identify moving forces on a bridge.
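The regularization idea behind TGSVD can be illustrated with its plain-SVD special case (L = I); a minimal sketch, not the paper's implementation (Python with NumPy assumed):

    import numpy as np

    def tsvd_solve(A, b, k):
        # truncated-SVD solution of Ax = b: keep the k largest singular
        # values and discard the noise-amplifying small ones
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

    rng = np.random.default_rng(2)
    A = np.vander(np.linspace(0.0, 1.0, 40), 12)     # ill-conditioned test matrix
    x_true = rng.normal(size=12)
    b = A @ x_true + 1e-3 * rng.normal(size=40)      # noisy right-hand side

    x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # amplifies noise components
    x_tsvd = tsvd_solve(A, b, k=8)                   # truncation parameter k

With a nontrivial regularization matrix L, the same truncation is applied to the generalized SVD of the matrix pair (A, L), which is what the TGSVD method does.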
The inverse gravimetric problem in gravity modelling
NASA Technical Reports Server (NTRS)
Sanso, F.; Tscherning, C. C.
1989-01-01
One of the main purposes of geodesy is to determine the gravity field of the Earth in the space outside its physical surface. This purpose can be pursued without any particular knowledge of the internal density even if the exact shape of the physical surface of the Earth is not known, though this seems to entangle the two domains, as it was in the old Stokes' theory before the appearance of Molodensky's approach. Nevertheless, even when large, dense and homogeneous data sets are available, it was always recognized that subtracting from the gravity field the effect of the outer layer of the masses (topographic effect) yields a much smoother field. This is obviously more important when the data set is sparse, since any smoothing of the gravity field helps in interpolating between the data without increasing the modelling error; this approach is generally followed because it has become very cheap in terms of computing time since the appearance of spectral techniques. The mathematical description of the Inverse Gravimetric Problem (IGP) is dominated mainly by two principles, which in loose terms can be formulated as follows: the knowledge of the external gravity field determines mainly the lateral variations of the density; and the deeper the density anomaly giving rise to a gravity anomaly, the more improperly posed is the problem of recovering the former from the latter. The statistical relation between rho and n (and its inverse) is also investigated in its general form, proving that degree cross-covariances have to be introduced to describe the behavior of rho. The problem of the simultaneous estimate of a spherical anomalous potential and of the external, topographic masses is addressed, criticizing the choice of the mixed collocation approach.
On the recovery of missing low and high frequency information from bandlimited reflectivity data
NASA Astrophysics Data System (ADS)
Sacchi, M. D.; Ulrych, T. J.
2007-12-01
During the last two decades, an important effort in the seismic exploration community has been made to retrieve broad-band seismic data by means of deconvolution and inversion. In general, the problem can be stated as a spectral reconstruction problem. In other words, given limited spectral information about the earth's reflectivity sequence, one attempts to create a broadband estimate of the Fourier spectrum of the unknown reflectivity. Techniques based on the principle of parsimony can be effectively used to retrieve a sparse spike sequence and, consequently, a broad-band signal. Alternatively, continuation methods, e.g., autoregressive modeling, can be used to extrapolate the recorded bandwidth of the seismic signal. The goal of this paper is to examine under what conditions the recovery of low and high frequencies from band-limited and noisy signals is possible. At the heart of the methods we discuss is the celebrated non-Gaussian assumption so important in many modern signal processing methods, such as ICA, for example. Spectral recovery from limited information tends to work when the reflectivity consists of a few well-isolated events. Results degrade with the number of reflectors, decreasing SNR and decreasing bandwidth of the source wavelet. Constraints and information-based priors can be used to stabilize the recovery but, as in all inverse problems, the solution is nonunique and effort is required to understand the level of recovery that is achievable, always keeping the physics of the problem in mind. We provide in this paper a survey of methods to recover broad-band reflectivity sequences and examine the role that these techniques can play in processing and inversion as applied to exploration and global seismology.
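One of the continuation ideas mentioned above, autoregressive extrapolation of the recorded band, can be sketched as linear prediction across spectrum samples (a toy illustration under the stated few-reflector assumption; Python with NumPy assumed):

    import numpy as np

    def ar_extrapolate(x, order, nextra):
        # fit linear-prediction (AR) coefficients to the known band by
        # least squares, then recursively predict samples outside it
        A = np.array([x[i:i + order] for i in range(len(x) - order)])
        a = np.linalg.lstsq(A, x[order:], rcond=None)[0]
        out = list(x)
        for _ in range(nextra):
            out.append(np.dot(a, out[-order:]))
        return np.array(out)

    # spectrum of two isolated reflectors: a sum of complex exponentials,
    # which an AR model of sufficient order predicts exactly
    k = np.arange(32)
    band = np.exp(2j * np.pi * 0.11 * k) + 0.7 * np.exp(2j * np.pi * 0.27 * k)
    extended = ar_extrapolate(band, order=8, nextra=16)

As the abstract notes, this kind of recovery degrades quickly once reflectors overlap or the signal-to-noise ratio drops.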
A Bayesian approach to earthquake source studies
NASA Astrophysics Data System (ADS)
Minson, Sarah
Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters is not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.
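A plain Metropolis update is the building block that CATMIP wraps with tempering, resampling and parallel chains (a generic sketch, not the CATMIP algorithm itself; Python with NumPy assumed):

    import numpy as np

    def metropolis(log_post, x0, steps, scale, rng):
        x, lp = np.asarray(x0, float), log_post(x0)
        chain = []
        for _ in range(steps):
            prop = x + scale * rng.normal(size=x.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept with posterior ratio
                x, lp = prop, lp_prop
            chain.append(x.copy())
        return np.array(chain)

    rng = np.random.default_rng(3)
    log_post = lambda m: -0.5 * np.sum((m - 1.0) ** 2)   # toy Gaussian posterior
    samples = metropolis(log_post, np.zeros(4), 5000, 0.5, rng)

Sampling proportionally to the posterior is what yields the distribution of all plausible models, rather than the single optimum returned by conventional optimization.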
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
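For instance, for a symmetric Toeplitz matrix the relation T(t)v = λv is linear in the first-row parameters t, so one prescribed eigenpair yields a set of simultaneous linear equations solvable by SVD-based least squares. A sketch of this conversion in the spirit of the paper's approach (Python with NumPy/SciPy assumed; not the authors' code):

    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_from_eigenpair(lam, v):
        # assemble the linear system M t = lam * v, where M collects the
        # coefficients of the first-row entries t_{|i-j|}, and solve it
        n = len(v)
        M = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                M[i, abs(i - j)] += v[j]
        t, *_ = np.linalg.lstsq(M, lam * v, rcond=None)
        return toeplitz(t)

    T = toeplitz([4.0, 1.0, 0.5, 0.2])
    lam, V = np.linalg.eigh(T)
    T_rec = toeplitz_from_eigenpair(lam[0], V[:, 0])
    print(np.allclose(T_rec @ V[:, 0], lam[0] * V[:, 0]))   # True

The least-squares solver returns the minimum-norm solution when the system admits infinitely many, consistent with the situations noted in the abstract.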
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
NASA Technical Reports Server (NTRS)
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
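Chahine's relaxation scales each unknown by the ratio of measured to computed signal at its associated measurement level; for a triangular kernel with nonzero diagonal this iteration homes in on the solution level by level. A toy sketch (illustrative kernel and values; Python with NumPy assumed):

    import numpy as np

    def chahine(K, y, x0, iters=200):
        # multiplicative relaxation: x_i <- x_i * y_i / (K x)_i
        x = x0.copy()
        for _ in range(iters):
            x *= y / (K @ x)
        return x

    n = 6
    K = np.tril(np.ones((n, n)))          # lower-triangular kernel, nonzero diagonal
    x_true = np.array([1.0, 2.0, 1.5, 0.5, 1.2, 0.8])
    y = K @ x_true
    x_est = chahine(K, y, x0=np.ones(n))  # converges toward x_true

Note that the first unknown is recovered exactly after one sweep and the correction then cascades down the triangle, which is the intuition behind the convergence result for triangular kernels.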
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in real seismic wavefields, making inversion harder. As a result, the accuracy of the final inversion result relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck in FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover the low-wavenumber model with a demodulation operator (envelope operator), even though such low-frequency data do not really exist in the field records. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and the misfit function and the corresponding gradient operator were derived. Then we performed hybrid-domain FWI with the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. At the first level, the inversion tasks are decomposed and assigned to computation nodes by shot number. At the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and the CPU/GPU heterogeneous parallel computation substantially improves computational performance.
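The demodulation (envelope) operator at the heart of envelope inversion can be written via the analytic signal; a minimal sketch (Python with NumPy/SciPy assumed; the toy trace is illustrative):

    import numpy as np
    from scipy.signal import hilbert

    t = np.linspace(0.0, 1.0, 1000)
    carrier = np.sin(2 * np.pi * 30.0 * t)            # band-limited wavelet energy
    modulation = np.exp(-((t - 0.5) ** 2) / 0.005)    # slowly varying structure
    trace = modulation * carrier

    envelope = np.abs(hilbert(trace))                 # analytic-signal magnitude

The envelope varies on the slow time scale of the modulation, which is how ultra-low-frequency information is extracted from a record containing no energy below a few hertz.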
Computational inverse methods of heat source in fatigue damage problems
NASA Astrophysics Data System (ADS)
Chen, Aizhou; Li, Yuan; Yan, Bo
2018-04-01
Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models is a new approach to computing fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat-source solution methods for the fatigue process; it discusses the prospects of applying inverse heat-source methods in the fatigue damage field and lays a foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.
Large-N -approximated field theory for multipartite entanglement
NASA Astrophysics Data System (ADS)
Facchi, P.; Florio, G.; Parisi, G.; Pascazio, S.; Scardicchio, A.
2015-12-01
We try to characterize the statistics of multipartite entanglement of the random states of an n-qubit system. Unable to solve the problem exactly, we generalize it, replacing complex numbers with real vectors with Nc components (the original problem is recovered for Nc = 2). Studying the leading diagrams in the large-Nc approximation, we unearth the presence of a phase transition and, in an explicit example, show that the so-called entanglement frustration disappears in the large-Nc limit.
A fast rebinning algorithm for 3D positron emission tomography using John's equation
NASA Astrophysics Data System (ADS)
Defrise, Michel; Liu, Xuan
1999-08-01
Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.
Efficient bias correction for magnetic resonance image denoising.
Mukherjee, Partha Sarathi; Qiu, Peihua
2013-05-30
Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
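The bias the authors address can be reproduced in a few lines: under the Rician model the observed magnitude over-estimates the true intensity, and even a classical second-moment correction (shown here; this is not the regression-based formula proposed in the paper) illustrates the repair (Python with NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(4)
    A, sigma, n = 2.0, 1.0, 200_000        # true intensity and noise level

    # Rician observation: magnitude of signal-plus-noise in two channels
    obs = np.hypot(A + sigma * rng.normal(size=n), sigma * rng.normal(size=n))

    print(obs.mean() - A)                  # positive: the raw magnitude is biased
    # classical moment correction, using E[M^2] = A^2 + 2 sigma^2
    print(np.sqrt(max(np.mean(obs**2) - 2.0 * sigma**2, 0.0)))   # close to A

At low signal-to-noise ratio the bias grows, which is why a careful correction matters for subsequent image analysis.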
The true quantum face of the "exponential" decay: Unstable systems in rest and in motion
NASA Astrophysics Data System (ADS)
Urbanowski, K.
2017-12-01
Results of theoretical studies and numerical calculations presented in the literature suggest that the survival probability P0(t) has the exponential form starting from times much smaller than the lifetime τ up to times t ⪢ τ, and that P0(t) exhibits inverse power-law behavior at the late-time region for times longer than the so-called crossover time T ⪢ τ (the crossover time T is the time when the late-time deviations of P0(t) from the exponential form begin to dominate). A more detailed analysis of the problem shows that in fact the survival probability P0(t) cannot take the pure exponential form at any time interval, including times smaller than or of the order of the lifetime τ, and that it has an oscillating form. We also study the survival probability of moving relativistic unstable particles with definite momentum. These studies show that the late-time deviations of the survival probability of these particles from the exponential-like form of the decay law, that is, the transition-time region between the exponential-like and non-exponential forms of the survival probability, should occur much earlier than follows from the standard classical considerations.
Ontiveros, Jesús F; Pierlot, Christel; Catté, Marianne; Molinier, Valérie; Salager, Jean-Louis; Aubry, Jean-Marie
2015-06-15
The Phase Inversion Temperature of a reference C10E4/n-Octane/Water system exhibits a quasi-linear variation versus the mole fraction of a second surfactant S2 added to the mixture. This variation was recently proposed as a classification tool to quantify the Hydrophilic-Lipophilic Balance (HLB) of commercial surfactants. The feasibility of the so-called PIT-slope method for a wide range of well-defined non-ionic and ionic surfactants is investigated. The comparison of various surfactants having the same dodecyl chain tail allows the polar head hydrophilicity to be ranked as: SO3Na⩾SO4Na⩾NMe3Br>E2SO3Na≈CO2Na⩾E1SO3Na⩾PhSO3Na>Isosorbide(exo)SO4Na≫Isosorbide(endo)SO4Na≫E8⩾NMe2O>E7>E6⩾Glucosyl>E5⩾Diglyceryl⩾E4>E3>E2≈Isosorbide(exo)>Glyceryl>Isosorbide(endo). The influence on the surfactant HLB of other structural parameters, i.e. hydrophobic chain length, unsaturation, replacement of the Na(+) counterion by K(+), and isomerism, is also investigated. Finally, the method is successfully used to predict the optimal formulation of a new bio-based surfactant, 1-O-dodecyldiglycerol, when performing an oil scan at 25 °C. Copyright © 2015 Elsevier Inc. All rights reserved.
Radiative transfer through terrestrial atmosphere and ocean: Software package SCIATRAN
NASA Astrophysics Data System (ADS)
Rozanov, V. V.; Rozanov, A. V.; Kokhanovsky, A. A.; Burrows, J. P.
2014-01-01
SCIATRAN is a comprehensive software package for the modeling of radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18 - 40 μm) including multiple scattering processes, polarization, thermal emission and ocean-atmosphere coupling. The software is capable of modeling spectral and angular distributions of the intensity or the Stokes vector of the transmitted, scattered, reflected, and emitted radiation assuming either a plane-parallel or a spherical atmosphere. Simulations are done either in the scalar or in the vector mode (i.e. accounting for the polarization) for observations by space-, air-, ship- and balloon-borne, ground-based, and underwater instruments in various viewing geometries (nadir, off-nadir, limb, occultation, zenith-sky, off-axis). All significant radiative transfer processes are accounted for. These are, e.g. the Rayleigh scattering, scattering by aerosol and cloud particles, absorption by gaseous components, and bidirectional reflection by an underlying surface including Fresnel reflection from a flat or roughened ocean surface. The software package contains several radiative transfer solvers including finite difference and discrete-ordinate techniques, an extensive database, and a specific module for solving inverse problems. In contrast to many other radiative transfer codes, SCIATRAN incorporates an efficient approach to calculate the so-called Jacobians, i.e. derivatives of the intensity with respect to various atmospheric and surface parameters. In this paper we discuss numerical methods used in SCIATRAN to solve the scalar and vector radiative transfer equation, describe databases of atmospheric, oceanic, and surface parameters incorporated in SCIATRAN, and demonstrate how to solve some selected radiative transfer problems using the SCIATRAN package. During the last decades, a lot of studies have been published demonstrating that SCIATRAN is a valuable tool for a wide range of remote sensing applications. Here, we present some selected comparisons of SCIATRAN simulations to published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship instruments. Methods for solving inverse problems related to remote sensing of the Earth's atmosphere using the SCIATRAN software are outside the scope of this study and will be discussed in a follow-up paper. The SCIATRAN software package along with a detailed User's Guide is freely available for non-commercial use via the webpage of the Institute of Environmental Physics (IUP), University of Bremen: http://www.iup.physik.uni-bremen.de/sciatran.
A Random Variable Related to the Inversion Vector of a Partial Random Permutation
ERIC Educational Resources Information Center
Laghate, Kavita; Deshpande, M. N.
2005-01-01
In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.
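Under one standard definition, entry j of the inversion vector counts the earlier entries that exceed the j-th one; a tiny sketch (definition conventions vary, so this is one common choice, not necessarily the article's):

    def inversion_vector(perm):
        # count, for each position j, the earlier entries larger than perm[j]
        return [sum(e > perm[j] for e in perm[:j]) for j in range(len(perm))]

    print(inversion_vector([3, 1, 4, 2]))        # [0, 1, 0, 2]
    print(sum(inversion_vector([3, 1, 4, 2])))   # 3 inversions in total

The sum of the entries, the random variable studied in the article, equals the total number of inversions of the permutation.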
Russo, R
1999-04-01
Y2K issues could affect anyone, so it is important for HIV-positive people to make preparations to ensure their health and security. There are wide-ranging opinions about the effects of the so-called Millennium Bug, but some planning will make any changes more manageable. Patients should have adequate supplies of food, water, and medications at home in case of shortages or production problems. Other prudent steps include keeping extra cash on hand and obtaining copies of medical records and benefit plans. Internet resources are listed.
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each of the submitted papers has been reviewed by two reviewers. There have been 15 accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.
Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S
2014-09-01
Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.
Petrini, Carlo
2011-01-01
A sound evaluation of every bioethical problem should be predicated on a careful analysis of at least two basic elements: (i) reliable scientific information and (ii) the ethical principles and values at stake. A thorough evaluation of both elements also calls for a careful examination of statements by authoritative institutions. Unfortunately, in the case of medically complex living donors neither element gives clear-cut answers to the ethical problems raised. Likewise, institutional documents frequently offer only general criteria, which are not very helpful when making practical choices. This paper first introduces a brief overview of scientific information, ethical values, and institutional documents; the notions of “acceptable risk” and “minimal risk” are then briefly examined, with reference to the problem of medically complex living donors. The so-called precautionary principle and the value of solidarity are then discussed as offering a possible approach to the ethical problem of medically complex living donors.
Children's Understanding of the Inverse Relation between Multiplication and Division
ERIC Educational Resources Information Center
Robinson, Katherine M.; Dube, Adam K.
2009-01-01
Children's understanding of the inversion concept in multiplication and division problems (i.e., that on problems of the form "d multiplied by e/e" no calculations are required) was investigated. Children in Grades 6, 7, and 8 completed an inversion problem-solving task, an assessment of procedures task, and a factual knowledge task of simple…
A Volunteer Computing Project for Solving Geoacoustic Inversion Problems
NASA Astrophysics Data System (ADS)
Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya
2017-12-01
A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem is well suited to volunteer computing because it can be easily decomposed into independent simpler subproblems.
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network, a multi-layer perceptron, whose connection weights are computed with the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the square difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E. F. P. da Luz, H. F. de Campos Velho, J. C. Becceneri, D. R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
Optimal Facility Location Tool for Logistics Battle Command (LBC)
2015-08-01
The report is motivated by the question of where city planners should have located emergency service facilities so that all households (the demand) had equal access to coverage. The tool is written in the Visual Basic for Applications (VBA) programming language (Appendix B contains the VBA code) and uses CPLEX, a commercial solver for linear, integer, and mixed-integer linear programming problems.
Fund the Child: Tackling Inequity & Antiquity in School Finance
ERIC Educational Resources Information Center
Thomas B. Fordham Foundation & Institute, 2006
2006-01-01
Education funding today is a mess, and a solution is needed that addresses its biggest problems: most disadvantaged students do not receive the funding they need; red tape and overhead waste time and money; and new types of education options, like charter schools, are starved for dollars. Unfortunately, until now, so-called solutions have…
Lifelong Learning and the New Economy: Limitations of a Market Model
ERIC Educational Resources Information Center
Cruikshank, Jane
2008-01-01
What kind of workplace has the so-called "new economy" created? What problems are Canadian workers experiencing? How effective are Canada's lifelong learning policies that focus on high skills development for global competitiveness? These questions were explored as part of a three year research program. During the 2003-2004 academic…
Similarity Theory and Dimensionless Numbers in Heat Transfer
ERIC Educational Resources Information Center
Marin, E.; Calderon, A.; Delgado-Vasallo, O.
2009-01-01
We present basic concepts underlying the so-called similarity theory that in our opinion should be explained in basic undergraduate general physics courses when dealing with heat transport problems, in particular with those involving natural or free convection. A simple example is described that can be useful in showing a criterion for neglecting…
Three Problems with the Connectivist Conception of Learning
ERIC Educational Resources Information Center
Clarà, M.; Barberà, E.
2014-01-01
Connectivism, which has been argued to be a new learning theory, has emerged in the field of online learning during the last decade. On the World Wide Web at least, connectivism promises to establish learning spaces similar to those that Ivan Illich imagined in "Deschooling Society", through so-called massive online open courses (MOOCs).…
Test Diagnosing of Learning Activity
ERIC Educational Resources Information Center
Yavich, Roman; Gein, Alexander; Gerkerova, Alexandra
2016-01-01
The technology of criteria-oriented testing enhanced by the reflexive components is suggested in this article. Tests made according to this technology are called academic activity tests. The student chooses or formulates not the answer to the problem but an action that is productive in his opinion. So, this type of tests helps not only check the…
The Worst of Both Worlds: How U.S. and U.K. Models Are Influencing Australian Education
ERIC Educational Resources Information Center
Dinham, Stephen
2015-01-01
This commentary explores the so-called global "crisis" in education and the corresponding pressures and moves to "reform" education, and in particular, public education. The myths underpinning and driving these developments are examined. Supposed problems with (public) education and proposed solutions are explored. The…
Frontal Deficits in Alcoholism: An ERP Study
ERIC Educational Resources Information Center
George, Mary Reeni M.; Potts, Geoffrey; Kothman, Delia; Martin, Laura; Mukundan, C. R.
2004-01-01
Alcoholism is a major health problem afflicting people all over the world. Understanding the neural substrates of this addictive disorder may provide the basis for effective interventions. So-called ''executive processes'' play a role in cognitive functions like attention and working memory, and appear to be disrupted in alcoholism (Noel et al.,…
Strategic Defense Initiative: Splendid Defense or Pipe Dream? Headline Series No. 275.
ERIC Educational Resources Information Center
Armstrong, Scott; Grier, Peter
This pamphlet presents a discussion of the various components of President Reagan's Strategic Defense Initiative (SDI) including the problem of pulling together various new technologies into an effective defensive system and the politics of the so-called "star wars" system. An important part of the defense initiative is the…
Potential and Problems of Existing Creativity and Innovation Indices
ERIC Educational Resources Information Center
Hoelscher, Michael; Schubert, Julia
2015-01-01
Creativity and innovation are important inputs in the global knowledge economy. However, while the theoretical concepts and the measurement of creativity on the individual level have made considerable progress during the last decades, so-called sectoral approaches to measuring creativity and innovation on the level of aggregate units are less well…
Bridging the Digital Divide in the Schools of Developing Countries
ERIC Educational Resources Information Center
Tiene, Drew
2004-01-01
The so-called "digital divide" problem, significant disparities in access to technology between the affluent and impoverished, is a global phenomenon that is most serious in the poorest parts of the world. The millions who struggle daily for enough food, clothing, housing, and transportation, are unable to afford the hardware, software and service…
Mathematical Modeling of Language Games
NASA Astrophysics Data System (ADS)
Loreto, Vittorio; Baronchelli, Andrea; Puglisi, Andrea
In this chapter we explore several language games of increasing complexity. We first consider the so-called Naming Game, possibly the simplest example of the complex processes leading progressively to the establishment of human-like languages. In this framework, a globally shared vocabulary emerges as a result of local adjustments of individual word-meaning associations. The emergence of a common vocabulary represents only a first stage; it is also interesting to investigate the emergence of higher forms of agreement, e.g., compositionality, categories, syntactic or grammatical structures. As an example in this direction we consider the so-called Category Game. Here one focuses on the process by which a population of individuals manages to categorize a single perceptually continuous channel. The emergence of a discrete shared set of categories out of a continuous perceptual channel is a notoriously difficult problem, relevant for color categorization, vowel formation, etc. The central result here is the emergence of a hierarchical category structure made of two distinct levels: a basic layer, responsible for fine discrimination of the environment, and a shared linguistic layer that groups perceptions together to guarantee communicative success.
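A minimal Naming Game simulation makes the local adjustment rule concrete (a sketch of the standard minimal rules; the agent count and step budget are arbitrary choices):

    import random

    def naming_game(n_agents=50, steps=20000, seed=0):
        rng = random.Random(seed)
        vocab = [set() for _ in range(n_agents)]
        next_word = 0
        for _ in range(steps):
            s, h = rng.sample(range(n_agents), 2)    # speaker and hearer
            if not vocab[s]:                         # invent a word if needed
                vocab[s].add(next_word)
                next_word += 1
            word = rng.choice(sorted(vocab[s]))
            if word in vocab[h]:
                vocab[s] = {word}                    # success: both collapse
                vocab[h] = {word}
            else:
                vocab[h].add(word)                   # failure: hearer learns it
        return vocab

    final = naming_game()
    print(len(set().union(*final)))                  # typically 1: consensus

Consensus on a single shared word is the simplest instance of the globally shared vocabulary discussed above.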
Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Dȩbski, Wojciech
2008-07-01
Many aspects of earthquake source dynamics, like dynamic stress drop, rupture velocity and directivity, etc., are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process, with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods that resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia.
Inverse odds ratio-weighted estimation for causal mediation analysis.
Tchetgen Tchetgen, Eric J
2013-11-20
An important scientific goal of studies in the health and social sciences is increasingly to determine to what extent the total effect of a point exposure is mediated by an intermediate variable on the causal pathway between the exposure and the outcome. A causal framework has recently been proposed for mediation analysis, which gives rise to new definitions, formal identification results and novel estimators of direct and indirect effects. In the present paper, the author describes a new inverse odds ratio-weighted approach to estimate so-called natural direct and indirect effects. The approach, which uses as a weight the inverse of an estimate of the odds ratio function relating the exposure and the mediator, is universal in that it can be used to decompose total effects in a number of regression models commonly used in practice. Specifically, the approach may be used for effect decomposition in generalized linear models with a nonlinear link function, and in a number of other commonly used models such as the Cox proportional hazards regression for a survival outcome. The approach is simple and can be implemented in standard software provided a weight can be specified for each observation. An additional advantage of the method is that it easily incorporates multiple mediators of a categorical, discrete or continuous nature. Copyright © 2013 John Wiley & Sons, Ltd.
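A rough sketch of the weighting step only, assuming a simple logistic model for the exposure; the variable names (`A` exposure, `M` mediator, `C` covariate) and the single-mediator model form are illustrative assumptions, and the subsequent weighted outcome regression that completes the effect decomposition is not shown:

```python
import numpy as np
import statsmodels.api as sm

def iorw_weights(A, M, C):
    # Model the exposure given mediator and covariates: logit P(A=1 | M, C)
    X = sm.add_constant(np.column_stack([M, C]))
    fit = sm.Logit(A, X).fit(disp=0)
    beta_m = fit.params[1]                        # coefficient on the mediator M
    odds_ratio = np.exp(beta_m * np.asarray(M))   # odds ratio function in M
    # Exposed subjects get the inverse odds ratio as weight; unexposed keep 1
    return np.where(np.asarray(A) == 1, 1.0 / odds_ratio, 1.0)
```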
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
2014-07-31
Large-scale magnetic fields have been observed in widely different types of astrophysical objects. These magnetic fields are believed to be caused by the so-called dynamo effect. Could a large-scale magnetic field grow out of turbulence (i.e. the alpha dynamo effect)? How could the topological properties and the complexity of the magnetic field as a global quantity, the so-called magnetic helicity, be important in the dynamo effect? In addition to understanding the dynamo mechanism in astrophysical accretion disks, anomalous angular momentum transport has also been a longstanding problem in accretion disks and laboratory plasmas. To investigate both dynamo and momentum transport, we have performed both numerical modeling of laboratory experiments that are intended to simulate nature and modeling of configurations with direct relevance to astrophysical disks. Our simulations use fluid approximations (the magnetohydrodynamics (MHD) model), where the plasma is treated as a single fluid, or two fluids, in the presence of electromagnetic forces. Our major physics objective is to study the possibility of magnetic field generation (the so-called MRI small-scale and large-scale dynamos) and its role in magneto-rotational instability (MRI) saturation through nonlinear simulations in both MHD and Hall regimes.
On the theoretical description of weakly charged surfaces.
Wang, Rui; Wang, Zhen-Gang
2015-03-14
It is widely accepted that the Poisson-Boltzmann (PB) theory provides a valid description for charged surfaces in the so-called weak coupling limit. Here, we show that the image charge repulsion creates a depletion boundary layer that cannot be captured by a regular perturbation approach. The correct weak-coupling theory must include the self-energy of the ion due to the image charge interaction. The image force qualitatively alters the double layer structure and properties, and gives rise to many non-PB effects, such as nonmonotonic dependence of the surface energy on concentration and charge inversion. In the presence of dielectric discontinuity, there is no limiting condition for which the PB theory is valid.
NASA Technical Reports Server (NTRS)
Hayati, Samad; Tso, Kam; Roston, Gerald
1988-01-01
Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial link manipulator with an arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which usually is not a so-called simple robot. Experimental results obtained with a PUMA 560 and simple measurement hardware are presented. As a result of this calibration, a precision move command was designed, integrated into a robot language, RCCL, and used in the NASA Telerobot Testbed.
Stability, Higgs boson mass, and new physics.
Branchina, Vincenzo; Messina, Emanuele
2013-12-13
Assuming that the particle with mass ∼126 GeV discovered at the LHC is the standard model Higgs boson, we find that the stability of the electroweak (EW) vacuum strongly depends on new physics interactions at the Planck scale M_P, despite the fact that they are higher-dimensional interactions, apparently suppressed by inverse powers of M_P. In particular, for the present experimental values of the top and Higgs boson masses, if τ is the lifetime of the EW vacuum, new physics can turn τ from τ ≫ T_U to τ ≪ T_U, where T_U is the age of the Universe, thus weakening the conclusions of the so-called metastability scenario.
Monostable superrepellent materials
NASA Astrophysics Data System (ADS)
Li, Yanshen; Quéré, David; Lv, Cunjing; Zheng, Quanshui
2017-03-01
Superrepellency is an extreme situation where liquids stay at the tops of rough surfaces, in the so-called Cassie state. Owing to the dramatic reduction of solid/liquid contact, such states lead to many applications, such as antifouling, droplet manipulation, hydrodynamic slip, and self-cleaning. However, superrepellency is often destroyed by impalement transitions triggered by environmental disturbances whereas inverse transitions are not observed without energy input. Here we show through controlled experiments the existence of a “monostable” region in the phase space of surface chemistry and roughness, where transitions from Cassie to (impaled) Wenzel states become spontaneously reversible. We establish the condition for observing monostability, which might guide further design and engineering of robust superrepellent materials.
A calculus based on a q-deformed Heisenberg algebra
Cerchiai, B. L.; Hinterding, R.; Madore, J.; ...
1999-04-27
We show how one can construct a differential calculus over an algebra where position variables x and momentum variables p have been defined. As the simplest example we consider the one-dimensional q-deformed Heisenberg algebra. This algebra has a subalgebra generated by x and its inverse, which we call the coordinate algebra. A physical field is considered to be an element of the completion of this algebra. We can construct a derivative which leaves the coordinate algebra invariant and so takes physical fields into physical fields. A generalized Leibniz rule for this algebra can be found. Based on this derivative, differential forms and an exterior differential calculus can be constructed.
Reverse engineering and identification in systems biology: strategies, perspectives and challenges.
Villaverde, Alejandro F; Banga, Julio R
2014-02-06
The interplay of mathematical modelling with experiments is one of the central elements in systems biology. The aim of reverse engineering is to infer, analyse and understand, through this interplay, the functional and regulatory mechanisms of biological systems. Reverse engineering is not exclusive of systems biology and has been studied in different areas, such as inverse problem theory, machine learning, nonlinear physics, (bio)chemical kinetics, control theory and optimization, among others. However, it seems that many of these areas have been relatively closed to outsiders. In this contribution, we aim to compare and highlight the different perspectives and contributions from these fields, with emphasis on two key questions: (i) why are reverse engineering problems so hard to solve, and (ii) what methods are available for the particular problems arising from systems biology?
Chen, Ying-ping; Chen, Chao-Hong
2010-01-01
An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back-end optimization engine. As a result, the proposed framework can be considered a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions, on which ECGA with SoD is compared against ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
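A hedged sketch of the splitting step as the abstract describes it (random split whenever an interval holds more search points than a threshold); the recursion bounds and the handling of the iteration-dependent threshold decay are assumptions, not the paper's exact procedure:

```python
import numpy as np

def split_on_demand(points, lo, hi, threshold, rng, leaves):
    """Recursively split [lo, hi) at a random position while it contains
    more than `threshold` search points; collect nonempty leaf intervals."""
    inside = points[(points >= lo) & (points < hi)]
    if inside.size == 0:
        return
    if inside.size <= threshold:
        leaves.append((lo, hi))
        return
    cut = rng.uniform(lo, hi)
    split_on_demand(points, lo, cut, threshold, rng, leaves)
    split_on_demand(points, cut, hi, threshold, rng, leaves)

def discretize(points, lo, hi, threshold, seed=0):
    """Assign integer codes to the leaf intervals (points assumed in [lo, hi))."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    leaves = []
    split_on_demand(pts, lo, hi, threshold, rng, leaves)
    codes = np.empty(pts.size, dtype=int)
    for code, (a, b) in enumerate(sorted(leaves)):
        codes[(pts >= a) & (pts < b)] = code
    return codes, sorted(leaves)
```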
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory's INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus the time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
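The Gelman-Rubin metric itself is standard and easy to state; here is a minimal version for m parallel chains of one scalar parameter. The report's integration with INVERSE is not reproduced, and the 1.1 cutoff in the comment is a common convention, an assumption here:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m, n) array of m chains."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)               # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()         # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n             # pooled variance estimate
    return np.sqrt(var_hat / W)

# e.g. stop adding transport calculations once gelman_rubin(chains) < 1.1
```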
NASA Astrophysics Data System (ADS)
Hysell, D. L.; Varney, R. H.; Vlasov, M. N.; Nossa, E.; Watkins, B.; Pedersen, T.; Huba, J. D.
2012-02-01
The electron energy distribution during an F region ionospheric modification experiment at the HAARP facility near Gakona, Alaska, is inferred from spectrographic airglow emission data. Emission lines at 630.0, 557.7, and 844.6 nm are considered, along with the absence of detectable emissions at 427.8 nm. Estimating the electron energy distribution function from the airglow data is a problem in classical linear inverse theory. We describe an augmented version of the method of Backus and Gilbert which we use to invert the data. The method optimizes the model resolution, i.e., the precision of the mapping between the actual electron energy distribution and its estimate. Here, the method has also been augmented so as to limit the model prediction error. Model estimates of the suprathermal electron energy distribution versus energy and altitude are incorporated in the inverse problem formulation as representer functions. Our methodology indicates a heater-induced electron energy distribution with a broad peak near 5 eV that decreases approximately exponentially by 30 dB between 5 and 50 eV.
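For orientation, a minimal (non-augmented) Backus-Gilbert sketch: at a target energy e0, choose coefficients that give a unit-area averaging kernel with minimal spread. The kernel array `g`, the grid, and the damping term `lam` (a crude stand-in for the prediction-error control the authors add) are all assumptions:

```python
import numpy as np

def backus_gilbert_coeffs(g, e, e0, lam=1e-6):
    """g: (n_data, n_grid) data kernels sampled on energy grid e (1D array).
    Returns coefficients a so that estimate(e0) = a @ data."""
    w = np.gradient(e)                        # crude quadrature weights
    S = ((e - e0) ** 2 * w * g) @ g.T         # spread matrix S_jk
    S += lam * np.eye(g.shape[0])             # damping: limits prediction error
    u = g @ w                                 # u_k = integral of g_k over e
    Sinv_u = np.linalg.solve(S, u)
    return Sinv_u / (u @ Sinv_u)              # enforces a unit-area kernel
```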
Berry, Roberta M; Borenstein, Jason; Butera, Robert J
2013-06-01
This manuscript describes a pilot study in ethics education employing a problem-based learning approach to the study of novel, complex, ethically fraught, unavoidably public, and unavoidably divisive policy problems, called "fractious problems," in bioscience and biotechnology. Diverse graduate and professional students from four US institutions and disciplines spanning science, engineering, humanities, social science, law, and medicine analyzed fractious problems employing "navigational skills" tailored to the distinctive features of these problems. The students presented their results to policymakers, stakeholders, experts, and members of the public. This approach may provide a model for educating future bioscientists and bioengineers so that they can meaningfully contribute to the social understanding and resolution of challenging policy problems generated by their work.
Genetics Home Reference: core binding factor acute myeloid leukemia
... the CBFB gene. One such rearrangement, called an inversion, involves breakage of a chromosome in two places; ... is reversed and reinserted into the chromosome. The inversion involved in CBF-AML (written as inv(16)) ...
Spontaneously broken spacetime symmetries and the role of inessential Goldstones
NASA Astrophysics Data System (ADS)
Klein, Remko; Roest, Diederik; Stefanyszyn, David
2017-10-01
In contrast to internal symmetries, there is no general proof that the coset construction for spontaneously broken spacetime symmetries leads to universal dynamics. One key difference lies in the role of the Goldstone bosons, which for spacetime symmetries include a subset that is inessential for the non-linear realisation and hence can be eliminated. In this paper we address two important issues that arise when eliminating inessential Goldstones. The first concerns the elimination itself, which is often performed by imposing so-called inverse Higgs constraints. Contrary to claims in the literature, there is a series of conditions on the structure constants which must be satisfied to employ the inverse Higgs phenomenon, and we discuss which parametrisation of the coset element is the most effective in this regard. We also consider generalisations of the standard inverse Higgs constraints, which can include integrating out inessential Goldstones at low energies, and prove that under certain assumptions these give rise to identical effective field theories for the essential Goldstones. Secondly, we consider mappings between non-linear realisations that differ both in the coset element and the algebra basis. While these can always be related to each other by a point transformation, remarkably, the inverse Higgs constraints are not necessarily mapped onto each other under this transformation. We discuss the physical implications of this non-mapping, with a particular emphasis on the coset space corresponding to the spontaneous breaking of the anti-de Sitter isometries by a Minkowski probe brane.
Tracking cells in Life Cell Imaging videos using topological alignments.
Mosig, Axel; Jäger, Stefan; Wang, Chaofeng; Nath, Sumit; Ersoy, Ilker; Palaniappan, Kannappan; Chen, Su-Shing
2009-07-16
With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells - many algorithms tend to recognize one cell as several cells or vice versa. We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
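The degenerate one-to-one case of the linking step can be written in a few lines; this sketch uses a plain maximum-weight assignment on overlap scores, whereas the paper's actual formulation matches sets of segments across hierarchies via an integer linear program:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(overlap):
    """overlap[i, j]: overlap score between segment i of frame t and
    segment j of frame t+1. Returns matched (i, j) pairs."""
    overlap = np.asarray(overlap, dtype=float)
    cost = -overlap                                # maximize = minimize negative
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if overlap[i, j] > 0]
```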
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces, a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile, some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven, and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results, a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type and other regularization methods in Banach spaces is provided by assumptions in the form of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations; thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a highly relevant practical issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of the approximate inverse to the practically relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with a general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
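As background to the several Landweber-type contributions surveyed above, a minimal classical (Hilbert space) Landweber iteration for a linear system A x = y, with stopping by the discrepancy principle; the Banach-space variants discussed in this section replace the adjoint step by duality mappings, which this sketch deliberately omits:

```python
import numpy as np

def landweber(A, y, delta, tau=None, max_iter=5000):
    """x_{k+1} = x_k + tau * A^T (y - A x_k), with 0 < tau < 2 / ||A||^2;
    stop once the residual falls to the noise level delta (discrepancy rule)."""
    A = np.asarray(A, dtype=float)
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = y - A @ x
        if np.linalg.norm(r) <= 1.1 * delta:      # the factor 1.1 is conventional
            break
        x = x + tau * A.T @ r
    return x
```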
Quantum vacuum energy in general relativity
NASA Astrophysics Data System (ADS)
Henke, Christian
2018-02-01
The paper deals with the scale discrepancy between the observed vacuum energy in cosmology and the theoretical quantum vacuum energy (the cosmological constant problem). Here, we demonstrate that Einstein's equation and an analogy to particle physics lead to the first physical justification of the so-called fine-tuning problem. This fine-tuning could be automatically satisfied with the variable cosmological term Λ(a) = Λ_0 + Λ_1 a^{-(4-ε)}, 0 < ε ≪ 1, where a is the scale factor. As a side effect of our solution of the cosmological constant problem, the dynamical part of the cosmological term generates an attractive force and solves the missing mass problem of dark matter.
Efficient numerical method for solving Cauchy problem for the Gamma equation
NASA Astrophysics Data System (ADS)
Koleva, Miglena N.
2011-12-01
In this work we consider the Cauchy problem for the so-called Gamma equation, derived by transforming the fully nonlinear Black-Scholes equation for the option price into a quasilinear parabolic equation for the second derivative (Greek) Γ = V_SS of the option price V. We develop an efficient numerical method for solving the model problem with different volatility terms. Using a suitable change of variables, the problem is transformed onto a finite interval while keeping the original behavior of the solution at infinity. We then construct a Picard-Newton algorithm with an adaptive mesh step in time, which can also be applied in the case of non-differentiable functions. Results of numerical simulations are given.
NASA Astrophysics Data System (ADS)
Guseinov, I. M.; Khanmamedov, A. Kh.; Mamedova, A. F.
2018-04-01
We consider the Schrödinger equation with an additional quadratic potential on the entire axis and use the transformation operator method to study the direct and inverse problems of the scattering theory. We obtain the main integral equations of the inverse problem and prove that the basic equations are uniquely solvable.
Assessing non-uniqueness: An algebraic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, Don W.
Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.
NASA Astrophysics Data System (ADS)
Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.
2015-12-01
We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
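The randomized trace estimation step mentioned above is a standard ingredient that can be sketched independently of the PDE machinery; `apply_C`, a black-box action of the posterior covariance (realized in the paper through PDE solves), is a placeholder:

```python
import numpy as np

def hutchinson_trace(apply_C, dim, n_probes=50, seed=0):
    """tr(C) ~ (1/N) sum_i z_i^T C z_i with Rademacher probes z_i; only
    matrix-vector products with C are needed, never C itself."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=dim)     # random +/-1 probe vector
        total += z @ apply_C(z)
    return total / n_probes
```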
Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale
NASA Astrophysics Data System (ADS)
Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.
2005-12-01
Testing geodynamical models is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed, and no magic recipe exists. This contribution first presents different methods that have been used to test thermo-mechanical modeling results at upper crustal, lithospheric and upper mantle scales, using three geodynamical examples: the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scales (e.g. the Gulf of Corinth), the resolution of the numerical models is usually sufficient to capture the timing and kinematics of the faults precisely enough to be tested by tectono-stratigraphic arguments. In actively deforming areas, microseismicity can be compared to the effective rheology, and the P and T axes of the focal mechanisms can be compared with the local orientation of the major component of the stress tensor. At the lithospheric scale, the resolution of the models no longer permits constraining the models by direct observations (i.e. structural data from the field or seismic reflection). Instead, synthetic P-T-t paths may be computed and compared to natural ones in terms of exhumation rates for ancient orogens. Topography may also help, but on continents it mainly depends on erosion laws that are complicated to constrain. Deeper in the mantle, the only available constraints are long-wavelength topographic data and tomographic "data". The major problem to overcome now, at lithospheric and upper mantle scales, is that the so-called "data" actually result from inverse models of the real data, and those inverse models are based on synthetic models. Post-processing P and S wave velocities is not sufficient to make testable predictions at upper mantle scale. Instead, direct wave propagation models must be computed. This allows checking whether the differences between two models constitute a testable prediction or not. On the longer term, we may be able to use those synthetic models to reduce the residual in the inversion of elastic wave arrival times.
NASA Astrophysics Data System (ADS)
Sayevand, K.; Pichaghchi, K.
2018-04-01
In this paper, we are concerned with the description of singularly perturbed boundary value problems in the scope of fractional calculus. We should mention that one of the main methods used to solve these problems in classical calculus is the so-called matched asymptotic expansion method. However, this is not achievable via the existing classical definitions of the fractional derivative, because they do not obey the chain rule, which is one of the key elements of the matched asymptotic expansion method. In order to accommodate this method to fractional derivatives, we employ a relatively new derivative, the so-called local fractional derivative. Using the properties of the local fractional derivative, we extend the matched asymptotic expansion method to the scope of fractional calculus and introduce a reliable new algorithm to develop approximate solutions of singularly perturbed boundary value problems of fractional order. In the new method, the original problem is partitioned into inner and outer solution equations. The reduced equation is solved with suitable boundary conditions, which provide the terminal boundary conditions for the boundary layer correction. The inner solution problem is next solved as a solvable boundary value problem. The width of the boundary layer is approximated using an appropriate resemblance function. Some theoretical results are established and proved. Some illustrative examples are solved, and the results are compared with those of the matched asymptotic expansion method and the homotopy analysis method to demonstrate the accuracy and efficiency of the method. It can be observed that the proposed method approximates the exact solution very well, not only in the boundary layer but also away from it.
NASA Astrophysics Data System (ADS)
Neustupa, Tomáš
2017-07-01
The paper presents a mathematical model of steady 2-dimensional viscous incompressible flow through a radial blade machine. The corresponding boundary value problem is studied in the rotating frame. We provide the classical and weak formulations of the problem. Using a special form of the so-called "artificial" or "natural" boundary condition on the outflow, we prove the existence of a weak solution for an arbitrarily large inflow.
One-dimensional Coulomb problem in Dirac materials
NASA Astrophysics Data System (ADS)
Downing, C. A.; Portnoi, M. E.
2014-11-01
We investigate the one-dimensional Coulomb potential with application to a class of quasirelativistic systems, so-called Dirac-Weyl materials, described by matrix Hamiltonians. We obtain the exact solution of the shifted and truncated Coulomb problems, with the wave functions expressed in terms of special functions (namely, Whittaker functions), while the energy spectrum must be determined via solutions to transcendental equations. Most notably, there are critical band gaps below which certain low-lying quantum states are missing in a manifestation of atomic collapse.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Christopher
In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the μ-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].
NASA Astrophysics Data System (ADS)
Padhi, Amit; Mallick, Subhashis
2014-03-01
Inversion of band- and offset-limited single-component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation, but they add to the complexity of the inversion algorithm because they require simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however nonlinear. They have non-unique solutions, known as Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one solution out of the entire Pareto-optimal set, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data, both when the subsurface is considered isotropic and when it is transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure for extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not limited to seismic inversion: it could be used to invert different data types requiring not only multiple objectives but also multiple physics to describe them.
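The Pareto-optimality notion the paper builds on is easy to state in code; a minimal sketch (minimization convention assumed) of the dominance test and extraction of the non-dominated front, the building block of non-dominated sorting genetic algorithms:

```python
import numpy as np

def dominates(fa, fb):
    """fa dominates fb if it is no worse in all objectives, better in one."""
    return np.all(fa <= fb) and np.any(fa < fb)

def pareto_front(F):
    """F: (n_solutions, n_objectives) objective values; returns front indices."""
    F = np.asarray(F, dtype=float)
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
```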
Sprinkler head revisited: momentum, forces, and flows in Machian propulsion
NASA Astrophysics Data System (ADS)
Jenkins, Alejandro
2011-09-01
Many experimenters, starting with Ernst Mach in 1883, have reported that if a device alternately sucks in and then expels a surrounding fluid, it moves in the same direction as if it only expelled fluid. This surprising phenomenon, which we call Machian propulsion, is explained by conservation of momentum: the outflow efficiently transfers momentum away from the device and into the surrounding medium, while the inflow can do so only by viscous diffusion. However, many previous theoretical discussions have focused instead on the difference in the shapes of the outflow and the inflow. Whereas the argument based on conservation is straightforward and complete, the analysis of the shapes of the flows is more subtle and requires conservation in the first place. Our discussion covers three devices that have usually been treated separately: the reverse sprinkler (also called the inverse, or Feynman sprinkler), the putt-putt boat, and the aspirating cantilever. We then briefly mention some applications of Machian propulsion, ranging from microengineering to astrophysics.
Cosmic Rays in the Earth's Atmosphere and Underground
NASA Astrophysics Data System (ADS)
Dorman, Lev I.
2004-08-01
This book consists of four parts. In the first part (Chapters 1-4) a full overview is given of the theoretical and experimental basis of Cosmic Ray (CR) research in the atmosphere and underground for Geophysics and Space Physics: the development of CR research and a short history of many fundamental discoveries, the main properties of primary and secondary CR, methods of transformation of CR observation data in the atmosphere and underground to space, and the experimental basis of CR research underground and on the ground, on balloons, and on satellites and space probes. The second part (Chapters 5-9) is devoted to the influence of atmospheric properties on CR, the so-called CR meteorological effects: pressure, temperature, humidity, snow, wind, gravitation, and atmospheric electric field effects. The inverse problem, the influence of CR properties on the atmosphere and atmospheric processes, is considered in the third part (Chapters 10-14): influence on atmospheric, nuclear and chemical compositions, ionization and radio-wave propagation, the formation of thunderstorms and lightning, clouds and climate change. The fourth part (Chapters 15-18) describes many realized and potential applications of CR research in different branches of Science and Technology: Meteorology and Aerodrome Service, Geology and Geophysical Prospecting, Hydrology and Agricultural Applications, Archaeology and Medicine, Seismology and Big Earthquake Forecasting, Space Weather and Environment Monitoring/Forecasting. The book ends with a list providing more than 1,500 full references, a discussion on future developments and unsolved problems, as well as object and author indices. This book will be useful for experts in different branches of Science and Technology, and for students as additional literature to textbooks.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2005-01-01
In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for the development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected, and nonuniqueness is regarded as a whole, as an unpleasant "black box," and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems, different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to the specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined, with the corresponding methods for the inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problem. © Birkhäuser Verlag, Basel, 2005.
Computational methods for inverse problems in geophysics: inversion of travel time observations
Pereyra, V.; Keller, H.B.; Lee, W.H.K.
1980-01-01
General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel times once the model parameters are given. Novel software is described for (b) and (c), and some ideas are given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.
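Component (b) admits a compact generic sketch: a damped Gauss-Newton loop for the travel-time misfit, with `travel_times` and `jacobian` standing in for the two-point ray-tracing machinery of component (c); both placeholders and the damping value are assumptions, not the paper's software:

```python
import numpy as np

def gauss_newton(m0, t_obs, travel_times, jacobian, n_iter=20, damping=1e-6):
    """Minimize ||t_obs - T(m)||^2 over model parameters m."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        r = t_obs - travel_times(m)               # travel-time residuals
        J = jacobian(m)                           # sensitivities dT/dm
        H = J.T @ J + damping * np.eye(m.size)    # damped normal equations
        m += np.linalg.solve(H, J.T @ r)
    return m
```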
A fixed energy fixed angle inverse scattering in interior transmission problem
NASA Astrophysics Data System (ADS)
Chen, Lung-Hui
2017-06-01
We study the inverse acoustic scattering problem in mathematical physics. The problem is to recover the index of refraction in an inhomogeneous medium by measuring the scattered wave fields in the far field. We transform the problem to the interior transmission problem in the study of the Helmholtz equation. We establish an inverse uniqueness result for the scatterer given knowledge of a fixed interior transmission eigenvalue. By examining the solution as a series of spherical harmonics in the far field, we can uniquely determine the perturbation source for radially symmetric perturbations.
NASA Astrophysics Data System (ADS)
Belkebir, Kamal; Saillard, Marc
2005-12-01
This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions: A Abubakar, P M van den Berg and T M Habashy, Application of the multiplicative regularized contrast source inversion method to TM- and TE-polarized experimental Fresnel data, present results of profile inversions obtained using the contrast source inversion (CSI) method, in which a multiplicative regularization is plugged in. The authors successfully inverted both TM- and TE-polarized fields. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. A Baussard, Inversion of multi-frequency experimental data using an adaptive multiscale approach, reports results of reconstructions using the modified gradient method (MGM) and suggests a coarse-to-fine iterative strategy based on spline pyramids. In this iterative technique, the number of degrees of freedom is reduced, which improves robustness. The introduction, during the iterative process, of finer scales inside areas of interest leads to an accurate representation of the object under test. The efficiency of this technique is shown via comparisons between the results obtained with the standard MGM and those from the adaptive approach. L Crocco, M D'Urso and T Isernia, Testing the contrast source extended Born inversion method against real data: the case of TM data, assume that the main contribution in the domain integral formulation comes from the singularity of Green's function, even though the media involved are lossless.
A Fourier-Bessel analysis of the incident and scattered measured fields is used to derive a model of the incident field and an estimate of the location and size of the target. The iterative procedure relies on a conjugate gradient method associated with Tikhonov regularization, and the multi-frequency data are dealt with using a frequency-hopping approach. In many cases, it is difficult to reconstruct accurately both the real and imaginary parts of the permittivity if no prior information is included. M Donelli, D Franceschini, A Massa, M Pastorino and A Zanetti, Multi-resolution iterative inversion of real inhomogeneous targets, adopt a multi-resolution strategy in which, at each step, adaptive discretization of the integral equation is performed over an irregular mesh, with a coarser grid outside the regions of interest and tighter sampling where better resolution is required. Here, this procedure is achieved while keeping the number of unknowns constant. The way such a strategy could be combined with multi-frequency data, edge-preserving regularization, or any technique also devoted to improving resolution remains to be studied. As done by some other contributors, the model of the incident field is chosen to fit the Fourier-Bessel expansion of the measured one. A Dubois, K Belkebir and M Saillard, Retrieval of inhomogeneous targets from experimental frequency diversity data, present results of the reconstruction of targets using three different non-regularized techniques. It is suggested to minimize a frequency-weighted cost function rather than a standard one. The different approaches are compared and discussed. C Estatico, G Bozza, A Massa, M Pastorino and A Randazzo, A two-step iterative inexact-Newton method for electromagnetic imaging of dielectric structures from real data, use a scheme of two nested iterative methods, based on the second-order Born approximation, which is nonlinear in terms of the contrast but does not involve the total field. At each step of the outer iteration, the problem is linearized and solved iteratively using the Landweber method. Better reconstructions than with the Born approximation are obtained at low numerical cost. O Feron, B Duchêne and A Mohammad-Djafari, Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data, adopt a Bayesian framework based on a hidden Markov model, built to take into account, as prior knowledge, that the target is composed of a finite number of homogeneous regions. It has been applied to diffraction tomography and to a rigorous formulation of the inverse problem. The latter can be viewed as a Bayesian adaptation of the contrast source method such that prior information about the contrast can be introduced in the prior law distribution, and it results in estimating the posterior mean instead of minimizing a cost functional. The accuracy of the result is thus closely linked to the prior knowledge of the contrast, making this approach well suited for non-destructive testing. J-M Geffrin, P Sabouroux and C Eyraud, Free space experimental scattering database continuation: experimental set-up and measurement precision, describe the experimental set-up used to carry out the data for the inversions. They report the modifications of the experimental system used previously in order to improve the precision of the measurements. The reliability of the data is demonstrated through comparisons between measurements and computed scattered fields in both fundamental polarizations.
In addition, the reader interested in using the database will find the relevant information needed to perform inversions as well as the description of the targets under test. A Litman, Reconstruction by level sets of n-ary scattering obstacles, presents the reconstruction of targets using a level-set representation. It is assumed that the constitutive materials of the obstacles under test are known, and the shape is retrieved. Two approaches are reported. In the first, the obstacles of different constitutive materials are represented by a single level set, while in the second several level sets are combined. The approaches are applied to the experimental data and compared. U Shahid, M Testorf and M A Fiddy, Minimum-phase-based inverse scattering algorithm applied to Institut Fresnel data, suggest a way of extending the use of minimum phase functions to 2D problems. In the kind of inverse problems we are concerned with, it consists of separating the contributions from the field and from the contrast in the so-called contrast source term, through homomorphic filtering. Images of the targets are obtained by combination with diffraction tomography. Both pre-processing and imaging are thus based on the use of Fourier transforms, making the algorithm very fast compared to classical iterative approaches. It is also pointed out that the design of appropriate filters remains an open topic. C Yu, L-P Song and Q H Liu, Inversion of multi-frequency experimental data for imaging complex objects by a DTA-CSI method, use the contrast source inversion (CSI) method for the reconstruction of the targets, in which the initial guess is a solution deduced from another iterative technique based on the diagonal tensor approximation (DTA). In so doing, the authors exploit the fast convergence of the DTA method to generate an accurate initial estimate for the CSI method. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. Conclusion: In this special section various inverse scattering techniques were used to successfully reconstruct inhomogeneous targets from multi-frequency multi-static measurements. This shows that the database is reliable and can be useful for researchers wanting to test and validate inversion algorithms. From the database, it is also possible to extract subsets to study particular inverse problems, for instance from phaseless data or from 'aspect-limited' configurations. Our future efforts will be directed towards extending the database in order to explore inversions from transient fields and the full three-dimensional problem. Acknowledgments: The authors would like to thank the Inverse Problems board for opening the journal to us, and offer profound thanks to Elaine Longden-Chapman and Kate Hooper for their help in organizing this special section.
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
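The second experiment rests on the Dix formula, which is compact enough to state directly: for two-way times t_i and RMS velocities v_i, the interval velocity of layer i is v_int = sqrt((t_i v_i^2 - t_{i-1} v_{i-1}^2) / (t_i - t_{i-1})). A minimal sketch of this pointwise inversion, whose noise amplification is precisely what motivates the TV-regularized reformulation:

```python
import numpy as np

def dix_interval_velocity(t, v_rms):
    """t: increasing two-way times; v_rms: RMS velocities at those times."""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    num = np.diff(t * v ** 2)          # t_i v_i^2 - t_{i-1} v_{i-1}^2
    den = np.diff(t)                   # t_i - t_{i-1}
    return np.sqrt(num / den)          # interval velocities, one per layer
```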
Affinity-based biosensors as promising tools for gene doping detection.
Minunni, Maria; Scarano, Simona; Mascini, Marco
2008-05-01
Innovative bioanalytical approaches can be foreseen as interesting means for solving relevant emerging problems in anti-doping control. Sport authorities fear that the newer form of doping, so-called gene doping, based on a misuse of gene therapy, will be undetectable and thus much less preventable. The World Anti-Doping Agency has already asked scientists to assist in finding ways to prevent and detect this newest kind of doping. In this Opinion article we discuss the main aspects of gene doping, from the putative target analytes to suitable sampling strategies. Moreover, we discuss the potential application of affinity sensing in this field, which so far has been successfully applied to a variety of analytical problems, from clinical diagnostics to food and environmental analysis.
Mathematical marriages: intercourse between mathematics and semiotic choice.
Wagner, Roy
2009-04-01
This paper examines the interaction between semiotic choices and the presentation and solution of a family of contemporary mathematical problems centred around the so-called 'stable marriage problem'. I investigate how a socially restrictive choice of signs impacts mathematical production, both in terms of problem formation and of solutions. I further note how the choice of gendered language ends up constructing a reality that duplicates the very structural framework it imported into mathematical analysis in the first place. I go on to point out some semiotic lines of flight from this interlocking grip of mathematics and gendered language.
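For readers unfamiliar with the underlying mathematics, here is a minimal sketch of the Gale-Shapley deferred-acceptance algorithm for the stable marriage problem; it also makes the asymmetry discussed above concrete, since the proposing side always receives its optimal stable matching. The example preferences are invented.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Deferred-acceptance algorithm for the stable marriage problem.
    The proposing side obtains its best achievable stable partner -- the
    structural asymmetry the paper's semiotic analysis turns on."""
    free = list(proposer_prefs)                   # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # index of next proposal
    engaged = {}                                  # acceptor -> proposer
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    while free:
        p = free.pop(0)
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:    # a prefers the new proposer
            free.append(engaged[a])
            engaged[a] = p
        else:
            free.append(p)                        # rejected; tries next choice
    return {p: a for a, p in engaged.items()}

pp = {'x': ['A', 'B'], 'y': ['A', 'B']}
ap = {'A': ['y', 'x'], 'B': ['x', 'y']}
print(gale_shapley(pp, ap))   # proposer-optimal stable matching
```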
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kikinzon, Evgeny; Kuznetsov, Yuri; Lipnikov, Konstantin
2017-07-08
In this study, we describe a new algorithm for solving the multi-material diffusion problem when material interfaces are not aligned with the mesh. In this case, interface reconstruction methods are used to construct an approximate representation of the interfaces between materials. They produce so-called multi-material cells, in which materials are represented by material polygons that each contain only one material. The reconstructed interface is not continuous between cells. We suggest a new method for solving multi-material diffusion problems on such meshes and compare its performance with known homogenization methods.
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on a neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on the calculated values of the continuity modulus of the inverse operator and its modifications, which determine the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail for the specified degree of ambiguity, with the total number of sought medium parameters on the order of n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The method is illustrated by the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
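The core idea, approximating the inverse operator with a network trained on synthetic forward-model runs, can be sketched as follows; the toy forward map, network architecture, and scikit-learn usage are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy forward operator standing in for the geophysical modelling operator:
# block model parameters -> smooth, nonlinear "sounding" data.
rng = np.random.default_rng(1)
def forward(m):
    return np.tanh(np.cumsum(m, axis=-1) / m.shape[-1])

# Training set: random grid models and their simulated responses.
M = rng.uniform(-1, 1, size=(5000, 30))
D = forward(M) + 0.01 * rng.standard_normal((5000, 30))

# Approximate the inverse operator d -> m with a neural network
# (may emit a convergence warning for this small iteration budget).
net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300)
net.fit(D, M)

# "Inversion" of new data is then a single cheap forward pass.
m_test = rng.uniform(-1, 1, size=(1, 30))
m_est = net.predict(forward(m_test))
print(np.mean((m_est - m_test) ** 2))
```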
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite-dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
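A minimal sketch of a trust region iteration driven by an inexact gradient oracle is given below, assuming a simple Cauchy-point step and standard accept/shrink logic; it is not the paper's Hilbert-space algorithm, merely the finite-dimensional skeleton.

```python
import numpy as np

def trust_region_inexact(f, grad_approx, x, radius=1.0, eta=0.1, iters=100):
    # Basic trust-region loop using a (possibly inexact) gradient oracle.
    # The step is the Cauchy point of the linear model within the region.
    for _ in range(iters):
        g = grad_approx(x)
        gn = np.linalg.norm(g)
        if gn < 1e-8:
            break
        step = -radius * g / gn            # Cauchy point for the linear model
        pred = radius * gn                 # predicted reduction
        actual = f(x) - f(x + step)
        rho = actual / pred                # agreement of model and function
        if rho > eta:
            x = x + step                   # accept the step
            if rho > 0.75:
                radius *= 2.0              # model trusted: expand the region
        else:
            radius *= 0.25                 # reject and shrink the region
    return x

# quadratic test problem with a noisy (inexact) gradient oracle
rng = np.random.default_rng(0)
f = lambda x: 0.5 * x @ x
g = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(np.linalg.norm(trust_region_inexact(f, g, np.ones(5))))
```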
Optimal Control of Thermo-Fluid Phenomena in Variable Domains
NASA Astrophysics Data System (ADS)
Volkov, Oleg; Protas, Bartosz
2008-11-01
This presentation concerns our continued research on adjoint-based optimization of viscous incompressible flows (the Navier-Stokes problem) coupled with heat conduction involving change of phase (the Stefan problem), and occurring in domains with variable boundaries. This problem is motivated by optimization of advanced welding techniques used in automotive manufacturing, where the goal is to determine an optimal heat input, so as to obtain a desired shape of the weld pool surface upon solidification. We argue that computation of sensitivities (gradients) in such free-boundary problems requires the use of the shape-differential calculus as a key ingredient. We also show that, with such tools available, the computational solution of the direct and inverse (optimization) problems can in fact be achieved in a similar manner and in a comparable computational time. Our presentation will address certain mathematical and computational aspects of the method. As an illustration we will consider the two-phase Stefan problem with contact point singularities, where our approach allows us to obtain a thermodynamically consistent solution.
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
NASA Astrophysics Data System (ADS)
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.
2018-05-01
Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows the incorporation of information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
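One plausible construction of such an operator, eigendecomposing a covariance matrix built from an exponential correlation model over cell centroids so that ||Wm||² = mᵀC⁻¹m, is sketched below; the correlation model and truncation rule are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

# Cell centroids of an (irregular) mesh; here simply scattered 2-D points.
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 100.0, size=(400, 2))

# Exponential correlation model encoding the a priori geology; anisotropy
# could be added by rescaling coordinates along preferred directions.
corr_len = 20.0
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-dists / corr_len)

# Eigendecomposition of the covariance gives a regularization operator:
# with W = Lambda^{-1/2} V^T we get W^T W = C^{-1}, so the penalty
# ||W m||^2 = m^T C^{-1} m measures roughness w.r.t. the assumed model.
lam, V = np.linalg.eigh(C + 1e-8 * np.eye(len(pts)))  # jitter for stability
keep = lam > 1e-6 * lam.max()     # truncation keeps large 3-D problems tractable
W = (V[:, keep] / np.sqrt(lam[keep])).T
print(W.shape)                    # rows of W act as regularization equations
```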
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each of the submitted papers has been reviewed by two reviewers. There have been nine accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet
Regional regularization method for ECT based on spectral transformation of Laplacian
NASA Astrophysics Data System (ADS)
Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.
2016-10-01
Image reconstruction in electrical capacitance tomography (ECT) is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise when solving it. An anisotropic regional regularization algorithm for ECT is constructed using a novel approach called spectral transformation. A spectral-transformation function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian to form a regularization term. With the optimal regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with experimental data.
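Since the authors' spectral transformation is specific to their ECT formulation, the sketch below only illustrates the general idea of regionally weighted regularization: a Tikhonov penalty whose local weight is derived from per-pixel sensitivity. All operators and weights here are invented stand-ins, not the paper's regularizer.

```python
import numpy as np

def regional_tikhonov(J, d, alpha=1e-2):
    # Zeroth-order Tikhonov with spatially varying weights derived from
    # the sensitivity of each pixel: weakly sensed pixels get a stronger
    # penalty. This mimics 'regional' regularization in spirit only.
    s = np.linalg.norm(J, axis=0)                # per-pixel sensitivity
    w = np.sqrt(s.max() / (s + 1e-12))           # heuristic regional weights
    H = J.T @ J + alpha * np.diag(w ** 2)
    return np.linalg.solve(H, J.T @ d)

# toy ECT-like problem: few measurements, many pixels
rng = np.random.default_rng(3)
J = rng.standard_normal((66, 256)) * np.linspace(1.0, 0.1, 256)[None, :]
x_true = np.zeros(256); x_true[40:80] = 1.0
d = J @ x_true + 0.01 * rng.standard_normal(66)
x_rec = regional_tikhonov(J, d)
```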
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
NASA Astrophysics Data System (ADS)
Eichinger, Benjamin
2016-07-01
We recall criteria on the spectrum of Jacobi matrices such that the corresponding isospectral torus consists of periodic operators. Motivated by those known results for Jacobi matrices, we define a new class of operators called GMP matrices. They form a certain Generalization of matrices related to the strong Moment Problem. This class allows us to give a parametrization of almost periodic finite gap Jacobi matrices by periodic GMP matrices. Moreover, due to their structural similarity we can carry over numerous results from the direct and inverse spectral theory of periodic Jacobi matrices to the class of periodic GMP matrices. In particular, we prove an analogue of the remarkable ''magic formula'' for this new class.
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2012-01-01
The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error plots are provided for both the simulations and actual data analyses.
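The key numerical ingredients, a mixed-exponential likelihood and a differential-evolution search whose constraints remove the label-switching ambiguity, can be sketched as follows; the parameter values and bounds are invented, and scipy's implementation stands in for the study's DE setup.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import expon

# Synthetic MGA-like data from a two-component mixed exponential.
rng = np.random.default_rng(4)
f_true, mu_g, mu_c = 0.3, 5.0, 20.0   # ground-flash fraction, component means
n = 2000
x = np.where(rng.random(n) < f_true,
             rng.exponential(mu_g, n), rng.exponential(mu_c, n))

def nll(theta):
    f, mu1, mu2 = theta
    if mu1 >= mu2:                     # ordering constraint: removes the
        return 1e12                    # label-switching ambiguity
    pdf = f * expon.pdf(x, scale=mu1) + (1 - f) * expon.pdf(x, scale=mu2)
    return -np.sum(np.log(pdf + 1e-300))

# Differential evolution searches globally inside the constrained box.
res = differential_evolution(nll, bounds=[(0, 1), (0.1, 15), (0.1, 60)],
                             seed=0, tol=1e-8)
print(res.x)   # estimated (ground-flash fraction, mu_ground, mu_cloud)
```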
Growing the Seeds of Strength in High Risk Urban Neighborhoods.
ERIC Educational Resources Information Center
Saegert, Susan
The lives of poor minority city residents demonstrate diversity, multiple potentials, and vulnerability to external structures. In spite of the stereotypes of failure and the very real problems of the urban poor, there are many strengths among the so-called urban underclass and there are aspects of life that are successful and productive. In…
The Role of Computer-Assisted Language Learning (CALL) in Promoting Learner Autonomy
ERIC Educational Resources Information Center
Mutlu, Arzu; Eroz-Tuga, Betil
2013-01-01
Problem Statement: Teaching a language with the help of computers and the Internet has attracted the attention of many practitioners and researchers in the last 20 years, so the number of studies that investigate whether computers and the Internet promote language learning continues to increase. These studies have focused on exploring the beliefs…
Magnetic Interactions and the Method of Images: A Wealth of Educational Suggestions
ERIC Educational Resources Information Center
Bonanno, A.; Camarca, M.; Sapia, P.
2011-01-01
Under some conditions, the method of images (well known in electrostatics) may be implemented in magnetostatic problems too, giving an excellent example of the usefulness of formal analogies in the description of physical systems. In this paper, we develop a quantitative model for the magnetic interactions underlying the so-called Geomag[TM]…
Enabling Problem Based Learning through Web 2.0 Technologies: PBL 2.0
ERIC Educational Resources Information Center
Tambouris, Efthimios; Panopoulou, Eleni; Tarabanis, Konstantinos; Ryberg, Thomas; Buus, Lillian; Peristeras, Vassilios; Lee, Deirdre; Porwol, Lukasz
2012-01-01
Advances in Information and Communications Technology (ICT), particularly the so-called Web 2.0, are affecting all aspects of our life: How we communicate, how we shop, how we socialise, how we learn. Facilitating learning through the use of ICT, also known as eLearning, is a vital part of modern educational systems. Established pedagogical…
Complex Langevin method: When can it be trusted?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aarts, Gert; Seiler, Erhard; Stamatescu, Ion-Olimpiu
2010-03-01
We analyze to what extent the complex Langevin method, which is in principle capable of solving the so-called sign problems, can be considered as reliable. We give a formal derivation of the correctness and then point out various mathematical loopholes. The detailed study of some simple examples leads to practical suggestions about the application of the method.
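A classic solvable test of the method is the Gaussian model with a complex "action"; the short simulation below, with invented parameter values, checks the complexified Langevin average against the exact ⟨x²⟩ = 1/σ.

```python
import numpy as np

# Toy check of the complex Langevin method on a solvable Gaussian model:
# weight exp(-S) with S(x) = 0.5*sigma*x**2 and complex sigma, for which
# the exact expectation is <x^2> = 1/sigma.
rng = np.random.default_rng(5)
sigma = 1.0 + 0.5j
dt, n_steps, n_therm = 1e-3, 200_000, 10_000

x = 0.0 + 0.0j
acc = 0.0 + 0.0j
count = 0
for step in range(n_steps):
    # complexified Langevin update: drift -dS/dx = -sigma*x, real noise
    x = x - sigma * x * dt + np.sqrt(2 * dt) * rng.standard_normal()
    if step >= n_therm:
        acc += x * x
        count += 1

print("CL estimate:", acc / count, " exact:", 1 / sigma)
```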
ERIC Educational Resources Information Center
Smith, William A.; Hung, Man; Franklin, Jeremy D.
2011-01-01
Black men's lives are racialized contradictions. They are told that contemporary educational and professional institutions--particularly historically White institutions (HWIs)--are places where, through hard work, they can achieve the so-called American dream. However, for far too many Black men, HWIs represent racial climates that are replete…
Community Matters: Fulfilling Learning Potentials for Young Men and Women. UIL Policy Brief 4
ERIC Educational Resources Information Center
UNESCO Institute for Lifelong Learning, 2014
2014-01-01
In some countries, the scale of the youth illiteracy problem calls for critical and targeted responses. It is important to help young people in developing countries gain basic literacy skills so that they can contribute to the development of productive, peaceful, and democratic societies. While national commitments and support are important, the…
An Evidence Centered Design for Learning and Assessment in the Digital World. CRESST Report 778
ERIC Educational Resources Information Center
Behrens, John T.; Mislevy, Robert J.; DiCerbo, Kristen E.; Levy, Roy
2010-01-01
The world in which learning and assessment must take place is rapidly changing. The digital revolution has created a vast space of interconnected information, communication, and interaction. Functioning effectively in this environment requires so-called 21st century skills such as technological fluency, complex problem solving, and the ability to…
Not so Simple: The Problem with "Evidence-Based Practice" and the EEF Toolkit
ERIC Educational Resources Information Center
Wrigley, Terry
2016-01-01
There are increasing calls for policy and practice to be "evidence informed." At surface value, there may appear much to commend such an approach. However, it is important to understand that "evidence" and "knowledge" are being mobilised in very particular ways. The danger is that rather than promote a rich and lively…
Molecular symmetry with quaternions.
Fritzer, H P
2001-09-01
A new and relatively simple version of the quaternion calculus is offered which is especially suitable for applications in molecular symmetry and structure. After introducing the real quaternion algebra and its classical matrix representation in the group SO(4) the relations with vectors in 3-space and the connection with the rotation group SO(3) through automorphism properties of the algebra are discussed. The correlation of the unit quaternions with both the Cayley-Klein and the Euler parameters through the group SU(2) is presented. Besides rotations the extension of quaternions to other important symmetry operations, reflections and the spatial inversion, is given. Finally, the power of the quaternion calculus for molecular symmetry problems is revealed by treating some examples applied to icosahedral symmetry.
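As a concrete illustration of quaternion rotations (not taken from the paper), the following sketch implements the Hamilton product and the v → q v q̄ automorphism realizing an SO(3) rotation, here a C3 symmetry operation about the z-axis.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def rotate(v, axis, angle):
    # The unit quaternion q = (cos(a/2), sin(a/2)*n) realizes the SO(3)
    # rotation through v -> q v q*, reflecting the SU(2) -> SO(3) double cover.
    n = np.asarray(axis, float)
    n /= np.linalg.norm(n)
    q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * n))
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])        # conjugate quaternion
    return qmul(qmul(q, np.concatenate(([0.0], v))), qc)[1:]

# C3 rotation about the z-axis, as in a molecular symmetry operation
print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], 2 * np.pi / 3))
```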
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
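The core step, solving the damped least-squares (Levenberg-Marquardt) subproblem in a Krylov subspace without forming JᵀJ, can be sketched with scipy's LSQR as below; the subspace recycling across damping parameters described in the abstract is not reproduced here.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# One damped Gauss-Newton (LM) step computed in a Krylov subspace: lsqr
# solves min ||J s + r||^2 + lam ||s||^2 iteratively (damp = sqrt(lam)),
# which is equivalent to the LM normal equations (J^T J + lam I) s = -J^T r
# without ever forming or inverting J^T J.
def lm_step(J, r, lam):
    return lsqr(J, -r, damp=np.sqrt(lam))[0]

rng = np.random.default_rng(6)
J = rng.standard_normal((5000, 800))   # Jacobian of a residual function
r = rng.standard_normal(5000)          # current residual vector
s = lm_step(J, r, lam=1e-2)            # LM model update
```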
The ternary system K2SO4-MgSO4-CaSO4
Rowe, J.J.; Morey, G.W.; Silber, C.C.
1967-01-01
Melting and subsolidus relations in the system K2SO4-MgSO4-CaSO4 were studied using heating-cooling curves, differential thermal analysis, optics, X-ray diffraction at room and high temperatures, and quenching techniques. Previous investigators were unable to study the binary MgSO4-CaSO4 system and the adjacent area in the ternary system because of the decomposition of MgSO4 and CaSO4 at high temperatures. This problem was partly overcome by a novel sealed-tube quenching method, by hydrothermal synthesis, and by long-time heating in the solidus. As a result of this study, we found: (1) a new compound, CaSO4·3MgSO4 (m.p. 1201 °C), with a field extending into the ternary system; (2) a high-temperature form of MgSO4 with a sluggishly reversible inversion; an X-ray diffraction pattern for this polymorphic form is given; (3) the inversion of β-CaSO4 (anhydrite) to α-CaSO4 at 1195 °C, in agreement with Grahmann; (4) the melting point of MgSO4 is 1136 °C and that of CaSO4 is 1462 °C (using sealed-tube methods to prevent decomposition of the sulphates); (5) calcium langbeinite (K2SO4·2CaSO4) is the only compound in the K2SO4-CaSO4 binary system, which resolves discrepancies in the results of previous investigators; (6) a continuous solid-solution series between congruently melting K2SO4·2MgSO4 (langbeinite) and incongruently melting K2SO4·2CaSO4 (calcium langbeinite); (7) the liquidus in the ternary system consists of primary phase fields of K2SO4, MgSO4, CaSO4, langbeinite-calcium langbeinite solid solution, and CaSO4·3MgSO4; the CaSO4 field extends over a large portion of the system; previously reported fields for the compounds K2SO4·MgSO4·nCaSO4, K2SO4·3CaSO4 and K2SO4·CaSO4 were not found; (8) a minimum in the ternary system at 740 °C (25% MgSO4, 6% CaSO4, 69% K2SO4) and ternary eutectics at 882 °C (49% MgSO4, 19% CaSO4, 32% K2SO4) and 880 °C (67.5% MgSO4, 5% CaSO4, 27.5% K2SO4).
Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem
NASA Astrophysics Data System (ADS)
Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa
A branch-and-bound algorithm (BB for short) is the most general technique to deal with various combinatorial optimization problems. Even so, computation time is likely to increase exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB heavily depends upon node-variable selection strategies. In the case of a parallel BB, it is also necessary to prevent an increase in communication time, so it is important to pay attention to how many and what kind of nodes are to be transferred (called the sending-node selection strategy). In this paper, for the graph coloring problem, we propose some sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
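For orientation, a minimal sequential BB for graph coloring is sketched below (the parallel MPI layer and sending-node strategies of the paper are omitted); the pruning rule is the standard best-known-bound test.

```python
def chromatic_bb(adj):
    """Sequential branch-and-bound for graph coloring: assign colors to
    vertices one by one, pruning when the number of colors used in the
    partial assignment already reaches the best known bound."""
    n = len(adj)
    best = {'k': n, 'coloring': list(range(n))}

    def branch(v, colors, used):
        if used >= best['k']:              # bound: cannot improve
            return
        if v == n:
            best['k'], best['coloring'] = used, colors[:]
            return
        for c in range(used):              # branch: reuse an existing color
            if all(colors[u] != c for u in adj[v] if colors[u] >= 0):
                colors[v] = c
                branch(v + 1, colors, used)
                colors[v] = -1
        colors[v] = used                   # branch: open a new color class
        branch(v + 1, colors, used + 1)
        colors[v] = -1

    branch(0, [-1] * n, 0)
    return best['k'], best['coloring']

# 5-cycle: chromatic number 3
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(chromatic_bb(adj))
```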
NASA Astrophysics Data System (ADS)
Shirzaei, Manoochehr; Walter, Thomas
2010-05-01
Volcanic unrest and eruptions are among the major natural hazards, next to earthquakes, floods, and storms. It has been shown that many volcanic and tectonic unrest episodes are triggered by changes in the stress field induced by nearby seismic and magmatic activity. In this study, as part of a mobile volcano fast-response system called "Exupery" (www.exupery-vfrs.de), we present an arrangement for semi-real-time assessment of the stress field excited by volcanic activity. This system includes: (1) an approach called "WabInSAR" dedicated to advanced processing of satellite data and providing an accurate time series of the surface deformation [1, 2]; (2) a time-dependent inverse source modeling method to investigate the source of volcanic unrest using observed surface deformation data [3, 4]; (3) the assessment of the changes in the stress field induced by magmatic activity at nearby volcanic and tectonic systems. This system is implemented in a recursive manner that allows handling large 3D data sets in an efficient and robust way, which is a requirement of an early-warning system. We have applied and validated this arrangement on Mauna Loa volcano, Hawaii Island, to assess the influence of the time-dependent activities of Mauna Loa on earthquake occurrence in the Kaoiki seismic zone. References: [1] M. Shirzaei and T. R. Walter, "Wavelet based InSAR (WabInSAR): a new advanced time series approach for accurate spatiotemporal surface deformation monitoring," IEEE, submitted, 2010. [2] M. Shirzaei and T. R. Walter, "Deformation interplay at Hawaii Island through InSAR time series and modeling," J. Geophys. Res., submitted, 2009. [3] M. Shirzaei and T. R. Walter, "Randomly Iterated Search and Statistical Competency (RISC) as powerful inversion tools for deformation source modeling: application to volcano InSAR data," J. Geophys. Res., vol. 114, B10401, doi:10.1029/2008JB006071, 2009. [4] M. Shirzaei and T. R. Walter, "Genetic algorithm combined with Kalman filter as powerful tool for nonlinear time dependent inverse modelling: Application to volcanic deformation time series," J. Geophys. Res., submitted, 2010.
The inviscid axisymmetric stability of the supersonic flow along a circular cylinder
NASA Technical Reports Server (NTRS)
Duck, Peter W.
1990-01-01
The supersonic flow past a thin straight circular cylinder is investigated. The associated boundary-layer flow (i.e. the velocity and temperature field) is computed; the asymptotic, far downstream solution is obtained, and compared with the full numerical results. The inviscid, linear, axisymmetric (temporal) stability of this boundary layer is also studied. A so-called 'doubly generalized' inflexion condition is derived, which is a condition for the existence of so-called 'subsonic' neutral modes. The eigenvalue problem (for the complex wavespeed) is computed for two free-stream Mach numbers (2.8 and 3.8), and this reveals that curvature has a profound effect on the stability of the flow. The first unstable inviscid mode is seen to disappear rapidly as curvature is introduced, while the second (and generally the most important) mode suffers a substantially reduced amplification rate.
The inviscid axisymmetric stability of the supersonic flow along a circular cylinder
NASA Technical Reports Server (NTRS)
Duck, Peter W.
1989-01-01
The supersonic flow past a thin straight circular cylinder is investigated. The associated boundary-layer flow (i.e., the velocity and temperature field) is computed; the asymptotic, far downstream solution is obtained and compared with the full numerical results. The inviscid, linear, axisymmetric (temporal) stability of this boundary layer is also studied. A so-called 'doubly generalized' inflexion condition is derived, which is a condition for the existence of so-called 'subsonic' neutral modes. The eigenvalue problem (for the complex wavespeed) is computed for two freestream Mach numbers (2.8 and 3.8), and this reveals that curvature has a profound effect on the stability of the flow. The first unstable inviscid mode is seen to disappear rapidly as curvature is introduced, while the second (and generally the most important) mode suffers a substantially reduced amplification rate.
Oxygen-enabled control of Dzyaloshinskii-Moriya Interaction in ultra-thin magnetic films.
Belabbes, Abderrezak; Bihlmayer, Gustav; Blügel, Stefan; Manchon, Aurélien
2016-04-22
The search for chiral magnetic textures in systems lacking spatial inversion symmetry has attracted a massive amount of interest in recent years with the real-space observation of novel exotic magnetic phases such as skyrmion lattices, but also domain walls and spin spirals with a defined chirality. The electrical control of these textures offers thrilling perspectives in terms of fast and robust ultrahigh-density data manipulation. A powerful ingredient commonly used to stabilize chiral magnetic states is the so-called Dzyaloshinskii-Moriya interaction (DMI) arising from spin-orbit coupling in inversion-asymmetric magnets. Such a large antisymmetric exchange has been obtained at interfaces between heavy metals and transition-metal ferromagnets, resulting in spin spirals and nanoskyrmion lattices. Here, using relativistic first-principles calculations, we demonstrate that the magnitude and sign of the DMI can be entirely controlled by tuning the oxygen coverage of the magnetic film, therefore enabling the smart design of chiral magnetism in ultra-thin films. We anticipate that these results extend to other electronegative ions and suggest the possibility of electrical tuning of exotic magnetic phases.
Oxygen-enabled control of Dzyaloshinskii-Moriya Interaction in ultra-thin magnetic films
Belabbes, Abderrezak; Bihlmayer, Gustav; Blügel, Stefan; Manchon, Aurélien
2016-01-01
The search for chiral magnetic textures in systems lacking spatial inversion symmetry has attracted a massive amount of interest in recent years with the real-space observation of novel exotic magnetic phases such as skyrmion lattices, but also domain walls and spin spirals with a defined chirality. The electrical control of these textures offers thrilling perspectives in terms of fast and robust ultrahigh-density data manipulation. A powerful ingredient commonly used to stabilize chiral magnetic states is the so-called Dzyaloshinskii-Moriya interaction (DMI) arising from spin-orbit coupling in inversion-asymmetric magnets. Such a large antisymmetric exchange has been obtained at interfaces between heavy metals and transition-metal ferromagnets, resulting in spin spirals and nanoskyrmion lattices. Here, using relativistic first-principles calculations, we demonstrate that the magnitude and sign of the DMI can be entirely controlled by tuning the oxygen coverage of the magnetic film, therefore enabling the smart design of chiral magnetism in ultra-thin films. We anticipate that these results extend to other electronegative ions and suggest the possibility of electrical tuning of exotic magnetic phases. PMID:27103448
Havelock Ellis's literary criticism, canon formation, and the heterosexual Shakespeare.
Radel, Nicholas F
2009-01-01
Famous as the author of an early full-length scientific study of sexual inversion or homosexuality, English sexologist Havelock Ellis was also a literary critic responsible for initiating publication of the famous Mermaid Series of "The Best Plays of the Old Dramatists" in the late nineteenth century. Personally editing the first volume of plays by Christopher Marlowe and a later collection by the tragedian John Ford, Ellis associated these playwrights, here and in his scientific work Sexual Inversion, with ideas about normative and so-called abnormal sexualities at the start of the twentieth century. Ellis thus helped give expression to a literary canon of early English dramatists in which modern, anachronistic ideas about sexual subjectivity play a part. While this article does not claim that Ellis was the necessary source for later criticism, it shows how, over the whole of the twentieth century, Shakespeare's priority in the literary canon came to be posited at least in part on his apparent sexual normality, in contrast with a supposedly homosexual Christopher Marlowe and other playwrights, such as Ford or Francis Beaumont and John Fletcher, associated with varying degrees of sexual difference.
Marichy, Catherine; Muller, Nicolas; Froufe-Pérez, Luis S; Scheffold, Frank
2016-02-25
Photonic crystal (PC) materials are based on a periodic modulation of the dielectric constant on length scales comparable to the wavelength of light. These materials can exhibit photonic band gaps: frequency regions for which the propagation of electromagnetic radiation is forbidden due to the depletion of the density of states. In order to exhibit a full band gap, 3D PCs must exceed a threshold refractive index contrast that depends on the crystal structure. In the case of the so-called woodpile photonic crystals this threshold is comparably low, approximately 1.9 for the direct structure. Therefore direct or inverted woodpiles made of high-refractive-index materials like silicon, germanium or titanium dioxide are sought after. Here we show that, by combining multiphoton lithography and atomic layer deposition, we can achieve a direct inversion of polymer templates into TiO2-based photonic crystals. The obtained structures show remarkable optical properties in the near-infrared region with almost perfect specular reflectance, a transmission dip close to the detection limit and a Bragg length comparable to the lattice constant.
Parallel halftoning technique using dot diffusion optimization
NASA Astrophysics Data System (ADS)
Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara
2017-05-01
In this paper, a novel approach to halftoning is proposed and implemented for images obtained by the dot diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm: new versions of the class matrix are generated that contain no baron and near-baron entries, in order to minimize inconsistencies during the distribution of the error. The proposed class matrices have different properties, and each is designed for one of two different applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a Linux PC. Experimental results have shown that the novel framework generates halftone images and inverse-halftone images of good quality. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented in real-time processing.
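A minimal dot-diffusion sketch follows, using a small invented class matrix rather than the optimized baron-free matrices proposed in the paper; it shows how error is diffused only to neighbours with higher class numbers, so that entries with no such neighbours (barons) silently discard their error.

```python
import numpy as np

# Hypothetical 4x4 class matrix (values 0..15); NOT the paper's optimized one.
CLASS = np.array([[ 0,  8,  2, 10],
                  [12,  4, 14,  6],
                  [ 3, 11,  1,  9],
                  [15,  7, 13,  5]])

def dot_diffusion(img):
    h, w = img.shape
    out = np.zeros_like(img)
    err = img.astype(float).copy()
    order = np.tile(CLASS, (h // 4 + 1, w // 4 + 1))[:h, :w]
    for k in range(16):                        # process pixels in class order
        for y, x in zip(*np.where(order == k)):
            out[y, x] = 255 if err[y, x] >= 128 else 0
            e = err[y, x] - out[y, x]
            # diffuse error to unprocessed neighbours (higher class number);
            # orthogonal neighbours weighted 2, diagonal neighbours 1
            nbrs = [(y + dy, x + dx, 2.0 if dx * dy == 0 else 1.0)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)
                    and 0 <= y + dy < h and 0 <= x + dx < w
                    and order[y + dy, x + dx] > k]
            tot = sum(wgt for _, _, wgt in nbrs)
            for ny, nx, wgt in nbrs:           # empty for a baron: error lost
                err[ny, nx] += e * wgt / tot
    return out

halftone = dot_diffusion(np.full((64, 64), 100, dtype=np.uint8))
```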
Size-controlled soft-template synthesis of carbon nanodots toward versatile photoactive materials.
Kwon, Woosung; Lee, Gyeongjin; Do, Sungan; Joo, Taiha; Rhee, Shi-Woo
2014-02-12
Size-controlled soft-template synthesis of carbon nanodots (CNDs) as novel photoactive materials is reported. The size of the CNDs can be controlled by regulating the amount of an emulsifier. As the size increases, the CNDs exhibit blue-shifted photoluminescence (PL), a so-called inverse PL shift. Using time-correlated single-photon counting, ultraviolet photoelectron spectroscopy, and low-temperature PL measurements, it is revealed that the CNDs are composed of sp² clusters with certain energy gaps and that their oleylamine ligands act as auxochromes to reduce the energy gaps. This insight provides a plausible explanation for the origin of the inverse PL shift, which has been debated over the past decade. To explore the potential of the CNDs as photoactive materials, several prototypes of CND-based optoelectronic devices, including multicolored light-emitting diodes and air-stable organic solar cells, are demonstrated. This study could shed light on future applications of the CNDs and further expedite the development of other related fields.
Why does the sign problem occur in evaluating the overlap of HFB wave functions?
NASA Astrophysics Data System (ADS)
Mizusaki, Takahiro; Oi, Makito; Shimizu, Noritaka
2018-04-01
For the overlap matrix element between Hartree-Fock-Bogoliubov states, there are two analytically different formulae: one with the square root of a determinant (the Onishi formula) and the other with the Pfaffian (Robledo's Pfaffian formula). The former formula is two-valued as a complex function, hence it leaves the sign of the norm overlap undetermined (the so-called sign problem of the Onishi formula). On the other hand, the latter formula does not suffer from the sign problem. The derivations of these two formulae are so different that it is obscured why the resultant formulae possess different analytical properties. In this paper, we discuss the reason why the difference occurs by means of a consistent framework based on the linked-cluster theorem and the product-sum identity for the Pfaffian. Through this discussion, we elucidate the source of the sign problem in the Onishi formula. We also point out that different summation methods of series expansions may result in analytically different formulae.
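For reference, the Onishi formula in its commonly quoted form (our paraphrase under standard HFB conventions, not a formula taken from the paper) reads:

```latex
% The Onishi formula determines the overlap only through its square,
% leaving a sign ambiguity (the "sign problem" discussed above):
\langle \Phi_0 \mid \Phi_1 \rangle^{2}
  = \det\!\left( U_0^{\dagger} U_1 + V_0^{\dagger} V_1 \right)
\quad\Longrightarrow\quad
\langle \Phi_0 \mid \Phi_1 \rangle
  = \pm \sqrt{\det\!\left( U_0^{\dagger} U_1 + V_0^{\dagger} V_1 \right)} .
```

Robledo's formula instead expresses the overlap itself as the Pfaffian of a skew-symmetric matrix assembled from the Bogoliubov amplitudes; being polynomial rather than a square root, it is single-valued and hence fixes the sign.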
CABINS: Case-based interactive scheduler
NASA Technical Reports Server (NTRS)
Miyashita, Kazuo; Sycara, Katia
1992-01-01
In this paper we discuss the need for interactive factory schedule repair and improvement, and we identify case-based reasoning (CBR) as an appropriate methodology. Case-based reasoning is the problem solving paradigm that relies on a memory of past problem solving experiences (cases) to guide current problem solving. Cases similar to the current case are retrieved from the case memory, and similarities and differences of the current case to past cases are identified. Then a best case is selected, and its repair plan is adapted to fit the current problem description. If a repair solution fails, an explanation for the failure is stored along with the case in memory, so that the user can avoid repeating similar failures in the future. So far we have identified a number of repair strategies and tactics for factory scheduling and have implemented part of our approach in a prototype system called CABINS. As future work, we plan to scale up CABINS to evaluate its usefulness in a real manufacturing environment.
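A toy sketch of the retrieve step and the failure-memory idea follows; the case schema, features, and repair names are invented for illustration and are not CABINS' actual representation.

```python
import numpy as np

# Minimal case-based retrieval sketch: cases pair a schedule-problem feature
# vector with a repair plan; the nearest stored case supplies the plan to be
# adapted, following the retrieve-adapt-store loop described above.
case_memory = [
    {"features": np.array([0.9, 0.1, 0.3]), "repair": "swap_orders"},
    {"features": np.array([0.2, 0.8, 0.5]), "repair": "shift_right"},
    {"features": np.array([0.4, 0.4, 0.9]), "repair": "reassign_machine"},
]

def retrieve(query):
    # similarity = negative Euclidean distance over problem features
    return min(case_memory,
               key=lambda c: np.linalg.norm(c["features"] - query))

def record_failure(case, explanation):
    # failed repairs are stored with an explanation to avoid repetition
    case.setdefault("failures", []).append(explanation)

best = retrieve(np.array([0.85, 0.2, 0.25]))
print(best["repair"])   # -> swap_orders, to be adapted to the current case
```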
Once more with feeling: Normative data for the aha experience in insight and noninsight problems.
Webb, Margaret E; Little, Daniel R; Cropper, Simon J
2017-10-19
Despite the presumed ability of insight problems to elicit the subjective feeling of insight, as well as the use of so-called insight problems to investigate this phenomenon for over 100 years, no research has collected normative data regarding the ability of insight problems to actually elicit the feeling of insight in a given individual. The work described in this article provides an overview of both classic and contemporary problems used to examine the construct of insight and presents normative data on the success rate, mean time to solution, and mean rating of aha experience for each problem and task type. We suggest using these data in future work as a reference for selecting problems on the basis of their ability to elicit an aha experience.
Recombination rate predicts inversion size in Diptera.
Cáceres, M; Barbadilla, A; Ruiz, A
1999-01-01
Most species of the Drosophila genus and other Diptera are polymorphic for paracentric inversions. A common observation is that successful inversions are of intermediate size. We test here the hypothesis that the selected property is the recombination length of inversions, not their physical length. If so, physical length of successful inversions should be negatively correlated with recombination rate across species. This prediction was tested by a comprehensive statistical analysis of inversion size and recombination map length in 12 Diptera species for which appropriate data are available. We found that (1) there is a wide variation in recombination map length among species; (2) physical length of successful inversions varies greatly among species and is inversely correlated with the species recombination map length; and (3) neither the among-species variation in inversion length nor the correlation are observed in unsuccessful inversions. The clear differences between successful and unsuccessful inversions point to natural selection as the most likely explanation for our results. Presumably the selective advantage of an inversion increases with its length, but so does its detrimental effect on fertility due to double crossovers. Our analysis provides the strongest and most extensive evidence in favor of the notion that the adaptive value of inversions stems from their effect on recombination. PMID:10471710
Reverse engineering and identification in systems biology: strategies, perspectives and challenges
Villaverde, Alejandro F.; Banga, Julio R.
2014-01-01
The interplay of mathematical modelling with experiments is one of the central elements in systems biology. The aim of reverse engineering is to infer, analyse and understand, through this interplay, the functional and regulatory mechanisms of biological systems. Reverse engineering is not exclusive of systems biology and has been studied in different areas, such as inverse problem theory, machine learning, nonlinear physics, (bio)chemical kinetics, control theory and optimization, among others. However, it seems that many of these areas have been relatively closed to outsiders. In this contribution, we aim to compare and highlight the different perspectives and contributions from these fields, with emphasis on two key questions: (i) why are reverse engineering problems so hard to solve, and (ii) what methods are available for the particular problems arising from systems biology? PMID:24307566
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis aims to combine the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (the sum of the data-fidelity term and the non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably.

The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles, respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.
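A small numerical illustration of the first part's idea, a parameterized non-convex penalty whose sum with the data-fidelity term stays convex, is given below using the minimax-concave (MC) penalty as an example; the thesis's specific regularizers may differ.

```python
import numpy as np

# Scalar denoising problem F(x) = 0.5*(y - x)^2 + phi(x; lam, gamma), where
# phi is the minimax-concave penalty. Since phi'' >= -1/gamma everywhere,
# F is strictly convex whenever gamma > 1, even though phi itself is not.
def mc_penalty(x, lam, gamma):
    ax = np.abs(x)
    return np.where(ax <= gamma * lam,
                    lam * ax - x**2 / (2 * gamma),
                    gamma * lam**2 / 2)

y, lam = 1.5, 1.0
grid = np.linspace(-4, 4, 100001)
for gamma in (2.0, 0.5):          # gamma > 1: convex F; gamma < 1: may not be
    F = 0.5 * (y - grid)**2 + mc_penalty(grid, lam, gamma)
    print(gamma, grid[np.argmin(F)])   # unique global minimizer when convex
```

For gamma = 2 the minimizer (about 1.0) shrinks the noisy value less severely than soft thresholding would, illustrating the reduced under-estimation of non-zero values.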
NASA Astrophysics Data System (ADS)
Rundell, William; Somersalo, Erkki
2008-07-01
The Inverse Problems International Association (IPIA) awarded the first Calderón Prize to Matti Lassas for his outstanding contributions to the field of inverse problems, especially in geometric inverse problems. The Calderón Prize is given to a researcher under the age of 40 who has made distinguished contributions to the field of inverse problems broadly defined. The first Calderón Prize Committee consisted of Professors Adrian Nachman, Lassi Päivärinta, William Rundell (chair), and Michael Vogelius. William Rundell, for the Calderón Prize Committee. [Photos of the prize ceremony, showing Matti Lassas and William Rundell; photos by P Stefanov.] Brief biography of Matti Lassas: Matti Lassas was born in 1969 in Helsinki, Finland, and studied at the University of Helsinki. He finished his Master's studies in 1992 in three years and earned his PhD in 1996. His PhD thesis, written under the supervision of Professor Erkki Somersalo, was entitled 'Non-selfadjoint inverse spectral problems and their applications to random bodies'. Already in his thesis, Matti demonstrated a remarkable command of different fields of mathematics, bringing together the spectral theory of operators, the geometry of Riemannian surfaces, Maxwell's equations and stochastic analysis. He has continued to develop all of these branches in the framework of inverse problems, the most remarkable results perhaps being in the field of differential geometry and inverse problems. Matti has always been a very generous researcher, sharing his ideas with his numerous collaborators. He has authored over sixty scientific articles, among which are a monograph on inverse boundary spectral problems with Alexander Kachalov and Yaroslav Kurylev and over forty articles in peer-reviewed journals of the highest standards. To get an idea of the wide range of Matti's interests, it is enough to say that he also has three US patents on medical imaging applications. Matti is currently professor of mathematics at Helsinki University of Technology, where he has created his own line of research with young talented researchers around him. He is a central person in the Centre of Excellence in Inverse Problems Research of the Academy of Finland. Previously, Matti Lassas has won several awards in his home country, including the prestigious Väisälä Prize of the Finnish Academy of Science and Letters in 2004. He is a highly esteemed colleague, teacher and friend, and the Great Diving Beetle of the Finnish Inverse Problems Society (http://venda.uku.fi/research/FIPS/), an honorary title for a person who has no fear of the deep. Erkki Somersalo
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
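The flavor of the eigenspace parametrization can be conveyed with a 1-D toy problem: expand the unknown in the first k eigenfunctions of a discrete Laplacian and fit only the coefficients. The forward map and the fixed (non-adaptive) basis below are simplifying assumptions; in the paper the basis is adapted during the optimization.

```python
import numpy as np

# Unknown medium expanded in a small eigenbasis instead of cell values,
# which regularizes the inversion without an explicit Tikhonov term.
n = 200
lapl = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
_, Phi = np.linalg.eigh(lapl)          # eigenfunctions, smooth modes first

rng = np.random.default_rng(7)
# toy forward operator: a Gaussian blur standing in for the Helmholtz solve
A = np.exp(-0.05 * (np.arange(n)[:, None] - np.arange(n)[None, :])**2)
c_true = np.zeros(n); c_true[60:120] = 1.0         # unknown wave-speed bump
d = A @ c_true + 0.01 * rng.standard_normal(n)

for k in (5, 20, 60):                  # small, slowly increasing basis size
    B = A @ Phi[:, :k]                 # forward map in coefficient space
    coef, *_ = np.linalg.lstsq(B, d, rcond=None)
    c_rec = Phi[:, :k] @ coef
    print(k, np.linalg.norm(c_rec - c_true) / np.linalg.norm(c_true))
```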
Orthogonalizing EM: A design-based least squares algorithm.
Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G
We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online.
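For the lasso case, the OEM-style update takes a simple closed form once the constant d dominates the spectrum of XᵀX; the sketch below is our reading of that iteration on simulated data with n much larger than p, not the authors' reference implementation.

```python
import numpy as np

# OEM-style iteration for penalized least squares: with d >= lambda_max(X^T X),
# the orthogonalized (augmented) design makes each M-step a closed-form
# update; for the lasso penalty it reduces to the soft-thresholded
# fixed-point iteration below.
def oem_lasso(X, y, lam, iters=500):
    d = np.linalg.norm(X, 2) ** 2            # any d >= lambda_max(X^T X)
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        u = beta + X.T @ (y - X @ beta) / d  # "complete-data" update
        beta = np.sign(u) * np.maximum(np.abs(u) - lam / d, 0.0)
    return beta

rng = np.random.default_rng(8)
X = rng.standard_normal((10000, 50))         # n much larger than p
beta_true = np.zeros(50); beta_true[:5] = 3.0
y = X @ beta_true + rng.standard_normal(10000)
print(oem_lasso(X, y, lam=50.0)[:8])
```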
PREFACE: Inverse Problems in Applied Sciences—towards breakthrough
NASA Astrophysics Data System (ADS)
Cheng, Jin; Iso, Yuusuke; Nakamura, Gen; Yamamoto, Masahiro
2007-06-01
These are the proceedings of the international conference `Inverse Problems in Applied Sciences—towards breakthrough' which was held at Hokkaido University, Sapporo, Japan on 3-7 July 2006 (http://coe.math.sci.hokudai.ac.jp/sympo/inverse/). There were 88 presentations and more than 100 participants, and we are proud to say that the conference was very successful. Nowadays, many new activities on inverse problems are flourishing at many centers of research around the world, and the conference has successfully gathered a world-wide variety of researchers. We believe that this volume contains not only main papers, but also conveys the general status of current research into inverse problems. This conference was the third biennial international conference on inverse problems, the core of which is the Pan-Pacific Asian area. The purpose of this series of conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries, and to lead the organization of activities concerning inverse problems centered in East Asia. The first conference was held at City University of Hong Kong in January 2002 and the second was held at Fudan University in June 2004. Following the preceding two successes, the third conference was organized in order to extend the scope of activities and build useful bridges to the next conference in Seoul in 2008. Therefore this third biennial conference was intended not only to establish collaboration and links between researchers in Asia and leading researchers worldwide in inverse problems but also to nurture interdisciplinary collaboration in theoretical fields such as mathematics, applied fields and evolving aspects of inverse problems. For these purposes, we organized tutorial lectures, serial lectures and a panel discussion as well as conference research presentations. This volume contains three lecture notes from the tutorial and serial lectures, and 22 papers. Especially at this flourishing time, it is necessary to carefully analyse the current status of inverse problems for further development. Thus we have opened with the panel discussion entitled `Future of Inverse Problems' with panelists: Professors J Cheng, H W Engl, V Isakov, R Kress, J-K Seo, G Uhlmann and the commentator: Elaine Longden-Chapman from IOP Publishing. The aims of the panel discussion were to examine the current research status from various viewpoints, to discuss how we can overcome any difficulties and how we can promote young researchers and open new possibilities for inverse problems such as industrial linkages. As one output, the panel discussion has triggered the organization of the Inverse Problems International Association (IPIA) which has led to its first international congress in the summer of 2007. Another remarkable outcome of the conference is, of course, the present volume: this is the very high quality online proceedings volume of Journal of Physics: Conference Series. Readers can see in these proceedings very well written tutorial lecture notes, and very high quality original research and review papers all of which show what was achieved by the time the conference was held. The electronic publication of the proceedings is a new way of publicizing the achievement of the conference. It has the advantage of wide circulation and cost reduction. We believe this is a most efficient method for our needs and purposes. We would like to take this opportunity to acknowledge all the people who helped to organize the conference. 
Guest Editors Jin Cheng, Fudan University, Shanghai, China Yuusuke Iso, Kyoto University, Kyoto, Japan Gen Nakamura, Hokkaido University, Sapporo, Japan Masahiro Yamamoto, University of Tokyo, Tokyo, Japan
Erem, B; Hyde, D E; Peters, J M; Duffy, F H; Brooks, D H; Warfield, S K
2015-04-01
The dynamical structure of the brain's electrical signals contains valuable information about its physiology. Here we combine techniques for nonlinear dynamical analysis and manifold identification to reveal complex and recurrent dynamics in interictal epileptiform discharges (IEDs). Our results suggest that recurrent IEDs exhibit some consistent dynamics, which may only last briefly, and so individual IED dynamics may need to be considered in order to understand their genesis. This could potentially serve to constrain the dynamics of the inverse source localization problem.
Towards an Effective Theory of Reformulation. Part 1; Semantics
NASA Technical Reports Server (NTRS)
Benjamin, D. Paul
1992-01-01
This paper describes an investigation into the structure of representations of sets of actions, utilizing semigroup theory. The goals of this project are twofold: to shed light on the relationship between tasks and representations, leading to a classification of tasks according to the representations they admit; and to develop techniques for automatically transforming representations so as to improve problem-solving performance. A method is demonstrated for automatically generating serial algorithms for representations whose actions form a finite group. This method is then extended to representations whose actions form a finite inverse semigroup.
Discrete-time entropy formulation of optimal and adaptive control problems
NASA Technical Reports Server (NTRS)
Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.
1992-01-01
The discrete-time version of the entropy formulation of optimal control problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty-equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying certainty-equivalent and adaptive control laws.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace span{b, Ab, ..., A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are presented, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
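To make the projection idea concrete, here is a minimal, self-contained sketch (our addition, not from the report) of the Arnoldi process followed by a GMRES-style solve over the resulting small subspace; the matrix size, subspace dimension, and tolerance are illustrative assumptions.

```python
# Minimal Arnoldi sketch: build an orthonormal basis V of the Krylov subspace
# span{b, Ab, ..., A^(m-1) b} and the small Hessenberg matrix H, so that the
# N-dimensional problem is approximated by one of dimension m << N.
import numpy as np

def arnoldi(A, b, m):
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # "happy breakdown": exact subspace found
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# GMRES-style use: minimize ||b - A x|| over x in the Krylov subspace.
rng = np.random.default_rng(0)
A = np.eye(200) + 0.1 * rng.standard_normal((200, 200))
b = rng.standard_normal(200)
V, H = arnoldi(A, b, 30)
e1 = np.zeros(H.shape[0]); e1[0] = np.linalg.norm(b)
y, *_ = np.linalg.lstsq(H, e1, rcond=None)   # small m-dimensional least squares
x = V[:, :H.shape[1]] @ y                    # approximate solution of A x = b
print(np.linalg.norm(b - A @ x))             # residual of the projected solution
```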
The pseudo-Boolean optimization approach to form the N-version software structure
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.
2015-10-01
The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for its solution. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design. These algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of these algorithm modifications, owing to the reduced search space.
Solvability of the electrocardiology inverse problem for a moving dipole.
Tolkachev, V; Bershadsky, B; Nemirko, A
1993-01-01
New formulations of the direct and inverse problems for the moving dipole are offered. It is suggested that the study be limited to a small area of the chest surface, which reduces the role of medium inhomogeneity. When formulating the direct problem, irregular components are considered. An algorithm for the simultaneous determination of the dipole and regular noise parameters is described and analytically investigated. It is shown that temporal overdetermination of the equations yields a unique solution of the inverse problem for the four leads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
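The recycling trick can be illustrated compactly: Krylov subspaces are invariant under diagonal shifts, K_m(J^T J + lam*I, g) = K_m(J^T J, g), so one Lanczos basis serves every damping parameter. The sketch below is our hedged illustration of that idea, not the MADS implementation; all names and sizes are assumptions.

```python
# Hedged sketch: one Krylov (Lanczos) basis, reused for all LM damping parameters.
import numpy as np

def lanczos(A_mv, g, m):
    """Orthonormal basis V and tridiagonal T for K_m(A, g); A symmetric, given as A_mv."""
    n = g.size
    V = np.zeros((n, m)); T = np.zeros((m, m))
    V[:, 0] = g / np.linalg.norm(g)
    beta = 0.0
    for j in range(m):
        w = A_mv(V[:, j])
        if j > 0:
            w -= beta * V[:, j - 1]
        alpha = V[:, j] @ w
        w -= alpha * V[:, j]
        T[j, j] = alpha
        if j + 1 < m:                       # no breakdown handling in this sketch
            beta = np.linalg.norm(w)
            T[j, j + 1] = T[j + 1, j] = beta
            V[:, j + 1] = w / beta
    return V, T

def lm_steps_all_dampings(J, r, dampings, m=20):
    """LM steps solving (J^T J + lam I) d = -J^T r, projected to one Krylov subspace."""
    g = J.T @ r                             # gradient; also the Krylov start vector
    A_mv = lambda v: J.T @ (J @ v)          # Gauss-Newton Hessian action, matrix-free
    V, T = lanczos(A_mv, g, m)              # built once ...
    rhs = V.T @ (-g)
    return [V @ np.linalg.solve(T + lam * np.eye(m), rhs)
            for lam in dampings]            # ... recycled for every damping parameter
```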
NASA Astrophysics Data System (ADS)
Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.
2015-05-01
To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions, so simplifying assumptions must be used. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular, we introduce a new finite-element-based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement at one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which local homogeneity is assumed. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations, the contrast-to-noise ratio (CNR) for the region with a spherical inclusion increases from an average value of 1.5 to 17 with the proposed method instead of the local inversion with the homogeneity assumption, and similarly, in the prostate phantom experiment, the CNR improved from an average value of 1.6 to about 20.
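For contrast, the conventional local-homogeneity baseline that the paper improves upon reduces, under incompressibility, to the algebraic Helmholtz relation mu*laplacian(u) + rho*omega^2*u = 0, so the modulus follows pointwise from one displacement component. A minimal sketch of that baseline (our addition; grid, density, and frequency values are illustrative assumptions, and this is not the paper's finite element method):

```python
# Conventional local-homogeneity inversion: mu = -rho * omega^2 * u / laplacian(u).
import numpy as np

def shear_modulus_local(u, dx, rho=1000.0, freq=100.0, eps=1e-12):
    """u: complex axial displacement on a 2D grid at one vibration frequency."""
    omega = 2.0 * np.pi * freq
    # Crude 5-point Laplacian; np.roll wraps at the boundary (fine for a sketch).
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    # Near-zero Laplacian (displacement nodes) is exactly where single-frequency
    # inversion breaks down -- motivating the multi-frequency excitation above.
    return np.real(-rho * omega**2 * u / (lap + eps))   # storage (real) modulus
```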
Ship dynamics for maritime ISAR imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2008-02-01
Demand is increasing for imaging ships at sea. Conventional SAR fails because the ships are usually in motion, both with a forward velocity and with other linear and angular motions that accompany sea travel. Because the target itself is moving, this becomes an Inverse-SAR, or ISAR, problem. Developing useful ISAR techniques and algorithms is considerably aided by first understanding the nature and characteristics of ship motion. Consequently, a brief study of some principles of naval architecture sheds useful light on this problem. We attempt to do so here. Ship motions are analyzed for their impact on range-Doppler imaging using Inverse Synthetic Aperture Radar (ISAR). A framework for analysis is developed, and limitations of simple ISAR systems are discussed.
Transmission loss measurement of acoustic material using time-domain pulse-separation method (L).
Sun, Liang; Hou, Hong
2011-04-01
An alternative method for measuring normal-incidence sound transmission loss (nSTL) is presented, based on the time-domain separation of a so-called Butterworth pulse with a short duration of about 1 ms in a standing wave tube. During generation of the pulse, the inverse filter principle is adopted to compensate for the loudspeaker response; in addition, the effect of the characteristics of the tube termination is eliminated during pulse generation, so that a single plane pulse wave is obtained in the standing wave tube, which makes the nSTL measurement very simple. A polyurethane foam material with low transmission loss and a rubber material with relatively high transmission loss are used to verify the proposed method. Compared with the traditional two-load method, relatively good agreement between the two methods is observed. The main error of this method results from the measurement accuracy of the amplitude of the transmission coefficient.
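The inverse-filter compensation step can be sketched as a regularized spectral division; the following is our illustrative rendering of that principle under assumed names and a Tikhonov-style regularizer, not the authors' exact procedure.

```python
# Hedged sketch: design a drive signal whose output through the measured system
# response approximates the desired short pulse, via regularized deconvolution.
import numpy as np

def inverse_filter_drive(desired_pulse, measured_response, beta=1e-3):
    """Return the loudspeaker drive signal x such that x * h ~= desired pulse."""
    n = len(desired_pulse)
    D = np.fft.rfft(desired_pulse, n)
    H = np.fft.rfft(measured_response, n)
    # X = D * conj(H) / (|H|^2 + beta*|H|_max^2): the beta term prevents blow-up
    # at frequencies where the loudspeaker response is weak.
    X = D * np.conj(H) / (np.abs(H) ** 2 + beta * np.max(np.abs(H)) ** 2)
    return np.fft.irfft(X, n)
```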
Confidence set inference with a prior quadratic bound
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
In the uniqueness part of a geophysical inverse problem, the observer wants to predict all likely values of P unknown numerical properties z = (z_1, ..., z_P) of the earth from measurement of D other numerical properties y(0) = (y_1(0), ..., y_D(0)) and knowledge of the statistical distribution of the random errors in y(0). The data space Y containing y(0) is D-dimensional, so when the model space X is infinite-dimensional the linear uniqueness problem usually is insoluble without prior information about the correct earth model x. If that information is a quadratic bound on x (e.g., energy or dissipation rate), Bayesian inference (BI) and stochastic inversion (SI) inject spurious structure into x, implied by neither the data nor the quadratic bound. Confidence set inference (CSI) provides an alternative inversion technique free of this objection. CSI is illustrated in the problem of estimating the geomagnetic field B at the core-mantle boundary (CMB) from components of B measured on or above the earth's surface. Neither the heat flow nor the energy bound is strong enough to permit estimation of B(r) at single points on the CMB, but the heat flow bound permits estimation of uniform averages of B(r) over discs on the CMB, and both bounds permit weighted disc averages with continuous weighting kernels. Both bounds also permit estimation of low-degree Gauss coefficients at the CMB. The heat flow bound resolves them up to degree 8 if the crustal field at satellite altitudes must be treated as a systematic error, but can resolve to degree 11 under the most favorable statistical treatment of the crust. These two limits produce circles of confusion on the CMB with diameters of 25 deg and 19 deg, respectively.
Data inversion immune to cycle-skipping using AWI
NASA Astrophysics Data System (ADS)
Guasch, L.; Warner, M.; Umpleby, A.; Yao, G.; Morgan, J. V.
2014-12-01
Over the last decade, 3D Full Waveform Inversion (FWI) has become a standard model-building tool in exploration seismology, especially in oil and gas applications, thanks to the high-quality (spatially dense in sources and receivers) datasets acquired by the industry. FWI provides superior quantitative images compared to its travel-time counterparts (travel-time based inversion methods) because it aims to match all the information in the observations instead of a severely restricted subset, namely picked arrivals. The downside is that the solution space explored by FWI has a large number of local minima, and since the solution is restricted to local optimization methods (due to the cost of evaluating the objective function), the success of the inversion depends on starting within the basin of attraction of the global minimum. Local minima can exist for a wide variety of reasons, and it seems unlikely that a formulation of the problem that eliminates all of them, by defining the optimization problem in a form that results in a monotonic objective function, exists. However, a significant number of local minima are created by the definition of the data misfit. In its standard formulation, FWI compares observed (field) data with predicted data (generated with a synthetic model) by subtracting one from the other, and the objective function is defined as some norm of this difference. The combination of this criterion with the oscillatory nature of seismic data produces the well-known phenomenon of cycle-skipping, where model updates try to match the nearest cycles of one dataset to the other. In order to avoid cycle-skipping we propose a different comparison between observed and predicted data, based on Wiener filters, which exploits the fact that the "identity" Wiener filter is a spike at zero lag. This gives rise to a new objective function without cycle-skipping-related local minima, and therefore removes the need for accurate starting models or low frequencies in the data. This new technique, called Adaptive Waveform Inversion (AWI), appears consistently superior to conventional FWI.
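A minimal sketch of the Wiener-filter misfit idea follows (our illustration: the wrap-around shifts, regularizer, and |lag| penalty weight are assumptions, not the authors' production AWI code). A filter concentrated at zero lag gives a small misfit; energy pushed to large lags, as with cycle-skipped predictions, is penalized.

```python
# Hedged AWI-style misfit: fit a Wiener filter mapping predicted -> observed,
# then penalize filter energy away from zero lag, normalized by total energy.
import numpy as np

def awi_misfit(predicted, observed, L=50, eps=1e-6):
    lags = np.arange(-L, L + 1)
    # Columns of P are the predicted trace shifted by each lag (circular shifts,
    # acceptable for a sketch; production code would zero-pad instead).
    P = np.stack([np.roll(predicted, k) for k in lags], axis=1)
    # Regularized least-squares Wiener filter w solving P w ~= observed
    w = np.linalg.solve(P.T @ P + eps * np.eye(2 * L + 1), P.T @ observed)
    T = np.abs(lags)                      # penalty weight grows away from zero lag
    return np.sum((T * w) ** 2) / (np.sum(w ** 2) + 1e-30)
```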
MAP Estimators for Piecewise Continuous Inversion
2016-08-08
M M Dunlop and A M Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK
Published 8 August 2016
We study the inverse problem of estimating a field ua from data comprising a finite set of nonlinear functionals of ua... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP...
Time-domain full waveform inversion using instantaneous phase information with damping
NASA Astrophysics Data System (ADS)
Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun
2018-06-01
In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. The instantaneous phase information has great potential for overcoming the local minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which arises in the numerical calculation, prevents its direct application. In order to avoid the phase wrapping problem, we choose to use the exponential phase combined with a damping method, which gives an instantaneous phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase and derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared in numerical examples, which indicate that, when the seismic data lack low-frequency information, our method is an effective and efficient approach for constructing an initial model for full waveform inversion.
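The exponential-phase idea can be sketched directly with SciPy's Hilbert transform: the analytic signal yields exp(i*phi) without ever unwrapping phi, which is exactly how the wrapping problem is sidestepped. The damping constant and sampling interval below are illustrative assumptions.

```python
# Hedged sketch of an exponential instantaneous-phase misfit with time damping.
import numpy as np
from scipy.signal import hilbert

def exp_phase_misfit(syn, obs, dt=0.004, alpha=2.0):
    t = np.arange(len(syn)) * dt
    damp = np.exp(-alpha * t)                  # damping used for multi-stage inversion
    zs, zo = hilbert(syn * damp), hilbert(obs * damp)   # analytic signals
    es = zs / (np.abs(zs) + 1e-12)             # exp(i * instantaneous phase), no
    eo = zo / (np.abs(zo) + 1e-12)             # explicit (wrapped) phase needed
    return 0.5 * np.sum(np.abs(es - eo) ** 2)
```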
Education and the Market: A Response to "Imagined Evidence and False Imperatives"
ERIC Educational Resources Information Center
Holmes, Mark
2009-01-01
While Merrifield is correct in his basic argument that so-called "market reforms" in el/sec schooling are far from being pure market, he is incorrect to suggest that purer market projects are needed together with simulations of pure market reforms. There are two fundamental problems in that thesis. First, it is not clear that school choice in…
Lichnerowicz-type equations with sign-changing nonlinearities on complete manifolds with boundary
NASA Astrophysics Data System (ADS)
Albanese, Guglielmo; Rigoli, Marco
2017-12-01
We prove an existence theorem for positive solutions to Lichnerowicz-type equations on complete manifolds with boundary (M, ∂M, 〈 , 〉) and nonlinear Neumann conditions. This kind of nonlinear problem arises quite naturally in the study of solutions of the Einstein-scalar field equations of General Relativity in the framework of the so-called Conformal Method.
ERIC Educational Resources Information Center
Van de Walle, P.; Hallemans, A.; Truijen, S.; Gosselink, R.; Heyrman, L.; Molenaers, G.; Desloovere, K.
2012-01-01
Gait efficiency in children with cerebral palsy is decreased. To date, most research did not include the upper body as a separate functional unit when exploring these changes in gait efficiency. Since children with spastic diplegia often experience problems with trunk control, they could benefit from separate evaluation of the so-called "passenger…
ERIC Educational Resources Information Center
Swain, Amy
2013-01-01
Schools of education have seen many changes over the last 100 years (Labaree 2004). More recent modifications have included the slow and steady elimination of the social foundations of education in lieu of a more direct attention to teacher skills and basic training. The increased focus on the so-called "nuts and bolts" of teacher…
"Better" People, Better Teaching: The Vision of the National Teacher Corps, 1965-1968
ERIC Educational Resources Information Center
Rogers, Bethany
2009-01-01
This article focuses on the period between 1966 and 1968, when the original vision of the policymakers of the National Teacher Corps (NTC) and the federal staffers who created the NTC initiative held sway. In their vision, the "best and brightest" (according to their criteria) could better solve the problems of educating so-called disadvantaged…
ERIC Educational Resources Information Center
Moller, Asa
2012-01-01
Compensatory pedagogy is in theory a strategy used to manage social and cultural diversity (Sleeter, 2007) by providing extra resources or special treatment for so-called deprived groups. A problem with this particular kind of approach to social and cultural diversity is that it lacks critical awareness of the way social differences (i.e. race,…
Solutions to inverse plume in a crosswind problem using a predictor - corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
Investigation for minimalist solutions to the inverse convection problem of a plume in a crosswind has developed a predictor - corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions with the corrections from the plume strength are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
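The two-simulation predictor-corrector logic can be sketched as follows; the linear inverse interpolation and the unit-strength forward map `forward` are our assumptions, standing in for the paper's numerical simulations of the domain.

```python
# Hedged sketch: predictor recovers source strength from two reference simulations;
# corrector then picks the source location consistent with that strength.
import numpy as np

def predict_strength(samples, sim1, sim2, Q1, Q2):
    """Least-squares fit of Q assuming samples vary linearly between the two runs."""
    dsim = (sim2 - sim1) / (Q2 - Q1)           # sensitivity of samples to strength
    return Q1 + dsim @ (samples - sim1) / (dsim @ dsim)

def correct_location(samples, Q, candidates, forward):
    """Corrector: candidate location whose Q-scaled unit response best fits samples.
    `forward(c)` is an assumed unit-strength response at the sampling points."""
    errs = [np.linalg.norm(samples - Q * forward(c)) for c in candidates]
    return candidates[int(np.argmin(errs))]
```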
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string-matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and holds great promise for use in real-world applications.
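The core idea of matching from a fixed starting position via hash lookups on length-grouped patterns can be sketched as follows; this illustrates the general hash-plus-binary approach, not the exact MH data layout.

```python
# Hedged sketch: multi-pattern prefix matching via a hash table keyed by pattern
# length, so each candidate length costs one O(1)-average set lookup.

def build_tables(patterns):
    by_len = {}
    for p in patterns:
        by_len.setdefault(len(p), set()).add(p)   # group patterns by length
    return by_len

def match_prefix(url, by_len):
    """Return all patterns that match url starting at position 0."""
    hits = []
    for length in sorted(by_len):                 # candidate lengths, shortest first
        if length > len(url):
            break
        if url[:length] in by_len[length]:        # hash lookup replaces char-by-char scan
            hits.append(url[:length])
    return hits

# Example (illustrative patterns)
tables = build_tables(["/api/", "/api/v2/", "/static/"])
print(match_prefix("/api/v2/users", tables))      # ['/api/', '/api/v2/']
```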
Cannabis use and dating violence among college students: A call for research.
Shorey, Ryan C; Haynes, Ellen; Strauss, Catherine; Temple, Jeff R; Stuart, Gregory L
2017-01-01
Dating violence is a serious and prevalent problem on college campuses. Although there is a robust literature documenting that alcohol use is consistently associated with increased risk for perpetrating dating violence, little research has examined the role of cannabis in dating violence perpetration. With increasing legalisation of cannabis throughout the world, it is imperative to understand what role, if any, cannabis may play in the important public health problem of dating violence. In this commentary, we discuss the current state of the research on cannabis and dating violence and suggest avenues for additional research in this area. It is critical that we conduct methodologically sound research on the association between cannabis and dating violence so that we can understand what role, if any, cannabis exerts on this important problem. [Shorey RC, Haynes E, Strauss C, Temple JR, Stuart GL. Cannabis use and dating violence among college students: A call for research. Drug Alcohol Rev 2017;36:17-19]. © 2017 Australasian Professional Society on Alcohol and other Drugs.
[The first and foremost tasks of the medical service].
Chizh, I M
1997-07-01
Given the current situation in the Russian Federation, the reinforcement of the army and fleet with healthy personnel, the scarcity of the call-up quota, and its poor quality are among the main problems of the Armed Forces at the state level. A uniform comprehensive program of medico-social support for citizens preparing for military service is necessary. The present situation is complicated by many infectious diseases, so the role and place of the military medical service are growing. In recent years the structure of the quota served by military doctors, along with a number of other parameters, has changed greatly, requiring a revision of some priorities. The problem of staffing the Armed Forces with medical service officers remains acute; its solution requires full enrollment in the military medical faculty, as well as the admission of officers under contract and the calling up of reserve officers. The main lessons learned by the medical service during combat actions in the Republic of Chechnya are also formulated in the article.
A Very Large Area Network (VLAN) knowledge-base applied to space communication problems
NASA Technical Reports Server (NTRS)
Zander, Carol S.
1988-01-01
This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit from the model are discussed, and an enhanced version of the model, incorporating the knowledge needed for the missile detection-destruction problem, is then presented. A satellite network, or VLAN, is a network that includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes, with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes: each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
Generalized SMO algorithm for SVM-based multitask learning.
Cai, Feng; Cherkassky, Vladimir
2012-06-01
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.
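For readers unfamiliar with SMO, the classical two-multiplier analytic update that the brief generalizes looks roughly as follows; this is a textbook sketch for the standard binary SVM, not the SVM+MTL variant.

```python
# Hedged sketch of Platt's SMO pair update: optimize two Lagrange multipliers
# analytically while preserving sum(alpha_i * y_i) = 0, then clip to the box [0, C].
import numpy as np

def smo_pair_update(alpha, y, K, E, i, j, C):
    """One analytic update of (alpha[i], alpha[j]); E holds prediction errors f(x)-y."""
    if y[i] != y[j]:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]     # curvature along the constraint line
    if eta <= 0 or L == H:
        return alpha                            # skip degenerate pairs
    a_j = np.clip(alpha[j] + y[j] * (E[i] - E[j]) / eta, L, H)
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)   # restore the equality constraint
    alpha = alpha.copy()
    alpha[i], alpha[j] = a_i, a_j
    return alpha
```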
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genest-Beaulieu, C.; Bergeron, P.
We present a comparative analysis of atmospheric parameters obtained with the so-called photometric and spectroscopic techniques. Photometric and spectroscopic data for 1360 DA white dwarfs from the Sloan Digital Sky Survey (SDSS) are used, as well as spectroscopic data from the Villanova White Dwarf Catalog. We first test the calibration of the ugriz photometric system by using model atmosphere fits to observed data. Our photometric analysis indicates that the ugriz photometry appears well calibrated when the SDSS to AB_95 zeropoint corrections are applied. The spectroscopic analysis of the same data set reveals that the so-called high-log g problem can be solved by applying published correction functions that take into account three-dimensional hydrodynamical effects. However, a comparison between the SDSS and the White Dwarf Catalog spectra also suggests that the SDSS spectra still suffer from a small calibration problem. We then compare the atmospheric parameters obtained from both fitting techniques and show that the photometric temperatures are systematically lower than those obtained from spectroscopic data. This systematic offset may be linked to the hydrogen line profiles used in the model atmospheres. We finally present the results of an analysis aimed at measuring surface gravities using photometric data only.
On judgment and judgmentalism: how counselling can make people better
Gibson, S
2005-01-01
Counsellors, like other members of the caring professions, are required to practise within an ethical framework, at least in so far as they seek professional accreditation. As such, the counsellor is called upon to exercise her moral agency. In most professional contexts this requirement is, in itself, unproblematic. It has been suggested, however, that counselling practice does present a problem in this respect, in so far as the counsellor is expected to take a non-judgmental stance and an attitude of "unconditional positive regard" toward the client. If, as might appear to be the case, this stance and attitude are at odds with the making of moral judgments, the possibility of an adequate ethics of counselling is called into question. This paper explores the nature and extent of the problem, suggesting that, understood in a Kantian context, non-judgmentalism can be seen to be at odds with neither the moral agency of the counsellor nor that of the client. Instead, it is argued, the relationship between the non-judgmental counsellor and her client is a fundamentally moral relationship, based on respect for the client's unconditional worth as a moral agent. PMID:16199597
Schröder, Lisa; Seehagen, Sabine; Zmyj, Norbert; Hebebrand, Johannes
2016-01-01
Supporting other human beings is a fundamental aspect of human societies. Such so-called prosocial behavior is expressed in helping others, cooperating and sharing with them. This article gives an overview both of the development of prosocial behavior across childhood and of the relationship between prosociality and externalizing and internalizing problems. Especially externalizing problems are negatively associated with prosocial behavior, whereas the relationships with prosocial behavior are more heterogeneous for internalizing problems. Studies investigating developmental trajectories demonstrate that prosocial behavior and externalizing problems are not opposite ends of a continuum. Rather, they are two independent dimensions that may also co-occur in development. The same applies to internalizing problems, which can co-occur with pronounced prosociality as well as with low prosociality.
Problem solving in the borderland between mathematics and physics
NASA Astrophysics Data System (ADS)
Jensen, Jens Højgaard; Niss, Martin; Jankvist, Uffe Thomas
2017-01-01
The article addresses the problématique of where mathematization is taught in the educational system, and who teaches it. Mathematization is usually not a part of mathematics programs at the upper secondary level, but we argue that physics teaching has something to offer in this respect, if it focuses on solving so-called unformalized problems, where a major challenge is to formalize the problems in mathematical and physical terms. We analyse four concrete examples of unformalized problems whose formalization involves different orders of mathematization and of applying physics to the problem, but all of which require mathematization. The analysis leads to the formulation of a model by which we attempt to capture the important steps of the process of solving unformalized problems by means of mathematization and physicalization.
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
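The contrast between hard TSVD truncation and smooth singular-value reweighting can be sketched as below; the particular weight function, controlled by a tuning parameter, is our illustrative assumption, since the paper defines its own rule for down-weighting high-overlap components.

```python
# Hedged sketch: hard TSVD (as in the original nulling beamformer) versus a smooth
# reweighting of the singular values of an ROI gain (leadfield) matrix.
import numpy as np

def tsvd(G, k):
    """Hard truncation: keep the k strongest components, drop the rest."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def reweighted(G, weights):
    """Smooth alternative: rescale each singular component instead of dropping it."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U * (weights * s)) @ Vt

# Example: smoothly suppress the dominant (here: assumed most-overlapping)
# directions while retaining the weak components that carry separating power.
G = np.random.default_rng(1).standard_normal((64, 12))   # sensors x ROI sources
U, s, Vt = np.linalg.svd(G, full_matrices=False)
mu = 0.5                                                  # tuning parameter (assumed)
w = (mu * s[0]) ** 2 / (s ** 2 + (mu * s[0]) ** 2)        # small weight for large s
G_supp = reweighted(G, w)
```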
Anterior cingulate cortex and intuitive bias detection during number conservation.
Simon, Grégory; Lubin, Amélie; Houdé, Olivier; De Neys, Wim
2015-01-01
Children's number conservation is often biased by misleading intuitions but the precise nature of these conservation errors is not clear. A key question is whether children detect that their erroneous conservation judgment is unwarranted. The present study reanalyzed available fMRI data to test the implication of the anterior cingulate cortex (ACC) in this detection process. We extracted mean BOLD (Blood Oxygen Level Dependent) signal values in an independently defined ACC region of interest (ROI) during presentation of classic and control number conservation problems. In classic trials, an intuitively cued visuospatial response conflicted with the correct conservation response, whereas this conflict was not present in the control trials. Results showed that ACC activation increased when solving the classic conservation problems. Critically, this increase did not differ between participants who solved the classic problems correctly (i.e., so-called conservers) and incorrectly (i.e., so-called non-conservers). Additional control analyses of inferior and lateral prefrontal ROIs showed that the group of conservers did show stronger activation in the right inferior frontal gyrus and right lateral middle frontal gyrus. In line with recent behavioral findings, these data lend credence to the hypothesis that even non-conserving children detect the biased nature of their judgment. The key difference between conservers and non-conservers seems to lie in a differential recruitment of inferior and lateral prefrontal regions associated with inhibitory control.
Global inverse modeling of CH4 sources and sinks: an overview of methods
NASA Astrophysics Data System (ADS)
Houweling, Sander; Bergamaschi, Peter; Chevallier, Frederic; Heimann, Martin; Kaminski, Thomas; Krol, Maarten; Michalak, Anna M.; Patra, Prabir
2017-01-01
The aim of this paper is to present an overview of inverse modeling methods that have been developed over the years for estimating the global sources and sinks of CH4. It provides insight into how techniques and estimates have evolved over time and what the remaining shortcomings are. As such, it serves a didactical purpose of introducing apprentices to the field, but it also takes stock of developments so far and reflects on promising new directions. The main focus is on methodological aspects that are particularly relevant for CH4, such as its atmospheric oxidation, the use of methane isotopologues, and specific challenges in atmospheric transport modeling of CH4. The use of satellite retrievals receives special attention as it is an active field of methodological development, with special requirements on the sampling of the model and the treatment of data uncertainty. Regional scale flux estimation and attribution is still a grand challenge, which calls for new methods capable of combining information from multiple data streams of different measured parameters. A process model representation of sources and sinks in atmospheric transport inversion schemes allows the integrated use of such data. These new developments are needed not only to improve our understanding of the main processes driving the observed global trend but also to support international efforts to reduce greenhouse gas emissions.
Inverting dedevelopment: geometric singularity theory in embryology
NASA Astrophysics Data System (ADS)
Bookstein, Fred L.; Smith, Bradley R.
2000-10-01
The diffeomorphism model so useful in the biomathematics of normal morphological variability and disease is inappropriate for applications in embryogenesis, where whole coordinate patches are created out of single points. For this application we need a suitable algebra for the creation of something from nothing in a carefully organized geometry: a formalism for parameterizing discrete nondifferentiabilities of invertible functions on R^k, k > 1. One easy way to begin is via the inverse of the development map - call it the dedevelopment map, the deformation backwards in time. Extrapolated, this map will inevitably have singularities at which its derivative is zero. When the dedevelopment map is inverted to face forward in time, the singularities become appropriately isolated infinities of derivative. We have recently introduced growth visualizations via extrapolations to the isolated singularities at which only one directional derivative is zero. Maps inverse to these create new coordinate patches directionally rather than radially. The most generic singularity that suits this purpose is the crease f(x, y) = (x, x^2 y + y^3), which has already been applied in morphometrics for the description of focal morphogenetic phenomena. We apply it to embryogenesis in the form of its analytic inverse, and demonstrate its power using a priceless new data set of mouse embryos imaged in 3D by micro-MR with voxels smaller than 100 μm^3.
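A quick check (our addition) that the crease map degenerates only at an isolated point, as a generic singularity should:

```latex
% Jacobian of the crease map and its determinant
\[
f(x,y) = \bigl(x,\; x^{2}y + y^{3}\bigr), \qquad
Df = \begin{pmatrix} 1 & 0 \\ 2xy & x^{2} + 3y^{2} \end{pmatrix}, \qquad
\det Df = x^{2} + 3y^{2},
\]
\[
\det Df = 0 \iff (x,y) = (0,0),
\]
% so the derivative vanishes only at the origin: the singularity is isolated.
```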
NASA Technical Reports Server (NTRS)
Solomon, Sean C.; Jordan, Thomas H.
1993-01-01
Long-wavelength variations in geoid height, bathymetry, and SS-S travel times are all relatable to lateral variations in the characteristic temperature and bulk composition of the upper mantle. The temperature and composition are in turn relatable to mantle convection and the degree of melt extraction from the upper mantle residuum. Thus the combined inversion of the geoid or gravity field, residual bathymetry, and seismic velocity information offers the promise of resolving fundamental aspects of the pattern of mantle dynamics. The use of differential body wave travel times as a measure of seismic velocity information, in particular, permits resolution of lateral variations at scales not resolvable by conventional global or regional-scale seismic tomography with long-period surface waves. These intermediate scale lengths, well resolved in global gravity field models, are crucial for understanding the details of any chemical or physical layering in the mantle and of the characteristics of so-called 'small-scale' convection beneath oceanic lithosphere. In 1991, a three-year project was proposed to the NASA Geophysics Program to carry out a systematic inversion of long-wavelength geoid anomalies, residual bathymetric anomalies, and differential SS-S travel time delays for the lateral variation in characteristic temperature and bulk composition of the oceanic upper mantle. The project was funded as a three-year award, beginning on 1 Jan. 1992.
Are two plasma equilibrium states possible when the emission coefficient exceeds unity?
NASA Astrophysics Data System (ADS)
Campanell, M. D.; Umansky, M. V.
2017-05-01
Two floating sheath solutions with strong electron emission in planar geometry have been proposed, a "space-charge limited" (SCL) sheath and an "inverse" sheath. SCL and inverse models contain different assumptions about conditions outside the sheath (e.g., the velocity of ions entering the sheath), so it is not yet clear whether both sheaths are possible in practice, or only one. Here we treat the global presheath-sheath problem for a plasma produced volumetrically between two planar walls. We show that all equilibrium requirements, (a) the floating condition, (b) plasma shielding, and (c) presheath force balance, can indeed be satisfied in two different ways when the emission coefficient γ > 1. There is one solution with SCL sheaths and one with inverse sheaths, each with sharply different presheath distributions. As we show for the first time in 1D-1V simulations, an SCL and an inverse equilibrium are both possible in plasmas with the same upstream properties (e.g., same N and Te). However, maintaining a true SCL equilibrium requires no ionization or charge exchange collisions in the sheath, or else cold ion accumulation in the SCL's "dip" forces a transition to the inverse. This suggests that only a monotonic inverse-type sheath potential should exist at any plasma-facing surface with strong emission, whether it be a divertor plate, emissive probe, dust grain, Hall thruster channel wall, sunlit object in space, etc. Nevertheless, SCL sheaths might still be possible if the ions in the dip can escape. Our simulations demonstrate ways in which the SCL and inverse regimes might be distinguished experimentally based on large-scale presheath effects, without having to probe inside the sheath.
[Substance abuse and toxicity. Fetal drug syndrome].
Rodé, Magdolna
2003-08-10
24% of 16-year-old adolescents have already consumed so-called substances of abuse. We must make children, teachers, parents, lawyers, priests, physicians, and families aware of the effects and outcomes of drug use. This is just one of many similar unsolved problems of society, like AIDS and smoking. It is imperative that education for a healthy lifestyle be taught at every level of social life. The fight against the hard problems emerging from drug abuse must continuously be given its proper place in the teaching of medicine and in our everyday activity.
[Medicolegal problems of "dyadic death"].
Kunz, Jerzy; Bolechała, Filip; Kaliszczak, Paweł
2002-01-01
The authors present 9 cases of homicide followed by suicide of the perpetrator, so-called dyadic death, from the practice of the Cracow Forensic Medicine Chair. The circumstances of the events and the medicolegal and psychiatric problems are discussed in view of the literature. The typical perpetrator is a male of average age 49, killing his spouse or children. The major reasons for dyadic death are breakdown of a relationship, mental and somatic diseases, and financial stress. Very uncommon in dyadic death are cases of murder of people outside the closest family.
Probabilistic Cross-identification in Crowded Fields as an Assignment Problem
NASA Astrophysics Data System (ADS)
Budavári, Tamás; Basu, Amitabh
2016-10-01
One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate associations of similar likelihoods. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
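In practice, the catalog-level matching can be posed directly with an off-the-shelf Hungarian solver; in the sketch below (our addition), a Gaussian positional log-likelihood stands in for the paper's marginal likelihoods, and the coordinates and astrometric uncertainty are illustrative assumptions.

```python
# Hedged sketch: two-way catalog matching as an assignment problem, maximizing the
# total log-likelihood of the pairing via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_catalogs(coords1, coords2, sigma=1.0 / 3600.0):
    """Match rows of coords1 to rows of coords2 (coordinates in degrees)."""
    d2 = ((coords1[:, None, :] - coords2[None, :, :]) ** 2).sum(-1)
    log_like = -0.5 * d2 / sigma**2                 # Gaussian positional log-likelihood
    rows, cols = linear_sum_assignment(-log_like)   # Hungarian: maximize total likelihood
    return [(int(i), int(j)) for i, j in zip(rows, cols)]

# Toy example with two sources each (illustrative positions, ~1 arcsec errors)
cat1 = np.array([[10.001, 20.000], [10.010, 20.010]])
cat2 = np.array([[10.009, 20.011], [10.000, 20.001]])
print(match_catalogs(cat1, cat2))                   # [(0, 1), (1, 0)]
```

Unlike greedy nearest-neighbor matching, the assignment formulation resolves crowded-field multiplicity globally: each detection is used at most once, and the pairing is jointly optimal.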
A General Symbolic Method with Physical Applications
NASA Astrophysics Data System (ADS)
Smith, Gregory M.
2000-06-01
A solution to the problem of unifying the General Relativistic and Quantum Theoretical formalisms is given that introduces a new non-axiomatic symbolic method and an algebraic generalization of the Calculus to non-finite symbolisms without reference to the concept of a limit. An essential feature of the non-axiomatic method is the inadequacy of any (finite) statements: identifying this aspect of the theory with the "existence of an external physical reality" both allows the method to be consistent with the results of experiments and avoids the so-called "measurement problem" of quantum theory.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory
2017-04-01
The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a manner similar to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer suitable deterministic choices for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials, Dugway Proving Ground, Utah.
Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample
NASA Astrophysics Data System (ADS)
Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine
2017-10-01
In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the Tess and Plato2.0 missions. However, the photometry revolution must also be followed by progress in stellar modelling, in order to lead to more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvements in stellar modelling. In this contribution, we will apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help us constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core condition indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We will show how this technique can be applied to the Kepler Legacy sample and how new indicators can help us further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.
Review of the inverse scattering problem at fixed energy in quantum mechanics
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system and one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin
2016-01-01
This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case in which the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to such solutions. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found to be in good agreement with exact and existing solutions in the literature. The results confirm that SCEM has an advantage over other existing methods in terms of easy applicability and effectiveness.