Fast algorithm for computing the Abel inversion integral in broadband reflectometry
Nunes, F.D.
1995-10-01
The application of the Hansen-Jablokow recursive technique is proposed for the numerical computation of the Abel inversion integral which is used in (O-mode) frequency-modulated broadband reflectometry to evaluate plasma density profiles. Compared to the usual numerical methods, the recursive algorithm allows substantial time savings that can be important when processing massive amounts of data aiming to control the plasma in real time. © 1995 American Institute of Physics.
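For orientation, below is a minimal numerical sketch of the standard O-mode reflectometry Abel inversion (the integral this record evaluates); the Hansen-Jablokow recursion itself is not spelled out in the abstract, so this is the textbook form, not the paper's method. The cutoff-layer position follows from the measured group delay tau(f) via r(f_c) = r_0 - (c/pi) * Int_0^{f_c} tau(f) df / sqrt(f_c^2 - f^2), and the substitution f = f_c*sin(theta) removes the endpoint singularity. The callable `tau` and edge position `r0` are placeholders.

```python
import numpy as np

C_LIGHT = 2.99792458e8  # speed of light, m/s

def cutoff_position(tau, f_c, r0, n_theta=201):
    """Cutoff-layer radius for probing frequency f_c from group delay tau(f).

    With f = f_c*sin(theta) the Abel integral becomes
    Int_0^{pi/2} tau(f_c*sin(theta)) d(theta), free of the endpoint singularity.
    """
    theta = np.linspace(0.0, np.pi / 2.0, n_theta)
    integrand = tau(f_c * np.sin(theta))   # tau in seconds, f in Hz
    return r0 - (C_LIGHT / np.pi) * np.trapz(integrand, theta)
```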
Abel inversion method for cometary atmospheres.
NASA Astrophysics Data System (ADS)
Hubert, Benoit; Opitom, Cyrielle; Hutsemekers, Damien; Jehin, Emmanuel; Munhoven, Guy; Manfroid, Jean; Bisikalo, Dmitry V.; Shematovich, Valery I.
2016-04-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight joining the observing instrument and the gas of the coma. This integration is the so-called Abel transform of the local emission rate. We develop a method specifically adapted to the inversion of the Abel transform of cometary emissions, which retrieves the radial profile of the emission rate of any unabsorbed emission under the hypothesis of spherical symmetry of the coma. The method uses weighted least squares fitting and analytical results. A Tikhonov regularization technique is applied to reduce the possible effects of noise and ill-conditioning, and standard error propagation techniques are implemented. Several theoretical tests of the inversion technique are carried out to show its validity and robustness, and show that the method is only weakly dependent on any constant offset added to the data, which reduces the dependence of the retrieved emission rate on the background subtraction. We apply the method to observations of three different comets observed using the TRAPPIST instrument: 103P/Hartley 2, F6/Lemmon, and A1/Siding Spring. We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the emission rates derived from the observed flux of CN emission at 387 nm and from the C2 emission at 514.1 nm of comet Siding Spring both present an easily identifiable shoulder that corresponds to the separation between pre- and post-outburst gas. As a general result, we show that diagnosing properties and features of the coma using the emission rate is easier than directly using the observed flux. We also determine the parameters of a Haser model fitting the inverted data and fitting the line-of-sight integrated observation, for which we provide the exact analytical expression of the line-of-sight integration
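For illustration, here is a minimal sketch of the two generic ingredients the abstract names, an Abel (line-of-sight) transform plus Tikhonov-regularized least squares, assuming a spherically symmetric coma discretized into shells of constant emission rate. This is a schematic stand-in, not the authors' exact analytical formulation; the shell grid `r`, data vector `d`, and weight `lam` are placeholders.

```python
import numpy as np

def abel_matrix(r):
    """Path-length matrix A so that (A @ g)[i] is the line-of-sight integral
    of a shell-wise constant emission rate g at impact radius r[i]."""
    n = len(r) - 1                       # number of spherical shells
    A = np.zeros((n, n))
    for i in range(n):                   # impact parameter p = r[i]
        p2 = r[i] ** 2
        for j in range(i, n):            # only shells with outer radius > p
            lo = max(r[j] ** 2 - p2, 0.0)
            hi = max(r[j + 1] ** 2 - p2, 0.0)
            A[i, j] = 2.0 * (np.sqrt(hi) - np.sqrt(lo))
    return A

def tikhonov_invert(A, d, lam):
    """Solve min ||A g - d||^2 + lam^2 ||D g||^2, with D a first-difference
    smoothness operator, by stacking the regularizer under the data."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)
    K = np.vstack([A, lam * D])
    rhs = np.concatenate([d, np.zeros(n - 1)])
    g, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return g
```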
A generalized Abel inversion method for gamma-ray imaging of thermonuclear plasmas
NASA Astrophysics Data System (ADS)
Nocente, M.; Pavone, A.; Tardocchi, M.; Goloborod'ko, V.; Schoepf, K.; Yavorskij, V.
2016-03-01
A method to determine the gamma-ray emissivity profile from measurements along a few multiple collimated lines of sight in thermonuclear plasmas is presented. The algorithm is based on a generalisation of the well-known Abel inversion and takes into account the non-circular shape of the plasma flux surfaces and the limited number of data points available. The method is applied to synthetic experimental measurements originating from parabolic and non-parabolic JET gamma-ray emissivity profiles, where the aim is to compare the results of the inversion with the original, known input parameters. We find that profile parameters, such as the peak value, width and centre of the emissivity, are determined with an accuracy of 1-20% for parabolic and 2-25% for non-parabolic profiles, which compares to an error at the 10% level for the input data. The results presented in this paper are primarily of relevance for the reconstruction of emissivity profiles from radiation measurements in tokamaks, but the method can also be applied to measurements along a sparse set of collimated lines of sight in general applications, provided that the surfaces of constant emissivity are known to have rotational symmetry.
Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3
NASA Astrophysics Data System (ADS)
Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.
2007-05-01
In this paper the first results of ionospheric tomographic inversion are presented, using the improved Abel transform on the COSMIC/FORMOSAT-3 constellation of 6 LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this particular application, the classical approach of the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is a constant value for the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in some problematic regions of the ionosphere, such as the Equatorial region) can significantly affect the electron density profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of this technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function which carries all the height dependency, while the VTEC data keeps the horizontal dependency. Indeed, it is more realistic to assume that this shape function depends only on height and to use VTEC information to account for the horizontal variation, rather than considering spherical symmetry of the electron density function as in the classical approach of the Abel inversion. Since the above-mentioned improved Abel inversion technique has already been tested and proven to be a useful
A new asymmetric Abel-inversion method for plasma interferometry in tokamaks
Park, H.K.
1989-02-01
In order to get precise local electron density information from chordal interferometric measurements of a tokamak plasma, a self-consistent and reliable inversion method is necessary. In this paper, a new asymmetric Abel-inversion method is introduced. This method includes flexible boundary conditions, application to a non-circular geometry, and estimation of the plasma in the scrape-off layer. The advantages of this method are demonstrated by comparison with other methods. This new inversion method is applied to a parametric study which includes dependence on the Shafranov shift and elongation of the profile. The inverted results are integrated along different views and compared with other density measurements. This new method can also be applied to plasma spectroscopy.
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-08-01
We propose an efficient and flexible method for solving Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization on itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
A semisimultaneous inversion algorithm for SAGE III
NASA Astrophysics Data System (ADS)
Ward, Dale M.
2002-12-01
The Stratospheric Aerosol and Gas Experiment (SAGE) III instrument was successfully launched into orbit on 10 December 2001. The planned operational species separation inversion algorithm will utilize a stepwise retrieval strategy. This paper presents an alternative, semisimultaneous species separation inversion that simultaneously retrieves all species over user-specified vertical intervals or blocks. By overlapping these vertical blocks, retrieved species profiles over the entire vertical range of the measurements are obtained. The semisimultaneous retrieval approach provides a more straightforward method for evaluating the error coupling that occurs among the retrieved profiles due to various types of input uncertainty. Simulation results are presented to show how the semisimultaneous inversion can enhance understanding of the SAGE III retrieval process. In the future, the semisimultaneous inversion algorithm will be used to help evaluate the results and performance of the operational inversion. Compared to SAGE II, SAGE III will provide expanded and more precise spectral measurements. This alone is shown to significantly reduce the uncertainties in the retrieved ozone, nitrogen dioxide, and aerosol extinction profiles for SAGE III. Additionally, the well-documented concern that SAGE II retrievals are biased by the level of volcanic aerosol is greatly alleviated for SAGE III.
Inversion for seismic anisotropy using genetic algorithms
Horne, S.; MacBeth, C. (Univ. of Edinburgh, Dept. of Geology and Geophysics)
1994-11-01
A general inversion scheme based on a genetic algorithm is developed to invert seismic observations for anisotropic parameters. The technique is applied to the inversion of shear-wave observations from two azimuthal VSP data sets from the Conoco test site in Oklahoma. Horizontal polarizations and time-delays are inverted for hexagonal and orthorhombic symmetries. The model solutions are consistent with previous studies using trial and error matching of full waveform synthetics. The shear-wave splitting observations suggest the presence of a shear-wave line singularity and are consistent with a dipping fracture system which is known to exist at the test site. Application of the inversion scheme prior to full waveform modeling demonstrates that a considerable saving in time is possible while retaining the same degree of accuracy.
NASA Astrophysics Data System (ADS)
Huestis, D. L.
Forward integration calculation of air mass, refraction, and time delay requires care even for very smooth model atmospheres. The literature abounds in examples of injudicious approximations, assumptions, transformations, variable substitutions, and failures to verify that the formulas work with unlimited accuracy for simple cases and also survive challenges from mathematically pathological but physically realizable cases. A few years ago we addressed the problem of evaluation of the Chapman function for attenuation along a straight line path in an exponential atmosphere. In this presentation we will describe issues and approaches for integration over light paths curved by refraction. The inverse problem, determining the altitude profile of mass density (index of refraction) or the concentration of an individual chemical species (absorption), from occultation data, also has its mathematically interesting (i.e., difficult) aspects. Now we automatically have noise and thus statistical analysis is just as important as calculus and numerical analysis. Here we will describe a new approach of least-squares fitting occultation data to an expansion over compact basis functions. This approach, which avoids numerical differentiation and singular integrals, was originally developed to analyze laboratory imaging data.
A Parallel Processing Algorithm for Gravity Inversion
NASA Astrophysics Data System (ADS)
Frasheri, Neki; Bushati, Salvatore; Frasheri, Alfred
2013-04-01
The paper presents results of using MPI parallel processing for the 3D inversion of gravity anomalies. The work is done under the FP7 project HP-SEE (http://www.hp-see.eu/). The inversion of geophysical anomalies remains a challenge, and the use of parallel processing can be a tool to achieve better results, "compensating" the complexity of the ill-posed inversion problem with an increased volume of calculations. We considered gravity as the simplest case of physical fields and experimented with an algorithm based on the methodology known as CLEAN, developed by Högbom in 1974. The 3D geosection was discretized into finite cuboid elements and represented by a 3D array of nodes, while the ground surface where the anomaly is observed was represented as a 2D array of points. Starting from a geosection with zero mass density in all nodes, the algorithm iteratively selects the 3D node that offers the anomaly shape best approximating the observed anomaly, minimizing the least-squares error; the mass density in the selected node is modified by a prefixed density step and the related effect is subtracted from the observed anomaly; the process continues until some stopping criterion is fulfilled. The theoretical complexity of the algorithm was evaluated on the basis of iterations and run-time for a geosection discretized at different scales. We considered the average number N of nodes along one edge of the 3D array: the number of iterations was evaluated as O(N^3), and the run-time as O(N^8). We used several different methods for identifying the 3D node whose effect offers the best least-squares approximation of the observed anomaly: unweighted least-squares error over the whole 2D array of anomalous points; least-squares error weighted by the inverse of the observed anomaly over each 3D node; and limiting the area of 2D anomalous points over which least squares are calculated for shallow 3D nodes. By comparing results from the inversion of single body and two
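A compact sketch of the Högbom-CLEAN-style greedy loop described in the abstract, under stated assumptions: each cuboid cell is approximated by a point mass at its centre, the density step is fixed, and iteration stops when no cell response correlates positively with the residual. The MPI parallelization and the paper's alternative node-selection criteria are not reproduced.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def unit_responses(cells, stations, volume):
    """Vertical gravity at each station per unit density in each cuboid,
    using a point-mass approximation at the cell centre (z positive down)."""
    R = np.zeros((len(stations), len(cells)))
    for j, c in enumerate(cells):
        d = c - stations                  # station-to-cell offsets, (n_sta, 3)
        r = np.linalg.norm(d, axis=1)
        R[:, j] = G * volume * d[:, 2] / r ** 3
    return R

def clean_invert(R, anomaly, dstep, n_iter=5000, tol=1e-12):
    """Greedy CLEAN-style loop: R[:, j] is the anomaly of unit density in
    cell j; densities grow in fixed steps until the residual stalls."""
    density = np.zeros(R.shape[1])
    resid = anomaly.copy()
    norms = np.sqrt(np.sum(R ** 2, axis=0))
    for _ in range(n_iter):
        corr = (R.T @ resid) / norms      # normalized match to the residual
        j = np.argmax(corr)
        if corr[j] <= 0 or resid @ resid < tol:
            break                         # no cell further reduces the misfit
        density[j] += dstep
        resid -= dstep * R[:, j]          # subtract the effect just added
    return density, resid
```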
Magnetotelluric inversion via reverse time migration algorithm of seismic data
Ha, Taeyoung (E-mail: tyha@math.snu.ac.kr); Shin, Changsoo (E-mail: css@model.snu.ac.kr)
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.
1989-01-01
The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.
Abel reconstruction of piecewise constant radial density profiles from x-ray radiographs.
Deutsch, M; Notea, A; Pal, D
1989-08-01
We present a method for reconstructing the radial density profile of a cylindrically symmetric object from a single x-ray projection, when the profile consists of a number of different constant sections. A forward Abel transform based algorithm is employed whereby the profile is recovered recursively, onion-peeling-like, starting from the outside diameter of the object and moving in. Distortions originating in the Gibbs phenomenon, unavoidable in most available Abel inversion methods, are completely eliminated. The method is simple enough to be carried out on a handheld calculator or a spreadsheet program on a personal computer, and no elaborate computer fits or application programming are required. The method is demonstrated by inverting a simulated three-section noisy set of data and is shown to yield results of a quality equal to that of a recent powerful Abel inversion method based on full nonlinear least-squares computer fits. PMID:20555668
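For a piecewise-constant profile, the onion-peeling recursion described above amounts to back substitution on an upper-triangular chord-length matrix. A minimal sketch, assuming annulus boundaries b[0] < ... < b[n] and one projection sample per annulus taken at lateral offset y_i = b[i]; this mirrors the outside-in recovery but is not the authors' exact recursion.

```python
import numpy as np

def onion_peel(b, d):
    """Recover annulus densities rho[j] from projections d[i] measured at
    offsets y_i = b[i], peeling from the outermost annulus inward."""
    n = len(b) - 1
    L = np.zeros((n, n))                  # chord of ray i through annulus j
    for i in range(n):
        y2 = b[i] ** 2
        for j in range(i, n):
            L[i, j] = 2.0 * (np.sqrt(max(b[j + 1] ** 2 - y2, 0.0))
                             - np.sqrt(max(b[j] ** 2 - y2, 0.0)))
    rho = np.zeros(n)
    for j in range(n - 1, -1, -1):        # outermost annulus first
        s = L[j, j + 1:] @ rho[j + 1:]    # effect of already-peeled shells
        rho[j] = (d[j] - s) / L[j, j]
    return rho
```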
Rayleigh wave nonlinear inversion based on the Firefly algorithm
NASA Astrophysics Data System (ADS)
Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou
2014-06-01
Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, is robust and highly effective, and allows global searching. The algorithm is feasible and advantageous for Rayleigh wave inversion with both synthetic models and field data. The results show that the Firefly algorithm, which is a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
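For orientation, a minimal firefly-algorithm sketch using Yang's attractiveness update beta0*exp(-gamma*r^2) plus a damped random walk, applied to a generic misfit function. The forward model, bounds, and parameter values are illustrative placeholders, not the authors' settings.

```python
import numpy as np

def firefly_minimize(misfit, lo, hi, n=25, iters=200,
                     beta0=1.0, gamma=1.0, alpha=0.1, seed=0):
    """Minimize `misfit` over box [lo, hi]; brightness = lower misfit."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n, dim))    # firefly positions (models)
    f = np.array([misfit(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:               # j is brighter, so i moves to j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) \
                            + alpha * rng.uniform(-0.5, 0.5, dim) * (hi - lo)
                    X[i] = np.clip(X[i], lo, hi)
                    f[i] = misfit(X[i])
        alpha *= 0.98                         # gradually damp the random walk
    best = np.argmin(f)
    return X[best], f[best]
```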
A vector inverse algorithm for electromagnetic scattering
NASA Astrophysics Data System (ADS)
Borden, B.
1984-06-01
Investigated is an inverse electromagnetic scattering technique that uses the polarization characteristics of the scattered wave to form an image of the convex portions of the scattering body. The depolarization of an electromagnetic signal by a scattering surface is related to the local principal curvatures through the measurable leading edge of the impulse response. A classic problem in differential geometry (Christoffel-Hurwitz) deals with the reconstruction of such a surface from a knowledge of this kind of information, and a differential equation relating these local measurements to the surface has long been established. A Fortran code employing a 'finite-element' solution to this equation has been constructed and tested on synthetic data.
New RADIOM algorithm using inverse EOS
NASA Astrophysics Data System (ADS)
Busquet, Michel; Sokolov, Igor; Klapisch, Marcel
2012-10-01
The RADIOM model [1,2] allows one to implement non-LTE atomic physics at very low extra CPU cost. Although originally heuristic, RADIOM has been physically justified [3], and some accounting for auto-ionization has been included [2]. RADIOM defines an ionization temperature Tz derived from the electronic density and the actual electronic temperature Te. LTE databases are then queried for properties at Tz, and NLTE values are derived from them. Some hydro-codes (like FAST at NRL, Ramis' MULTI, or the CRASH code at U.Mich) use inverse EOS, starting from the total internal energy Etot and returning the temperature. In the NLTE case, inverse EOS requires solving implicit relations between Te, Tz,
Improved Inversion Algorithms for Near Surface Characterization
NASA Astrophysics Data System (ADS)
Astaneh, Ali Vaziri; Guddati, Murthy N.
2016-05-01
Near-surface geophysical imaging is often performed by generating surface waves, and estimating the subsurface properties through inversion, i.e. iteratively matching experimentally observed dispersion curves with predicted curves from a layered half-space model of the subsurface. Key to the effectiveness of inversion is the efficiency and accuracy of computing the dispersion curves and their derivatives. This paper presents improved methodologies for both dispersion curve and derivative computation. First, it is shown that the dispersion curves can be computed more efficiently by combining an unconventional complex-length finite element method (CFEM) to model the finite depth layers, with perfectly matched discrete layers (PMDL) to model the unbounded half-space. Second, based on analytical derivatives for theoretical dispersion curves, an approximate derivative is derived for so-called effective dispersion curve for realistic geophysical surface response data. The new derivative computation has a smoothing effect on the computation of derivatives, in comparison with traditional finite difference (FD) approach, and results in faster convergence. In addition, while the computational cost of FD differentiation is proportional to the number of model parameters, the new differentiation formula has a computational cost that is almost independent of the number of model parameters. At the end, as confirmed by synthetic and real-life imaging examples, the combination of CFEM+PMDL for dispersion calculation and the new differentiation formula results in more accurate estimates of the subsurface characteristics than the traditional methods, at a small fraction of computational effort.
Optimisation in radiotherapy. II: Programmed and inversion optimisation algorithms.
Ebert, M
1997-12-01
This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy for searching for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered: those associated with mathematical programming, which employ specific search techniques (linear programming-type searches or artificial intelligence), and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. PMID:9503694
An adaptive inverse kinematics algorithm for robot manipulators
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.; Seraji, H.
1990-01-01
An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
NASA Astrophysics Data System (ADS)
Martínez, M. D.; Lana, X.
1991-03-01
The total inversion algorithm and some elements of Mathematical Information Theory are used in the treatment of travel-time data belonging to a seismic refraction experiment from the southern segment (Sardinia Channel) of the European Geotraverse Project. The inversion algorithm allows us to improve a preliminary propagation model obtained by means of the usual trial-and-error procedure, and to quantify the degree of resolution of the parameters defining the crust and upper mantle of such a model. Concepts related to Mathematical Information Theory identify the seismic profiles of the refraction experiment which give the most homogeneous coverage of the model in terms of the number of trajectories crossing it. Finally, the efficiency of the inversion procedure is quantified and the uncertainties regarding knowledge of different parts of the model are also evaluated.
An overview of LOA SAGE III inversion algorithm
NASA Astrophysics Data System (ADS)
Bazureau, A.; Brogniez, C.; Lenoble, J.
2000-08-01
We present here the inversion algorithm implemented by the Laboratoire d'Optique Atmosphérique, University of Lille, France, for the analysis of solar and lunar occultation data from the Stratospheric Aerosol and Gas Experiment III (SAGE III). The first SAGE III instrument is planned for launch in late fall 2000 on the polar-orbiting spacecraft METEOR 3M. We first present the forward model algorithm for calculating atmospheric transmittances in four of the SAGE III channels: the solar ones, around 440 nm and 600 nm, and the lunar ones, around 413 nm and 660 nm. Then the inversion algorithm is introduced, accomplished in two sequential steps. The first is the spatial inversion of the simulated slant optical thickness profile, leading to the extinction coefficient profile. The second is the spectral inversion of the extinction coefficient at each altitude to separate gas and aerosol contributions. Lastly, an error analysis is conducted by a Monte Carlo technique and discussed: the retrieved gas densities and aerosol extinction profiles compare favourably with the corresponding input profiles.
A fast algorithm for sparse matrix computations related to inversion
NASA Astrophysics Data System (ADS)
Li, S.; Wu, W.; Darve, E.
2013-06-01
We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors
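For contrast with the fast algorithm, here is the naive baseline for the same task: extracting selected entries of A^{-1} by sparse-LU solves against unit vectors, one solve per requested column. Nested-dissection methods such as FIND and the algorithm above avoid exactly this column-by-column cost; the sketch only fixes the problem statement.

```python
import numpy as np
import scipy.sparse.linalg as spla

def selected_inverse_entries(A, entries):
    """Return {(i, j): (A^{-1})_{ij}} for the requested index pairs.

    A is a scipy sparse matrix; each distinct column j costs one LU solve.
    """
    lu = spla.splu(A.tocsc())
    cols = {}
    out = {}
    for i, j in entries:
        if j not in cols:
            e = np.zeros(A.shape[0])
            e[j] = 1.0
            cols[j] = lu.solve(e)       # j-th column of the inverse
        out[(i, j)] = cols[j][i]
    return out
```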
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
NASA Astrophysics Data System (ADS)
Růžek, B.; Kolář, P.
2009-04-01
The solution of inverse problems is an important task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computer facilities are also making great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling is still topical. ANNIT contributes to this stream, since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p; the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made by using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has built in an archive of already evaluated models. Archive models are re-used in a suitable way and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good
Non-thermal Hard X-Ray Emission from Coma and Several Abell Clusters
Correa, C
2004-02-05
We report results of hard X-ray observations of the clusters Coma, Abell 496, Abell 754, Abell 1060, Abell 1367, Abell 2256 and Abell 3558 using RXTE data from the NASA HEASARC public archive. Specifically, we searched for clusters with hard X-ray emission that can be fitted by a power law, because this would indicate that the cluster is a source of non-thermal emission. We assume the emission mechanism proposed by Vahé Petrosian, in which the intracluster space contains clouds of relativistic electrons that create a magnetic field and emit radio synchrotron radiation. These relativistic electrons inverse-Compton scatter microwave background photons up to hard X-ray energies. The clusters found to be sources of non-thermal hard X-rays are Coma, Abell 496, Abell 754 and Abell 1060.
Efficient algorithms for linear dynamic inverse problems with known motion
NASA Astrophysics Data System (ADS)
Hahn, B. N.
2014-03-01
An inverse problem is called dynamic if the object changes during the data acquisition process. This occurs, e.g., in medical applications when fast-moving organs like the lungs or the heart are imaged. Most regularization methods are based on the assumption that the object is static during the measuring procedure. Hence, their application in the dynamic case often leads to serious motion artefacts in the reconstruction. Therefore, an algorithm has to take into account the temporal changes of the investigated object. In this paper, a reconstruction method that compensates for the motion of the object is derived for dynamic linear inverse problems. The algorithm is validated on numerical examples from computerized tomography.
Eddy-current NDE inverse problem with sparse grid algorithm
NASA Astrophysics Data System (ADS)
Zhou, Liming; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Bernacchi, William; Aldrin, John C.; Forsyth, David; Lindgren, Eric
2016-02-01
In model-based inverse problems, the unknown parameters (such as length, width, depth) need to be estimated. When the unknown parameters are few, conventional mathematical methods are suitable, but an increasing number of unknown parameters makes the computation heavy. To reduce the computational burden, the sparse grid algorithm was used in our work. As a result, we obtain a powerful interpolation method that requires significantly fewer support nodes than conventional interpolation on a full grid.
Application of multistatic inversion algorithms to landmine detection
NASA Astrophysics Data System (ADS)
Gürbüz, Ali Cafer; Counts, Tegan; Kim, Kangwook; McClellan, James H.; Scott, Waymond R., Jr.
2006-05-01
Multi-static ground-penetrating radar (GPR) uses an array of antennas to conduct a number of bistatic operations simultaneously. The multi-static GPR is used to obtain more information on the target of interest using angular diversity. An entirely computer controlled, multi-static GPR consisting of a linear array of six resistively-loaded vee dipoles (RVDs), a network analyzer, and a microwave switch matrix was developed to investigate the potential of multi-static inversion algorithms. The performance of a multi-static inversion algorithm is evaluated for targets buried in clean sand, targets buried under the ground covered by rocks, and targets held above the ground (in the air) using styrofoam supports. A synthetic-aperture, multi-static, time-domain GPR imaging algorithm is extended from conventional mono-static back-projection techniques and used to process the data. Good results are obtained for the clean surface and air targets; however, for targets buried under rocks, only the deeply buried targets could be accurately detected and located.
Development of an Inverse Algorithm for Resonance Inspection
Lai, Canhai; Xu, Wei; Sun, Xin
2012-10-01
Resonance inspection (RI), which employs the shift in natural frequency spectra between good and anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages, such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry compared to other contemporary NDE methods. It has already been widely used in the automobile industry for quality inspection of safety-critical parts. Unlike some conventionally used NDE methods, the current RI technology is unable to provide details, i.e. the location, dimensions, or type, of the flaws in discrepant parts. This limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone shaped stainless steel samples with and without controlled flaws are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the algorithm developed, and that the prediction accuracy decreases with increasing flaw number and decreasing distance between flaws.
Aerosol Models for the CALIPSO Lidar Inversion Algorithms
NASA Technical Reports Server (NTRS)
Omar, Ali H.; Winker, David M.; Won, Jae-Gwang
2003-01-01
We use measurements and models to develop aerosol models for use in the inversion algorithms for the Cloud Aerosol Lidar and Imager Pathfinder Spaceborne Observations (CALIPSO). Radiance measurements and inversions of the AErosol RObotic NETwork (AERONET) [1,2] are used to group global atmospheric aerosols using optical and microphysical parameters. This study uses more than 10^5 records of radiance measurements, aerosol size distributions, and complex refractive indices to generate the optical properties of the aerosol at more than 200 sites worldwide. These properties, together with the radiance measurements, are then classified using classical clustering methods to group the sites according to the type of aerosol with the greatest frequency of occurrence at each site. Six significant clusters are identified: desert dust, biomass burning, urban industrial pollution, rural background, marine, and dirty pollution. Three of these are used in the CALIPSO aerosol models to characterize desert dust, biomass burning, and polluted continental aerosols. The CALIPSO aerosol model also uses the coarse mode of desert dust and the fine mode of biomass burning to build a polluted dust model. For marine aerosol, the CALIPSO aerosol model uses measurements from the SEAS experiment [3]. In addition to categorizing the aerosol types, the cluster analysis provides all the column optical and microphysical properties for each cluster.
New inverse synthetic aperture radar algorithm for translational motion compensation
NASA Astrophysics Data System (ADS)
Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.
1991-10-01
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
Modelling and genetic algorithm based optimisation of inverse supply chain
NASA Astrophysics Data System (ADS)
Bányai, T.
2009-04-01
(Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding the unnecessary environmental risk and land use caused by an unjustifiably large collection supply chain for recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to waste electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects and the length of transportation routes. The objective function is the minimization of the total cost taking into consideration the constraints. A lot of research work has discussed the design of supply chains [8], but most of it concentrates on linear cost functions. In the case of this model non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A class of direct inverse decomposition algorithms for solving systems of linear equations is presented, and their behavior in the presence of round-off errors is analyzed. It is shown that, under some mild restrictions on their implementation, the direct inverse decomposition algorithms presented are equivalent in terms of the error complexity measures.
An algorithm for constrained one-step inversion of spectral CT data
NASA Astrophysics Data System (ADS)
Foygel Barber, Rina; Sidky, Emil Y.; Gilat Schmidt, Taly; Pan, Xiaochuan
2016-05-01
We develop a primal-dual algorithm that allows for one-step inversion of spectral CT transmission photon-count data to a basis map decomposition. The algorithm allows image constraints to be enforced on the basis maps during the inversion. The derivation of the algorithm makes use of a local upper-bounding quadratic approximation to generate descent steps for the non-convex spectral CT data discrepancy terms, combined with a new convex-concave optimization algorithm. Convergence of the algorithm is demonstrated on simulated spectral CT data. Simulations with noise and anthropomorphic phantoms show examples of how to employ the constrained one-step algorithm for spectral CT data.
NASA Astrophysics Data System (ADS)
Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu
2015-09-01
We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms, and we summarize rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.
ERIC Educational Resources Information Center
Jacquot, Raymond G.; And Others
1985-01-01
Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
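The Gaver-Stehfest formula itself is short enough to sketch: f(t) is approximated as (ln 2 / t) * sum_k V_k F(k ln 2 / t) with combinatorial weights V_k. N must be even, and in double precision values of N much beyond 12-16 make the large alternating coefficients amplify round-off, which is the word-length limitation the abstract refers to.

```python
from math import factorial, log

def stehfest_coeffs(N):
    """Stehfest weights V_k for even N."""
    assert N % 2 == 0
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)
                  / (factorial(half - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) at real s > 0."""
    V = stehfest_coeffs(N)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Example: F(s) = 1/(s+1) inverts to exp(-t);
# stehfest_invert(lambda s: 1/(s + 1), 1.0) is close to 0.3679.
```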
An Algorithm for Generating the Inverse S-box for the Rijndael Encryption Standard
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Gondal, Muhammad Asif
2014-12-01
The S-box transformation is a very important step in the Advanced Encryption Standard algorithm. The S-box values are generated from the multiplicative inverse over a finite field combined with an affine transform. There are many techniques in the literature for generating the multiplicative inverse values. In this paper, a software method of producing the multiplicative inverse values, which are the generators of the S-box values, is discussed. The proposed technique is based on the mathematical concept of logs and antilogs.
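A sketch of the log/antilog construction the abstract describes, assuming the AES field GF(2^8) with reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) and generator 0x03: the multiplicative inverse is antilog[255 - log[x]], with 0 mapped to 0 by the AES convention.

```python
def build_tables():
    """Log/antilog tables for GF(2^8) with generator 0x03."""
    log = [0] * 256
    antilog = [0] * 256
    x = 1
    for i in range(255):
        antilog[i] = x
        log[x] = i
        # multiply x by the generator 0x03: (x * 2, reduced) XOR x
        hi = x << 1
        if hi & 0x100:
            hi ^= 0x11B            # reduce modulo the AES polynomial
        x = hi ^ x
    return log, antilog

def gf_inverse(a, log, antilog):
    """Multiplicative inverse in GF(2^8); 0 maps to 0 by convention."""
    return 0 if a == 0 else antilog[(255 - log[a]) % 255]

# Sanity check against the well-known AES pair: inverse of 0x53 is 0xCA.
# log_t, antilog_t = build_tables(); gf_inverse(0x53, log_t, antilog_t) == 0xCA
```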
A repeatable inverse kinematics algorithm with linear invariant subspaces for mobile manipulators.
Tchoń, Krzysztof; Jakubiak, Janusz
2005-10-01
On the basis of a geometric characterization of repeatability we present a repeatable extended Jacobian inverse kinematics algorithm for mobile manipulators. The algorithm's dynamics have linear invariant subspaces in the configuration space. A standard Ritz approximation of platform controls results in a band-limited version of this algorithm. Computer simulations involving an RTR manipulator mounted on a kinematic car-type mobile platform are used in order to illustrate repeatability and performance of the algorithm. PMID:16240778
NASA Astrophysics Data System (ADS)
Zhang, B.; Qi, H.; Ren, Y. T.; Sun, S. C.; Ruan, L. M.
2014-01-01
As a heuristic intelligent optimization algorithm, the Ant Colony Optimization (ACO) algorithm is applied in the present study to the inverse problem of one-dimensional (1-D) transient radiative transfer. To illustrate the performance of this algorithm, the optical thickness and scattering albedo of a 1-D participating slab medium were retrieved simultaneously. The radiative reflectance simulated by the Monte Carlo Method (MCM) and the Finite Volume Method (FVM) were used as the measured and estimated values for the inverse analysis, respectively. To improve the accuracy and efficiency of the Basic Ant Colony Optimization (BACO) algorithm, three improved ACO algorithms were developed: the Region Ant Colony Optimization algorithm (RACO), the Stochastic Ant Colony Optimization algorithm (SACO) and the Homogeneous Ant Colony Optimization algorithm (HACO). With the HACO algorithm presented, the radiative parameters could be estimated accurately, even with noisy data. In conclusion, the HACO algorithm is demonstrated to be effective and robust, and has the potential to be applied in various fields of inverse radiation problems.
VLSI design of inverse-free Berlekamp-Massey algorithm for Reed-Solomon code
NASA Astrophysics Data System (ADS)
Truong, Trieu-Kien; Chang, Y. W.; Jeng, Jyh H.
2001-11-01
The inverse-free Berlekamp-Massey (BM) algorithm is the simplest technique for Reed-Solomon (RS) codes to correct errors. In the decoding process, the BM algorithm is used to find the error locator polynomial with the syndromes as input. The inverse-free BM algorithm is then generalized to find the error locator polynomial for a given erasure locator polynomial. By this means, the modified algorithm can be used for RS codes to correct both errors and erasures. The improvement is achieved by replacing the input of the Berlekamp-Massey algorithm with the Forney syndromes instead of the syndromes. With this improved technique, the complexity of time-domain RS decoders for correcting both errors and erasures is reduced substantially compared with previous approaches. In this paper, the register transfer language description of this modified BM algorithm is derived and the VLSI architecture is presented.
Lü, Li-hui; Liu, Wen-qing; Zhang, Tian-shu; Lu, Yi-huai; Dong, Yun-sheng; Chen, Zhen-yi; Fan, Guang-qiang; Qi, Shao-shuai
2015-07-01
Atmospheric aerosols have important impacts on human health, the environment and the climate system. Micro Pulse Lidar (MPL) is a new and effective tool for detecting the horizontal distribution of atmospheric aerosol, and extinction coefficient inversion and error analysis are important aspects of the data processing. In order to detect the horizontal distribution of atmospheric aerosol near the ground, the slope and Fernald algorithms were both used to invert horizontal MPL data, and the results were compared. The error analysis showed that the errors of the slope and Fernald algorithms come mainly from the theoretical model and from its assumptions, respectively. Although some problems still exist in these two horizontal extinction coefficient inversions, they can represent the spatial and temporal distribution of aerosol particles accurately, and the correlations with the forward-scattering visibility sensor are both high, with values of 95%. Furthermore, relatively speaking, the Fernald algorithm is more suitable for the inversion of the horizontal extinction coefficient. PMID:26717723
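Of the two methods compared, the slope algorithm is simple enough to state in a few lines: under the horizontal-homogeneity assumption, the range-corrected log signal ln(P(r) r^2) is linear in range and the mean extinction coefficient is minus half its slope. A sketch, with the raw return `P` and range gates `r` as placeholder arrays:

```python
import numpy as np

def slope_extinction(r, P):
    """Mean extinction coefficient (1/m) along a horizontally homogeneous
    path, from a linear fit of the range-corrected log signal."""
    S = np.log(P * r ** 2)            # ln(P r^2) ~ const - 2*alpha*r
    slope, _ = np.polyfit(r, S, 1)
    return -0.5 * slope
```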
Riemannian mean and space-time adaptive processing using projection and inversion algorithms
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Barbaresco, Frédéric
2013-05-01
The estimation of the covariance matrix from real data is required in the application of space-time adaptive processing (STAP) to an airborne ground moving target indication (GMTI) radar. A natural approach to estimation of the covariance matrix that is based on the information geometry has been proposed. In this paper, the output of the Riemannian mean is used in inversion and projection algorithms. It is found that the projection class of algorithms can yield very significant gains, even when the gains due to inversion-based algorithms are marginal over standard algorithms. The performance of the projection class of algorithms does not appear to be overly sensitive to the projected subspace dimension.
Technology Transfer Automated Retrieval System (TEKTRAN)
Determination of the optical properties of intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of the inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving inverse modeling problems can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
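For orientation, a textbook Levenberg-Marquardt damping loop is sketched below; the paper's actual contribution, projecting the linear subproblem onto a Krylov subspace and recycling that subspace across damping parameters, is not reproduced here. `forward` and `jacobian` are placeholder user-supplied functions.

```python
import numpy as np

def lm_solve(J, r, lam):
    """One damped normal-equations solve: (J^T J + lam I) dp = J^T r."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ r)

def levenberg_marquardt(forward, jacobian, p, data, lam=1e-2, iters=50):
    """Accept steps that reduce the residual norm, shrinking the damping
    on success and growing it on failure."""
    for _ in range(iters):
        r = data - forward(p)
        dp = lm_solve(jacobian(p), r, lam)
        if np.linalg.norm(data - forward(p + dp)) < np.linalg.norm(r):
            p, lam = p + dp, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                   # reject step, increase damping
    return p
```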
Study on 2D random medium inversion algorithm based on Fuzzy C-means Clustering theory
NASA Astrophysics Data System (ADS)
Xu, Z.; Zhu, P.; Gu, Y.; Yang, X.; Jiang, J.
2015-12-01
In seismic exploration for metal deposits, traditional seismic inversion methods based on layered homogeneous medium theory have difficulty recovering the small-scale inhomogeneity and spatial variation of the actual medium, whose physical properties are more likely randomly distributed than layered. Thus, it is necessary to investigate a random medium inversion algorithm. The velocity of a 2D random medium can be described as a function of five parameters: the background velocity (V0), the standard deviation of velocity (σ), the horizontal and vertical autocorrelation lengths (A and B), and the autocorrelation angle (θ). In this study, we propose an inversion algorithm for random media based on Fuzzy C-means Clustering (FCM) theory, whose basic idea is that FCM is used to steer the inversion in the desired direction by clustering the estimated parameters into groups. Our method can be divided into three steps: first, the three parameters (A, B, θ) are estimated from 2D post-stack seismic data using the non-stationary random medium parameter estimation method, and the estimated parameters are clustered into groups by FCM; second, the initial random medium model is constructed from the clustered groups and the remaining two parameters (V0 and σ) obtained from well logging data; finally, inversion of the random medium is conducted to obtain velocity, impedance and random medium parameters using the conjugate gradient method. Inversion experiments on synthetic seismic data show that the velocity models inverted by our algorithm are close to the real velocity distribution and that the boundaries of different media can be distinguished clearly. Key words: random medium, inversion, FCM, parameter estimation
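For reference, a plain fuzzy C-means iteration (alternating membership and centroid updates) looks like the following; the paper uses such clustering to group the estimated random-medium parameters, and this generic sketch is not its specific implementation.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-6, seed=0):
    """Fuzzy C-means on rows of X: returns c centroids and the fuzzy
    membership matrix U (rows sum to 1); m > 1 controls the fuzziness."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        Unew = d ** (-2 / (m - 1))
        Unew /= Unew.sum(axis=1, keepdims=True)    # normalized memberships
        if np.linalg.norm(Unew - U) < tol:
            return centers, Unew
        U = Unew
    return centers, U
```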
An implementation of differential search algorithm (DSA) for inversion of surface wave data
NASA Astrophysics Data System (ADS)
Song, Xianhai; Li, Lei; Zhang, Xueqiang; Shi, Xinchun; Huang, Jianquan; Cai, Jianchao; Jin, Si; Ding, Jianping
2014-12-01
Surface wave dispersion analysis is widely used in geophysics to infer near-surface shear (S)-wave velocity profiles for a wide variety of applications. However, inversion of surface wave data is challenging for most local-search methods due to its high nonlinearity and multimodality. In this work, we proposed and implemented a new Rayleigh wave dispersion curve inversion scheme based on the differential search algorithm (DSA), one of the recently developed swarm intelligence-based algorithms. DSA is inspired by the seasonal migration behavior of living species and is suited to solving highly nonlinear, multivariable, and multimodal optimization problems. The proposed inverse procedure is applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and stability of DSA, four noise-free and four noisy synthetic data sets are first inverted. Then, the performance of DSA is compared with that of genetic algorithms (GA) on two noise-free synthetic data sets. Finally, a real-world example from a waste disposal site in NE Italy is inverted to examine the applicability and robustness of the proposed approach on surface wave data, and the performance of DSA is again compared against that of GA to further evaluate the inverse procedure described here. Simulation results from both synthetic and actual field data demonstrate that DSA applied to nonlinear inversion of surface wave data performs well in terms of both accuracy and convergence speed. The great advantages of DSA are that the algorithm is simple, robust and easy to implement, and that there are few control parameters to tune.
NASA Astrophysics Data System (ADS)
Venkata Rao, R.; Patel, Vivek
2012-08-01
This study explores the use of teaching-learning-based optimization (TLBO) and artificial bee colony (ABC) algorithms for determining the optimum operating conditions of combined Brayton and inverse Brayton cycles. Maximization of thermal efficiency and specific work of the system are considered as the objective functions and are treated simultaneously for multi-objective optimization. The upper cycle pressure ratio and the bottom cycle expansion pressure of the system are considered as design variables. An application example is presented to demonstrate the effectiveness and accuracy of the proposed algorithms. The optimization results are validated by comparing with those obtained using the genetic algorithm (GA) and particle swarm optimization (PSO) on the same example, and the proposed algorithms yield improved results. The effects of variations in the algorithm parameters on the convergence and fitness values of the objective functions are also reported.
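The teacher phase that gives TLBO its name can be sketched in a few lines; the greedy accept-if-better step and the learner phase are omitted, and the population handling is a generic assumption rather than the authors' exact setup.

```python
import numpy as np

def tlbo_teacher_phase(pop, fitness, rng):
    """Teacher phase of TLBO: each learner moves toward the best solution
    (the teacher) and away from the scaled class mean; the teaching
    factor TF is drawn from {1, 2}. Minimization is assumed."""
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    TF = rng.integers(1, 3)                    # teaching factor: 1 or 2
    r = rng.random(pop.shape)                  # fresh random step sizes
    return pop + r * (teacher - TF * mean)

# Usage: new_pop = tlbo_teacher_phase(pop, fit, np.random.default_rng(0));
# candidates then replace members only if they improve the objective.
```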
SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2014-01-01
Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. The effect of benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al. [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).
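Since SWIM builds on the semi-analytical shallow-water reflectance model of Lee et al., the forward model being inverted has roughly the following shape: a water-column term that saturates with depth plus a bottom term that decays with depth. The coefficients follow the published Lee et al. (1998, 1999) parameterization; treat this as a sketch, not the exact SWIM implementation.

```python
import numpy as np

def rrs_shallow(a, bb, H, rho_b, theta_w=0.0, theta_v=0.0):
    """Subsurface remote-sensing reflectance over a bottom of albedo rho_b
    at depth H, for absorption a and backscattering bb (Lee et al. form)."""
    u = bb / (a + bb)
    kappa = a + bb
    rrs_dp = (0.084 + 0.17 * u) * u            # optically deep reflectance
    Du_c = 1.03 * np.sqrt(1 + 2.4 * u)         # column path elongation
    Du_b = 1.04 * np.sqrt(1 + 5.4 * u)         # bottom path elongation
    mu_w, mu_v = np.cos(theta_w), np.cos(theta_v)
    column = rrs_dp * (1 - np.exp(-(1 / mu_w + Du_c / mu_v) * kappa * H))
    bottom = (rho_b / np.pi) * np.exp(-(1 / mu_w + Du_b / mu_v) * kappa * H)
    return column + bottom
```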
Vardhan, J. Vishnu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.
2007-03-21
The material symmetries and principal plane orientations of anisotropic plates, whose planes of symmetry are not known a priori, were determined using a genetic algorithm (GA) based blind inversion method. Ultrasonic phase velocity profiles were used as input data to the inversion. A general anisotropy was assumed at the start of each blind inversion. The multi-parameter solution space of the genetic algorithm was exploited to identify the 'statistically significant' solution sets of elastic moduli in the geometric coordinate system of the plate, by thresholding the coefficients of variation (Cv). Using these 'statistically significant' elastic moduli, the unknown material symmetry and the principal planes (angles between the geometrical coordinates and the material symmetry coordinates) were evaluated using the method proposed by Cowin and Mehrabadi. This procedure was verified using simulated ultrasonic velocity data sets on a material with orthotropic symmetry. Experimental validation was also performed on a unidirectional Graphite Epoxy [0]7s fiber reinforced composite plate.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-08-19
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Tang, Min
2007-03-01
Computing epicardial potentials from body surface potentials constitutes one form of ill-posed inverse problem of electrocardiography (ECG). To solve this ECG inverse problem, the Tikhonov regularization and truncated singular-value decomposition (TSVD) methods have been commonly used to overcome the ill-posed property by imposing constraints on the magnitudes or derivatives of the computed epicardial potentials. Such direct regularization methods, however, are impractical when the transfer matrix is large. The least-squares QR (LSQR) method, one of the iterative regularization methods based on Lanczos bidiagonalization and QR factorization, has been shown to be numerically more reliable in various circumstances than the other methods considered. This LSQR method, however, to our knowledge, has not been introduced and investigated for the ECG inverse problem. In this paper, the regularization properties of the Krylov subspace iterative method of LSQR for solving the ECG inverse problem were investigated. Due to the 'semi-convergence' property of the LSQR method, the L-curve method was used to determine the stopping iteration number. The performance of the LSQR method for solving the ECG inverse problem was also evaluated based on a realistic heart-torso model simulation protocol. The results show that the inverse solutions recovered by the LSQR method were more accurate than those recovered by the Tikhonov and TSVD methods. In addition, by combining the LSQR with genetic algorithms (GA), the performance can be improved further. This suggests that their combination may provide a good scheme for solving the ECG inverse problem.
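A brute-force illustration of LSQR with L-curve stopping, using SciPy's `lsqr`: rerun with increasing iteration limits, trace the (residual norm, solution norm) curve, and stop at its corner. A production code would track the norms inside a single LSQR run; this sketch trades efficiency for clarity.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lcurve_lsqr(A, b, max_it=200):
    """L-curve stopping rule for LSQR: record (residual norm, solution
    norm) for increasing iteration limits and pick the point of maximum
    curvature of the log-log curve. A crude but self-contained sketch."""
    xs, res, sol = [], [], []
    for k in range(1, max_it + 1):
        x, istop = lsqr(A, b, iter_lim=k)[:2]
        xs.append(x)
        res.append(np.linalg.norm(A @ x - b))
        sol.append(np.linalg.norm(x))
        if istop in (1, 2):            # fully converged; later runs repeat
            break
    if len(xs) < 3:
        return xs[-1]
    lr, ls = np.log(res), np.log(sol)
    curv = np.gradient(np.gradient(ls, lr), lr)   # crude curvature proxy
    return xs[int(np.nanargmax(curv))]
```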
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As in other optimization approaches, the search efficiency of a genetic algorithm is vital for finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. In our method, the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant
Interlocked optimization and fast gradient algorithm for a seismic inverse problem
Metivier, Ludovic
2011-08-10
Highlights:
- A 2D extension of the 1D nonlinear inversion of well-seismic data is given.
- Appropriate regularization yields a well-determined large-scale inverse problem.
- An interlocked optimization loop acts as an efficient preconditioner.
- The adjoint state method is used to compute the misfit function gradient.
- A domain decomposition method yields an efficient parallel implementation.
Abstract: We give a nonlinear inverse method for seismic data recorded in a well from sources at several offsets from the borehole in a 2D acoustic framework. Given the velocity field, approximate values of the impedance are recovered. This is a 2D extension of the 1D inversion of vertical seismic profiles. The inverse problem generates a large-scale, underdetermined, ill-conditioned problem. Appropriate regularization terms render the problem well-determined. An interlocked optimization algorithm yields an efficient preconditioning. A gradient algorithm based on the adjoint state method and domain decomposition gives a fast parallel numerical method. For a realistic test case, convergence is attained in an acceptable time with 128 processors.
ERIC Educational Resources Information Center
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
A general rough-surface inversion algorithm: Theory and application to SAR data
NASA Technical Reports Server (NTRS)
Moghaddam, M.
1993-01-01
Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces and can be applied to any parameterized scattering process.
NASA Astrophysics Data System (ADS)
Song, Xianhai; Li, Lei; Zhang, Xueqiang; Huang, Jianquan; Shi, Xinchun; Jin, Si; Bai, Yiming
2014-10-01
In recent years, Rayleigh waves have gained popularity as a means to obtain near-surface shear (S)-wave velocity profiles. However, inversion of Rayleigh wave dispersion curves is challenging for most local-search methods due to its high nonlinearity and multimodality. In this study, we proposed and tested a new Rayleigh wave dispersion curve inversion scheme based on the differential evolution (DE) algorithm. DE is a novel stochastic search approach that possesses several attractive advantages: (1) it is capable of handling non-differentiable, nonlinear and multimodal objective functions because of its stochastic search strategy; (2) it is parallelizable, so computation-intensive objective functions can be handled without excessive run times because the stochastic perturbation of the population vectors can be done independently; (3) it is easy to use, with few control variables to steer the minimization/maximization through DE's self-organizing scheme; and (4) it has good convergence properties. The proposed inverse procedure was applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and stability of DE, we first inverted four noise-free and four noisy synthetic data sets. Second, we investigated the effect of the number of layers on the DE algorithm and carried out an uncertainty appraisal analysis. Third, we made a comparative analysis with genetic algorithms (GA) on a synthetic data set to further investigate the performance of the proposed inverse procedure. Finally, we inverted a real-world example from a waste disposal site in NE Italy to examine the applicability of DE to Rayleigh wave dispersion curves, and compared the performance of the proposed approach to that of GA to further evaluate the inverse procedure described here. Results from both synthetic and actual field data demonstrate that the differential evolution algorithm applied
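Using SciPy's implementation, a DE-style dispersion-curve inversion reduces to a misfit function plus per-layer bounds; the `forward` solver below is a hypothetical placeholder for a layered-model dispersion code, and the control settings are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def misfit(m, freqs, c_obs, forward):
    """RMS difference between observed and predicted phase velocities."""
    return np.sqrt(np.mean((forward(m, freqs) - c_obs) ** 2))

def invert_dispersion(freqs, c_obs, forward, bounds):
    """DE inversion: bounds holds one (vs_min, vs_max) pair per layer.
    Note the few control knobs: popsize, mutation, recombination."""
    result = differential_evolution(
        misfit, bounds, args=(freqs, c_obs, forward),
        popsize=20, mutation=(0.5, 1.0), recombination=0.7,
        tol=1e-6, seed=1, polish=False)
    return result.x, result.fun
```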
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data
NASA Astrophysics Data System (ADS)
Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.
2011-12-01
We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP cases, discretization is based on rectangular cells, where each cell has an unknown resistivity in DC modeling, resistivity and chargeability in time-domain IP modeling, and complex resistivity in spectral IP modeling. The governing partial-differential equations are solved with the finite element method, which can be applied to both the real and complex variables that are solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied, such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions and inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple Graphical User Interface. Although the code is parallelized for multi
3D Motion Planning Algorithms for Steerable Needles Using Inverse Kinematics
Duindam, Vincent; Xu, Jijie; Alterovitz, Ron; Sastry, Shankar; Goldberg, Ken
2010-01-01
Steerable needles can be used in medical applications to reach targets behind sensitive or impenetrable areas. The kinematics of a steerable needle are nonholonomic and, in 2D, equivalent to a Dubins car with constant radius of curvature. In 3D, the needle can be interpreted as an airplane with constant speed and pitch rate, zero yaw, and controllable roll angle. We present a constant-time motion planning algorithm for steerable needles based on explicit geometric inverse kinematics similar to the classic Paden-Kahan subproblems. Reachability and path competitivity are analyzed using analytic comparisons with shortest path solutions for the Dubins car (for 2D) and numerical simulations (for 3D). We also present an algorithm for local path adaptation using null-space results from redundant manipulator theory. Finally, we discuss several ways to use and extend the inverse kinematics solution to generate needle paths that avoid obstacles. PMID:21359051
NASA Technical Reports Server (NTRS)
Brown, Margaret
1993-01-01
An inversion algorithm, constructed to deduce the emissions of a source gas required to produce a specified surface concentration, is applied to the observed surface concentrations of CFC 11, methylchloroform, and methane, using a two-dimensional chemical transport model. The information utilized for this deduction process is limited to the measured atmospheric concentration of the source gas, including the associated standard deviations of these measurements. In this way the amount of objective information available in these measurements is assessed. The algorithm is shown to be capable of producing a latitudinal emissions distribution as well as the error bounds on the deduced emission distribution. The 'ill-posed' nature of this inverse problem is discussed, as well as its implications for the spatial and temporal resolution at which emissions can be resolved. Finally, a methane emission distribution is deduced which has the expected seasonal variations and is consistent with results from other, more subjective, deduction studies.
A pseudo-inverse algorithm for simultaneous measurements using multiple acoustical sources.
Xiang, Ning; Li, Shu
2007-03-01
Simultaneous multiple acoustical sources measurement (SMASM) has been proposed for more effective and reliable identification of acoustical systems under critical conditions [N. Xiang and M. R. Schroeder, J. Acoust. Soc. Am. 113, 2754-2761 (2003); N. Xiang, J. N. Daigle, and M. Kleiner, J. Acoust. Soc. Am. 117, 1889-1894 (2005)]. This paper presents a pseudo-inverse algorithm for the SMASM correlation technique as an alternative way of extracting impulse responses of acoustical channels. Simulations and room acoustics experiments are carried out and the results prove the feasibility of the proposed algorithm. PMID:17407864
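The pseudo-inverse extraction can be illustrated directly: stack the convolution (Toeplitz) matrices of the known simultaneous excitation signals and apply one pseudo-inverse to the measured mixture. A schematic reconstruction, not the authors' exact formulation.

```python
import numpy as np
from scipy.linalg import toeplitz

def smasm_pinv(excitations, y, L):
    """Recover length-L impulse responses of channels driven at the same
    time by known sources: y is the summed received signal of length
    N + L - 1, where each excitation has length N."""
    mats = []
    for s in excitations:
        col = np.concatenate([s, np.zeros(L - 1)])
        row = np.concatenate([[s[0]], np.zeros(L - 1)])
        mats.append(toeplitz(col, row))        # (N+L-1) x L convolution matrix
    S = np.hstack(mats)                        # all channels side by side
    h = np.linalg.pinv(S) @ y                  # joint least-squares estimate
    return h.reshape(len(excitations), L)
```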
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Yong; Tan, Han-Dong; Wang, Kun-Peng; Lin, Chang-Hong; Zhang, Bin; Xie, Mao-Bi
2016-03-01
Traditional two-dimensional (2D) complex resistivity forward modeling is based on Poisson's equation, but spectral induced polarization (SIP) data are the coproducts of the induced polarization (IP) and electromagnetic induction (EMI) effects. This is especially true at high frequencies, where the EMI effect can exceed the IP effect. 2D inversion that only considers the IP effect therefore reduces the reliability of the inversion results. In this paper, we derive differential equations using Maxwell's equations. With the introduction of the Cole-Cole model, we use the finite-element method to conduct 2D SIP forward modeling that considers the EMI and IP effects simultaneously. The data-space Occam method, in which different constraints on the model smoothness and parametric boundaries are introduced, is then used to simultaneously obtain the four parameters of the Cole-Cole model using multi-array electric field data. This approach not only improves the stability of the inversion but also significantly reduces the solution ambiguity. To improve the computational efficiency, message passing interface programming was used to accelerate the 2D SIP forward modeling and inversion. Synthetic datasets were tested using both serial and parallel algorithms, and the tests suggest that the proposed parallel algorithm is robust and efficient.
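For reference, the four-parameter Cole-Cole complex resistivity model that such an inversion fits in each cell is a one-liner (a standard form of the model; the paper's exact parameterization may differ slightly):

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Cole-Cole complex resistivity: rho0 is the DC resistivity, m the
    chargeability, tau the time constant, c the frequency exponent."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))
```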
Accuracy of two dipolar inverse algorithms applying reciprocity for forward calculation.
Laarne, P; Hyttinen, J; Dodel, S; Malmivuo, J; Eskola, H
2000-06-01
Two inverse algorithms were applied to solve the EEG inverse problem assuming a single dipole as the source model. To increase the efficiency of the forward computations, the lead field approach based on the reciprocity theorem was applied. This method provides a procedure to calculate the computationally heavy forward problem by a single solution for each EEG lead. A realistically shaped volume conductor model with five major tissue compartments was employed to obtain the lead fields of the standard 10-20 EEG electrode system and the scalp potentials generated by simulated dipole sources. A least-squares method and a probability-based method were compared in their performance in reproducing the dipole source based on the reciprocal forward solution. The dipole localization errors were 0 to 9 mm and 2 to 22 mm without and with added noise in the simulated data, respectively. The two inverse algorithms performed very similarly. The lead field method appeared applicable to the solution of the inverse problem and is especially useful when a number of sources, e.g., multiple EEG time instances, must be solved. PMID:10860584
Jiang, Mingfeng; Xia, Ling; Huang, Wenqing; Shou, Guofa; Liu, Feng; Crozier, Stuart
2009-10-01
Regularization is an effective method for the solution of ill-posed ECG inverse problems, such as computing epicardial potentials from body surface potentials. The aim of this work was to explore more robust regularization-based solutions through the application of subspace preconditioned LSQR (SP-LSQR) to the study of model-based ECG inverse problems. Here, we presented three different subspace splitting methods, i.e., SVD, wavelet transform and cosine transform schemes, for the design of the preconditioners for ill-posed problems, and evaluated the performance of the algorithms using a realistic heart-torso model simulation protocol. The results demonstrated that, when compared with the LSQR, LSQR-Tik and Tik-LSQR methods, the SP-LSQR produced higher efficiency and reconstructed more accurate epicardial potential distributions. Amongst the three applied subspace splitting schemes, the SVD-based preconditioner yielded the best convergence rate and outperformed the other two in seeking the inverse solutions. Moreover, when optimized by genetic algorithms (GA), the performance of the SP-LSQR method was further enhanced. The results from this investigation suggested that the SP-LSQR is a useful regularization technique for cardiac inverse problems. PMID:19564127
Estimates of the trace of the inverse of a symmetric matrix using the modified Chebyshev algorithm
NASA Astrophysics Data System (ADS)
Meurant, Gérard
2009-07-01
In this paper we study how to compute an estimate of the trace of the inverse of a symmetric matrix by using Gauss quadrature and the modified Chebyshev algorithm. As auxiliary polynomials we use the shifted Chebyshev polynomials. Since this can be too costly in computer storage for large matrices, we also propose to compute the modified moments with a stochastic approach due to Hutchinson (Commun Stat Simul 18:1059-1076, 1989).
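The stochastic ingredient, Hutchinson's estimator, is easy to sketch: average the quadratic forms z^T A^{-1} z over random sign vectors. The Gauss-quadrature/modified-Chebyshev machinery of the paper, which avoids the exact solves used below, is not reproduced here.

```python
import numpy as np

def hutchinson_trace_inv(solve, n, samples=100, seed=0):
    """Estimate tr(A^{-1}) as the average of z^T A^{-1} z over Rademacher
    vectors z; solve(z) must return A^{-1} z (Cholesky, CG, ...)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        z = rng.choice([-1.0, 1.0], size=n)    # random sign probe
        total += z @ solve(z)
    return total / samples
```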
Analysis of AERONET Inversion Algorithm's Products at ``El Arenosillo'' Station, Southwest Spain
NASA Astrophysics Data System (ADS)
Prats, N.; Cachorro, V. E.; Sorribas, M.; Toledano, C.; Berjón, A.; Rodrigo, R.; Torres, B.; de Frutos, A. M.
2009-03-01
The present work presents the main results of the analysis of the AERONET inversion algorithm's products for a sun photometer installed at the Atmospheric Sounding Station "El Arenosillo." This station belongs to INTA (Instituto Nacional de Técnica Aeroespacial) and is located in the southwest of the Iberian Peninsula (37.1 N, 6.7 W). The aim of this work is the study of the optical aerosol properties over a long data series (August 2002-December 2005) that are products of the AERONET inversion algorithm: the volume size distribution (VSD) and complex refractive index (REF), and a wide set of derived parameters: volume concentration (VolCon), asymmetry parameter (g), single scattering albedo (SSA), etc. Version 2 of the AERONET inversion algorithm is used here. A general statistical analysis is carried out, including the interannual monthly behaviour of the aerosol microphysical parameters. Aerosol volume concentration shows a good correlation with the aerosol optical depth (AOD), as does the fine mode volume fraction (Vf/Vt) with the Ångström exponent (AE). A characterization of the VSD and derived parameters is performed depending on aerosol type. Optical properties are analyzed only for cases of high AOD, because of the quality-assurance criteria for these parameters. These cases include desert dust, showing a scattering behaviour, and biomass burning aerosol, with an absorbing character.
A Niching Genetic Algorithm For Milne-Eddington Spectral Line Inversions
NASA Astrophysics Data System (ADS)
Harker, Brian; Balasubramaniam, K.; Sojka, Jan
2006-10-01
Stokes profile inversions form a basis for ``measuring'' solar magnetic fields. The High Altitude Observatory (HAO) Milne-Eddington (M-E) spectral line inversions have traditionally been used as initializations for more sophisticated inversion procedures. One such code uses a genetic-algorithm initialization to search the parameter space on a more global scale, in an effort to obtain a good starting guess for a more traditional hill-climbing (e.g. Levenberg-Marquardt) algorithm. A serious drawback of the type of genetic algorithm used is that it has been shown to perform poorly on high-dimensional spaces with multiple optima. A single-component M-E model atmosphere is typically described by about 7 free parameters, indicating a fairly high parameter space dimensionality. Two-component models increase the ability to fit frequently observed asymmetric spectral lines, at the price of nearly doubling the dimension of the parameter space. Furthermore, spectral lines for large magnetic field strengths and large inclinations are very similar to profiles for weaker field strengths and small inclinations, indicating the potential presence of multiple optima that correspond to very different physical conditions. This poster presents an initial investigation into alleviating these difficulties by incorporating a more sophisticated evolutionary strategy into the standard genetic algorithm (SGA), and parallelizing over multiple processors.
TOPICAL REVIEW: Inversion algorithms for large-scale geophysical electromagnetic measurements
NASA Astrophysics Data System (ADS)
Abubakar, A.; Habashy, T. M.; Li, M.; Liu, J.
2009-12-01
Low-frequency surface electromagnetic prospecting methods have been gaining considerable interest because of their ability to directly detect hydrocarbon reservoirs and to complement seismic measurements in geophysical exploration applications. There are two types of surface electromagnetic surveys. The first is an active measurement in which an electric dipole source towed by a ship is used over an array of seafloor receivers; this is the controlled-source electromagnetic (CSEM) method. The second is the magnetotelluric (MT) method, driven by natural sources; this passive measurement also uses an array of seafloor receivers. Both surface electromagnetic methods measure electric and magnetic field vectors. In order to extract maximal information from these CSEM and MT data we employ a nonlinear inversion approach in their interpretation. We present two types of inversion approaches. The first approach is the so-called pixel-based inversion (PBI) algorithm. In this approach the investigation domain is subdivided into pixels, and by using an optimization process the conductivity distribution inside the domain is reconstructed. The optimization process uses the Gauss-Newton minimization scheme augmented with various forms of regularization. To automate the algorithm, the regularization term is incorporated using a multiplicative cost function. This PBI approach has demonstrated its ability to retrieve reasonably good conductivity images. However, the reconstructed boundaries and conductivity values of the imaged anomalies are usually not quantitatively resolved. Nevertheless, the PBI approach can provide useful information on the location, the shape and the conductivity of the hydrocarbon reservoir. The second method is the so-called model-based inversion (MBI) algorithm, which uses a priori information on the geometry to reduce the number of unknown parameters and to improve the quality of the reconstructed conductivity image. This MBI approach can
NASA Astrophysics Data System (ADS)
McKinna, Lachlan I. W.; Fearns, Peter R. C.; Weeks, Scarla J.; Werdell, P. Jeremy; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2015-03-01
A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(488) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.
A hybrid algorithm for solving the EEG inverse problem from spatio-temporal EEG data.
Crevecoeur, Guillaume; Hallez, Hans; Van Hese, Peter; D'Asseler, Yves; Dupré, Luc; Van de Walle, Rik
2008-08-01
Epilepsy is a neurological disorder caused by intense electrical activity in the brain. The electrical activity, which can be modelled as a superposition of several electrical dipoles, can be determined in a non-invasive way by analysing the electro-encephalogram. This source localization requires the solution of an inverse problem. Locally convergent optimization algorithms may be trapped in local solutions, and when global optimization techniques are used, the computational effort can become expensive. This makes fast recovery of the electrical sources difficult. Therefore, there is a need to solve the inverse problem in an accurate and fast way. This paper performs the localization of multiple dipoles using a global-local hybrid algorithm. Global convergence is guaranteed by using space mapping techniques and independent component analysis in a computationally efficient way. Local accuracy is obtained by using the Recursively Applied and Projected MUltiple SIgnal Classification (RAP-MUSIC) algorithm. Using this hybrid algorithm, a four times faster solution is obtained. PMID:18427852
NASA Astrophysics Data System (ADS)
Liu, Lisheng; Jiang, Zhenhua; Wang, Tingfeng; Guo, Jin
2015-03-01
An angular spectrum propagation (ASP) algorithm with a scaling parameter for simulating optical diffraction propagation through optical systems is studied. An adjustable observation size is obtained by adding the scaling parameter to the Collins formula. A direct mathematical inverse transformation of the ASP algorithm (IASP) is proposed to calculate the source optical field from the known observation optical field, and the results are shown to be more precise. The IASP algorithm is applied to perform phase retrieval, deriving the aberrations of optical systems from intensity profiles measured in the observation plane. The derived aberrations are fitted with Zernike polynomials under the constraint that the wavefront aberrations are smooth. Numerical simulations are performed to test the accuracy of this method.
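The unscaled angular-spectrum kernel underlying such algorithms is a single FFT filter and is exactly invertible by negating the propagation distance; the scaled (Collins-formula) variant of the paper modifies this kernel. A sketch under the usual square-grid assumption:

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Unscaled angular-spectrum propagation of a square sampled field u0
    over distance z; negating z inverts the propagation exactly."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0.0                      # suppress evanescent waves
    return np.fft.ifft2(np.fft.fft2(u0) * kernel)
```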
Llacer, J; Solberg, T D; Promberger, C
2001-10-01
This paper presents a description of tests carried out to compare the behaviour of five algorithms in inverse radiation therapy planning: (1) the Dynamically Penalized Likelihood (DPL), an algorithm based on statistical estimation theory; (2) an accelerated version of the same algorithm; (3) a new fast adaptive simulated annealing (ASA) algorithm; (4) a conjugate gradient method; and (5) a Newton gradient method. A three-dimensional mathematical phantom and two clinical cases have been studied in detail. The phantom consisted of a U-shaped tumour with a partially enclosed 'spinal cord'. The clinical examples were a cavernous sinus meningioma and a prostate case. The algorithms have been tested in carefully selected and controlled conditions so as to ensure fairness in the assessment of results. It has been found that all five methods can yield relatively similar optimizations, except when a very demanding optimization is carried out. For the easier cases, the differences are principally in robustness, ease of use and optimization speed. In the more demanding case, there are significant differences in the resulting dose distributions. The accelerated DPL emerges as possibly the algorithm of choice for clinical practice. An appendix describes the differences in behaviour between the new ASA method and the one based on a patent by the Nomos Corporation. PMID:11686280
LOTOS code for local earthquake tomographic inversion: benchmarks for testing tomographic algorithms
NASA Astrophysics Data System (ADS)
Koulakov, I. Yu.
2009-04-01
We present the LOTOS-07 code for performing local earthquake tomographic (LET) inversion, which is freely available at www.ivan-art.com/science/LOTOS_07. The initial data for the code are the arrival times from local seismicity and the coordinates of the stations; no information about the sources is required. The calculations start from absolute location of sources and estimation of an optimal 1D velocity model. Then the sources are relocated simultaneously with the 3D velocity distribution during iterative coupled tomographic inversions. The code allows results based on node or cell parameterizations to be compared. Both Vp-Vs and Vp-Vp/Vs inversion schemes can be performed by the LOTOS code. The capability of the LOTOS code is illustrated with different real and synthetic datasets. Some of the tests are used to disprove existing stereotypes of LET schemes, such as using trade-off curves for evaluation of damping parameters and the GAP criterion for selection of events. We also present a series of synthetic datasets with unknown sources and velocity models (www.ivan-art.com/science/benchmark) that can be used as blind benchmarks for testing different tomographic algorithms. We encourage other users of tomography algorithms to join the program of creating benchmarks that can be used to check existing codes. The program codes and testing datasets will be freely distributed during the poster presentation.
NASA Astrophysics Data System (ADS)
Wahyudi, Eko Januari
2013-09-01
As applications of soft computing techniques advance in the oil and gas industry, the genetic algorithm (GA) is also contributing to geophysical inverse problems, offering better results and greater efficiency in the computational process. In this paper, I show the progress of my work on inverse modeling of time-lapse gravity data using value encoding with an alphabet formulation. The alphabet formulation is designed to characterize positive density change (+Δρ) and negative density change (-Δρ) with respect to a reference value (0 gr/cc). The inversion, which uses discrete model parameters, is computed with GA as the optimization algorithm. The challenge in working with GA is the long computation time, so the GA design in this paper is described through an evaluation of GA operator performance. The performance of several combinations of GA operators (selection, crossover, mutation, and replacement) is tested on a synthetic model of a single-layer reservoir. Analysis of a sufficient number of samples shows the combination SUS-MPCO-QSA/G-ND to be the most promising. A quantitative solution with a higher confidence level for characterizing sharp boundaries of density change zones was obtained by averaging over a sufficient number of model samples.
Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite
NASA Astrophysics Data System (ADS)
Kanakubo, Masaaki; Hagiwara, Masafumi
In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GA. In the Bee system, each chromosome initially tries to find a good solution individually, as a global search. When some chromosome is regarded as superior, the other chromosomes search for solutions around it. However, since the chromosomes for global search are generated randomly, the Bee system lacks global search ability. In Inverse-elitism, on the other hand, an inverse-elite whose gene values are reversed from the corresponding elite is produced. This strategy contributes greatly to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with a pseudo-simplex method is employed for the global search of the Bee system in order to strengthen its global search ability, while strong local search ability is also retained. The proposed method enjoys the synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.
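For real-valued (value-encoded) genes, one natural reading of "gene values reversed" is reflection across the centre of each variable's range; this is only an assumed interpretation, but it shows how cheaply the diversifying individual is produced:

```python
import numpy as np

def inverse_elite(elite, low, high):
    """Reflect the elite's genes across the centre of each variable's
    range, producing a maximally different (diversifying) individual."""
    return low + high - elite

# Example: elite [0.9, 0.2] on [0, 1]^2 gives the inverse-elite [0.1, 0.8].
print(inverse_elite(np.array([0.9, 0.2]), 0.0, 1.0))
```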
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-08-01
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
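One round of the importance-sampling core can be sketched as follows: draw from the Gaussian-mixture proposal and compute self-normalized weights against the (possibly surrogate) posterior; refitting the GM to the weighted sample closes the adaptive loop. The polynomial chaos surrogate of the paper is abstracted into the `logpost` callable.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gm_importance_sample(logpost, means, covs, weights, n, rng):
    """One round of importance sampling with a Gaussian-mixture proposal:
    draw from the GM, weight each draw by posterior/proposal density."""
    comps = rng.choice(len(weights), size=n, p=weights)
    xs = np.array([rng.multivariate_normal(means[k], covs[k]) for k in comps])
    # mixture log-density of the proposal at the drawn points
    logq = np.logaddexp.reduce(
        [np.log(w) + multivariate_normal.logpdf(xs, m, C)
         for w, m, C in zip(weights, means, covs)], axis=0)
    logw = np.array([logpost(x) for x in xs]) - logq
    w = np.exp(logw - logw.max())              # stabilized, self-normalized
    return xs, w / w.sum()
```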
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by the Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then inverted by the ABC algorithm under the dependent model. Numerical results show that the ABC algorithm can successfully recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
A new MCMC algorithm for seismic waveform inversion and corresponding uncertainty analysis
NASA Astrophysics Data System (ADS)
Hong, Tiancong; Sen, Mrinal K.
2009-04-01
It is advantageous to formulate an inverse problem in a Bayesian framework and to solve it fully by stochastically constructing the posterior probability density (PPD) distribution using Markov chain Monte Carlo (MCMC) algorithms. The estimated PPD can also be used to compute several measures of dispersion in the model space. However, for realistic applications, MCMC methods can be computationally expensive and may lead to inaccurate PPD estimation and uncertainty analysis due to strong non-linearity and high dimensionality. In this paper, to address the fundamental issues of efficiency and accuracy in parameter estimation and PPD sampling, we incorporate some new developments into a standard genetic algorithm (GA) to design more powerful algorithms for practical geophysical inverse problems such as non-linear pre-stack seismic waveform inversion. First, a multiscale real-coded hybrid GA is developed to facilitate exploitation of the model space for optimal parameters at a fine scale. It is demonstrated that, by using real coding and especially multiscaling to trade information between the model vectors defined at different resolutions, we attain a substantial speed-up in computation and obtain accurate parameter estimates. This optimization method is further adapted into a new multiscale GA-based MCMC method, in which multiple MCMC chains defined at different scales are run simultaneously in parallel. To gain the benefits of both the faster convergence of coarse scales and the greater detail of fine scales, realizations of chains at different scales are combined for intelligent proposals that facilitate exploration of the model space at the fine scale. In this study, the new MCMC is justified using an analytical example, and its performance on PPD estimation and uncertainty quantification is demonstrated using a non-linear seismic inverse problem. We find that incorporation of multiscaling in the Bayesian approach shows a great promise in solving
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Taehoon; Park, Won-Kwang
2015-09-01
Numerical simulations have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems. However, this application has remained somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for the imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of the first kind of integer order. Numerical experiments with noisy synthetic data support our investigation.
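For readers unfamiliar with the imaging functional, the following toy sketch (our own illustration, not the authors' setup; the geometry, wavenumber, and scatterer locations are made up) shows the standard MUSIC construction: SVD of the multistatic response matrix, projection onto the noise subspace, and a functional that peaks at scatterer locations.

```python
# Toy full-aperture MUSIC imaging sketch for point scatterers in 2-D.
import numpy as np

k = 2 * np.pi                                   # wavenumber (assumption)
scatterers = np.array([[0.3, 0.1], [-0.2, -0.4]])
n_ant = 16
ang = np.linspace(0, 2 * np.pi, n_ant, endpoint=False)
antennas = 3.0 * np.c_[np.cos(ang), np.sin(ang)]

def steering(z):                                # Green's-function-like vector
    r = np.linalg.norm(antennas - z, axis=1)
    return np.exp(1j * k * r) / np.sqrt(r)

K = sum(np.outer(steering(s), steering(s)) for s in scatterers)
U, sv, _ = np.linalg.svd(K)
Un = U[:, len(scatterers):]                     # noise subspace

def music(z):                                   # imaging functional
    g = steering(z); g /= np.linalg.norm(g)
    return 1.0 / np.linalg.norm(Un.conj().T @ g)

print(music(scatterers[0]), music(np.array([1.0, 1.0])))  # peak vs background
```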
The application of inverse Broyden's algorithm for modeling of crack growth in iron crystals.
Telichev, Igor; Vinogradov, Oleg
2011-07-01
In the present paper we demonstrate the use of inverse Broyden's algorithm (IBA) in the simulation of fracture in single iron crystals. The iron crystal structure is treated as a truss system, while the forces between the atoms situated at the nodes are defined by modified Morse inter-atomic potentials. The evolution of lattice structure is interpreted as a sequence of equilibrium states corresponding to the history of applied load/deformation, where each equilibrium state is found using an iterative procedure based on IBA. The results presented demonstrate the success of applying the IBA technique for modeling the mechanisms of elastic, plastic and fracture behavior of single iron crystals. PMID:21042823
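A minimal sketch of the inverse ("bad") Broyden iteration follows; the toy two-equation residual is our own stand-in for the out-of-balance forces of the truss lattice, and all starting values are assumptions.

```python
# Inverse-Broyden sketch: maintain an approximate inverse Jacobian H and
# update it from secant pairs, avoiding repeated Jacobian factorizations.
import numpy as np

def F(x):                              # toy nonlinear "equilibrium" residual
    return np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**3 - 2.0])

x = np.array([2.0, 2.0])
eps = 1e-7                             # finite-difference Jacobian at start
J = np.column_stack([(F(x + eps * e) - F(x)) / eps for e in np.eye(2)])
H = np.linalg.inv(J)                   # initial inverse-Jacobian estimate
f = F(x)
for _ in range(50):
    dx = -H @ f                        # quasi-Newton step
    x_new = x + dx
    f_new = F(x_new)
    df = f_new - f
    # rank-one "bad Broyden" update of the inverse Jacobian
    H += np.outer(dx - H @ df, df) / (df @ df)
    x, f = x_new, f_new
    if np.linalg.norm(f) < 1e-10:
        break
print(x)   # converges to (1, 1)
```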
NASA Astrophysics Data System (ADS)
Kitaura, F. S.; Enßlin, T. A.
2008-09-01
We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest-performing algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift distortions correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
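The generalized Wiener filter at the core of these schemes solves a symmetric positive-definite system; below is a small dense sketch using conjugate gradients. The identity response R and the diagonal covariances are our own assumptions, and the paper applies the operators via FFTs on large grids rather than as explicit matrices.

```python
# Hedged sketch: Wiener filter m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d,
# solved iteratively with conjugate gradients.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)
n = 64
S = np.diag(1.0 / (1.0 + np.arange(n)))       # toy signal covariance (prior)
N = 0.1 * np.eye(n)                           # noise covariance
R = np.eye(n)                                 # trivial response (assumption)
s_true = np.linalg.cholesky(S) @ rng.normal(size=n)
d = R @ s_true + rng.multivariate_normal(np.zeros(n), N)

Si, Ni = np.linalg.inv(S), np.linalg.inv(N)
A = LinearOperator((n, n), matvec=lambda v: Si @ v + R.T @ (Ni @ (R @ v)))
m, info = cg(A, R.T @ (Ni @ d))               # Wiener-filter reconstruction
print(info, np.linalg.norm(m - s_true) / np.linalg.norm(s_true))
```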
Two-dimensional reconstruction algorithm of an inverse-geometry volumetric CT system
NASA Astrophysics Data System (ADS)
Baek, Jongduk; Pelc, Norbert J.
2007-03-01
An inverse-geometry volumetric CT (IGCT) system uses a large source array opposite a smaller detector array. Conventional 2D IGCT reconstruction is performed by using gridding. We describe a 2D IGCT reconstruction algorithm without gridding. The IGCT raw data can be viewed as being composed of many fan beams, each with a detector at its focus. Each projection is undersampled, but the missing samples are provided by other views. In order to get high spatial resolution, zeros are inserted between acquired projection samples in each fan beam, and reconstruction is performed using a direct fan beam reconstruction algorithm. Initial IGCT reconstruction results showed ringing artifacts caused by the fact that the rho samples in the ensemble of views are not equally spaced. We present a new method for correcting these errors that reduces the artifacts to below one Hounsfield unit.
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, the classic Monte Carlo method often remains the preferred choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which is for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain could still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
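A minimal sketch of the inverse-regression step, here classical sliced inverse regression (SIR) on a toy single-index model, is given below; the model, slice count, and dimensions are our own assumptions, not the paper's test cases.

```python
# Sliced inverse regression (SIR) sketch: recover the one-dimensional
# sufficient-dimension-reduction subspace of a toy high-dimensional model.
import numpy as np

rng = np.random.default_rng(2)
n, p = 2000, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[0] = 1.0             # true active direction
y = np.sin(X @ beta) + 0.05 * rng.normal(size=n)

Z = (X - X.mean(0)) / X.std(0)                # standardize predictors
order = np.argsort(y)
slices = np.array_split(order, 10)            # 10 slices of the response
M = sum(len(s) / n * np.outer(Z[s].mean(0), Z[s].mean(0)) for s in slices)
w, V = np.linalg.eigh(M)
direction = V[:, -1]                          # leading SIR direction
print(np.abs(direction @ beta))               # close to 1 -> subspace found
```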
Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L
2006-06-01
The inverse distance method, one of the commonly used methods for analyzing spatial variation of rainfall, is flexible if the order of distances in the method is adjustable. By applying the genetic algorithm (GA), the optimal order of distances can be found to minimize the difference between estimated and measured precipitation data. A case study of the Feitsui reservoir watershed in Taiwan is described in the present paper. The results show that the variability of the order of distances is small when the topography of rainfall stations is uniform. Moreover, when rainfall characteristics are uniform, the horizontal distance between rainfall stations and interpolated locations is the major factor influencing the order of distances. The results also verify that the variable-order inverse distance method is more suitable than the arithmetic average method and the Thiessen Polygons method in describing the spatial variation of rainfall. The efficiency and reliability of hydrologic modeling, and hence of general water resource management, can be significantly improved by more accurate rainfall data interpolated by the variable-order inverse distance method. PMID:16917704
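A hedged sketch of the variable-order idea: below, the exponent of the inverse distance weights is tuned by leave-one-out cross-validation on synthetic gauge data; a simple grid search stands in for the paper's genetic algorithm, and all coordinates and values are made up.

```python
# Variable-order inverse distance weighting: the exponent p is tuned by
# leave-one-out cross-validation on the gauge data.
import numpy as np

rng = np.random.default_rng(3)
stations = rng.uniform(0, 10, (15, 2))               # gauge coordinates
rain = 5 + stations[:, 0] + rng.normal(0, 0.3, 15)   # synthetic rainfall

def idw(pt, xy, val, p):
    d = np.linalg.norm(xy - pt, axis=1)
    if d.min() < 1e-12:                              # exact hit on a gauge
        return val[d.argmin()]
    w = 1.0 / d**p
    return w @ val / w.sum()

def loo_error(p):                                    # leave-one-out RMSE
    err = [rain[i] - idw(stations[i], np.delete(stations, i, 0),
                         np.delete(rain, i), p) for i in range(len(rain))]
    return np.sqrt(np.mean(np.square(err)))

orders = np.linspace(0.5, 6, 23)
best = orders[np.argmin([loo_error(p) for p in orders])]
print("optimal order:", best)
```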
Direct two-dimensional reconstruction algorithm for an inverse-geometry CT system.
Baek, Jongduk; Pelc, Norbert J
2009-02-01
An inverse-geometry computed tomography (IGCT) system uses a large source array opposite a smaller detector array. A previously described IGCT reconstruction algorithm uses gridding, but this gridding step produces blurring in the reconstructed image. In this article, the authors describe a two-dimensional (2D) IGCT reconstruction algorithm without gridding. In the transverse direction, the raw data of the IGCT system can be viewed as being composed of many fan beams. Because the spacing between source spots is larger than the spot width, each fan beam has undersampled projection data, but the missing samples are effectively provided by other undersampled fan beam views. In the proposed method, a direct fan beam reconstruction algorithm is used to process each undersampled fan beam. Initial images with this method showed ring artifacts caused by nonuniform sampling in the radial direction as compared to an ideal fan beam. A new method for correcting this effect was developed. With this correction, high quality images were obtained. The noise performance of the proposed 2D IGCT reconstruction algorithm was investigated, and it was comparable to that of the fan beam system. An MTF study showed that the proposed method achieves better resolution than the gridding method. PMID:19291978
NASA Astrophysics Data System (ADS)
Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji
2015-12-01
Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditures for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified for several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. As shown by Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The transition probabilities obtained in this way can be used to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
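For concreteness, here is a sketch of a Pfeifer-Carraway-style CLV for a two-state retention model together with a brute-force inversion for the retention probability; the state definitions, margins, discount factor, and the one-dimensional root-finding shortcut are our own assumptions (the paper tackles the multi-parameter case with the Flower Pollination Algorithm).

```python
# CLV of a retention/defection Markov chain as a discounted reward,
# CLV = (I - d P)^-1 R, and inversion for the retention probability r
# that achieves a target CLV.
import numpy as np
from scipy.optimize import brentq

margin = np.array([40.0, 0.0])         # per-period net margin by state
d = 1 / 1.1                            # discount factor (assumption)

def clv(r):                            # r = retention probability
    P = np.array([[r, 1 - r],          # active -> active / lost
                  [0.1, 0.9]])         # lost re-acquired with prob. 0.1
    return (np.linalg.inv(np.eye(2) - d * P) @ margin)[0]

target = 250.0                         # desired CLV of an active customer
r_hat = brentq(lambda r: clv(r) - target, 0.01, 0.99)
print(r_hat, clv(r_hat))
```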
Inverse problems with Poisson data: statistical regularization theory, applications and algorithms
NASA Astrophysics Data System (ADS)
Hohage, Thorsten; Werner, Frank
2016-09-01
Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As the most prominent applications we briefly introduce positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years.
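The classical unpenalized instance of these estimators is the Richardson-Lucy/EM iteration for Poisson deconvolution; a minimal sketch follows (the toy emission map, Gaussian PSF, and constant background are assumptions, and the penalized variants reviewed in the paper add a regularization term to the objective).

```python
# Richardson-Lucy (EM) iteration: the maximum-likelihood estimator for a
# Poisson deconvolution problem, via multiplicative updates.
import numpy as np

rng = np.random.default_rng(4)
x_true = np.zeros(100); x_true[30] = 80; x_true[60] = 50   # toy emission map
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2); psf /= psf.sum()
A = lambda v: np.convolve(v, psf, mode="same")             # forward operator
data = rng.poisson(A(x_true) + 1.0)                        # counts + background

x = np.ones_like(x_true, dtype=float)
for _ in range(200):                     # multiplicative EM updates
    x *= A(data / (A(x) + 1.0))          # symmetric psf, so adjoint == forward
print(x[28:33].round(1), x[58:63].round(1))   # spikes recovered approximately
```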
An implementation of differential evolution algorithm for inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan
2013-11-01
Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators including mutation, crossover and selection, similar to the genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies including DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3) were applied together with a binomial type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to both noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective strategy for the parameter estimation by providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic cases of SP were quite consistent with particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing (SA) without cooling to clarify uncertainties in the solutions. Comparison to the M-H algorithm shows that DE performs a fast approximate posterior sampling for the case of low-dimensional inverse geophysical problems.
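The core of strategy 1 (DE/best/1 with binomial crossover) fits in a few lines; in this sketch a toy quadratic misfit stands in for the SP/VES forward modeling, and the control parameters F and CR are our own assumptions.

```python
# DE/best/1 with binomial crossover and greedy selection.
import numpy as np

rng = np.random.default_rng(5)
true = np.array([1.5, -0.7, 2.0])
def misfit(m):                             # stand-in objective
    return np.sum((m - true) ** 2)

NP, D, F, CR = 20, 3, 0.7, 0.9             # population, dims, scale, crossover
pop = rng.uniform(-5, 5, (NP, D))
cost = np.array([misfit(m) for m in pop])
for _ in range(200):
    best = pop[cost.argmin()]
    for i in range(NP):
        r1, r2 = rng.choice(np.delete(np.arange(NP), i), 2, replace=False)
        v = best + F * (pop[r1] - pop[r2])         # DE/best/1 mutation
        jr = rng.integers(D)                       # forced crossover index
        u = np.where((rng.random(D) < CR) | (np.arange(D) == jr), v, pop[i])
        cu = misfit(u)
        if cu <= cost[i]:                          # greedy selection
            pop[i], cost[i] = u, cu
print(pop[cost.argmin()])
```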
NASA Astrophysics Data System (ADS)
Belkebir, Kamal; Saillard, Marc
2005-12-01
This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, measurements in both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions A Abubakar, P M van den Berg and T M
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from special sensor microwave/imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms, performing explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempting to match the observations to the predicted brightness temperatures.
NASA Astrophysics Data System (ADS)
Assimaki, D.; Li, W.; Kalos, A.
2011-10-01
We present a full waveform inversion algorithm of downhole array seismogram recordings that can be used to estimate the inelastic soil behavior in situ during earthquake ground motion. For this purpose, we first develop a new hysteretic scheme that improves upon existing nonlinear site response models by allowing adjustment of the width and length of the hysteresis loop with a relatively small number of soil parameters. The constitutive law is formulated to approximate the response of saturated cohesive materials, and does not account for volumetric changes due to shear leading to pore pressure development and potential liquefaction. We implement the soil model in the forward operator of the inversion, and evaluate the constitutive parameters that maximize the cross-correlation between site response predictions and observations on the ground surface. The objective function is defined in the wavelet domain, which allows equal weight to be assigned across all frequency bands of the non-stationary signal. We evaluate the convergence rate and robustness of the proposed scheme for noise-free and noise-contaminated data, and illustrate good performance of the inversion for signal-to-noise ratios as low as 3. We finally apply the proposed scheme to downhole array data, and show that results compare very well with published data on generic soil conditions and previous geotechnical investigation studies at the array site. By assuming a realistic hysteretic model and estimating the constitutive soil parameters, the proposed inversion accounts for the instantaneous adjustment of soil response to the level of strain and the load path during transient loading, and allows results to be used in predictions of nonlinear site effects during future events.
Cardiac ablation catheter guidance by means of a single equivalent moving dipole inverse algorithm
Lee, Kichang; Lv, Wener; Ter-Ovanesyan, Evgeny; Barley, Maya E.; Voysey, Graham E.; Galea, Anna; Hirschman, Gordon; LeRoy, Kristen; Marini, Robert P.; Barrett, Conor; Armoundas, Antonis A.; Cohen, Richard J.
2015-01-01
We developed and evaluated a novel system for guiding radio-frequency catheter ablation therapy of ventricular tachycardia. This guidance system employs an Inverse Solution Guidance Algorithm (ISGA) utilizing a single equivalent moving dipole (SEMD) localization method. The method and system were evaluated in both a saline-tank phantom model and in-vivo animal (swine) experiments. A catheter with two platinum electrodes spaced 3 mm apart was used as the dipole source in the phantom study. A 40 Hz sinusoidal signal was applied to the electrode pair. In the animal study, four to eight electrodes were sutured onto the right ventricle. These electrodes were connected to a stimulus generator delivering one millisecond duration pacing pulses. Signals were recorded from 64 electrodes, located either on the inner surface of the saline-tank or the body surface of the pig, and then processed by the ISGA to localize the physical or bioelectrical SEMD. In the phantom studies, the guidance algorithm was used to advance a catheter tip to the location of the source dipole. The distance from the final position of the catheter tip to the position of the target dipole was 2.22 ± 0.78 mm in real space and 1.38 ± 0.78 mm in image space (computational space). The ISGA successfully tracked the locations of electrodes sutured on the ventricular myocardium and the movement of an endocardial catheter placed in the animal’s right ventricle. In conclusion, we successfully demonstrated the feasibility of using a SEMD inverse algorithm to guide a cardiac ablation catheter. PMID:23448231
Using Neighborhood-Algorithm Inversion to Test and Calibrate Landscape Evolution Models
NASA Astrophysics Data System (ADS)
Perignon, M. C.; Tucker, G. E.; Van Der Beek, P.; Hilley, G. E.; Arrowsmith, R.
2011-12-01
Landscape evolution models use mass transport rules to simulate the development of topography over timescales too long for humans to observe. The ability of models to reproduce various attributes of real landscapes must be tested against natural systems in which driving forces, boundary conditions, and timescales of landscape evolution can be well constrained. We test and calibrate a landscape evolution model by comparing it with a well-constrained natural experiment using a formal inversion method to obtain best-fitting parameter values. Our case study is the Dragon's Back Pressure Ridge, a region of elevated terrain parallel to the south central San Andreas Fault that serves as a natural laboratory for studying how the timing and spatial distribution of uplift affects topography. We apply an optimization procedure to identify the parameter ranges and combinations that best account for the observed topography. Through the use of repeat forward modeling, direct-search inversion models can be used to convert observations from such natural systems into inferences of the processes that governed their formation. Simple inversion techniques have been used before in landscape evolution modeling, but these are imprecise and computationally expensive. We present the application of a more efficient inversion technique, the Neighborhood Algorithm (NA), to optimize the search for the model parameters values that are most consistent with the formation of the Dragon's Back Pressure Ridge through repeat forward modeling using CHILD. Inversion techniques require the comparison of model results with direct observations to evaluate misfit. For our target landscape, this is done through a series of topographic metrics that include hypsometry, slope-area curves, and channel concavity. NA uses an initial Monte Carlo simulation for which misfits have been calculated to guide a new iteration of forward models. At each iteration, NA uses n-dimensional Voronoi cells to explore the
Adaptive Inverse Hyperbolic Tangent Algorithm for Dynamic Contrast Adjustment in Displaying Scenes
NASA Astrophysics Data System (ADS)
Yu, Cheng-Yi; Ouyang, Yen-Chieh; Wang, Chuin-Mu; Chang, Chein-I.
2010-12-01
Contrast has a great influence on the quality of an image in human visual perception. A poorly illuminated environment can significantly affect the contrast ratio, producing an unexpected image. This paper proposes an Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm to improve the display quality and contrast of a scene. Because digital cameras must maintain the shadow in a middle range of luminance that includes a main object such as a face, a gamma function is generally used for this purpose. However, this function has a severe weakness in that it decreases highlight contrast. To mitigate this problem, contrast enhancement algorithms have been designed to adjust contrast to tune human visual perception. The proposed AIHT determines the contrast levels of an original image as well as a parameter space for different contrast types, so that not only can the original histogram shape features be preserved, but the contrast can also be enhanced effectively. Experimental results show that the proposed algorithm is capable of enhancing the global contrast of the original image adaptively while simultaneously bringing out the details of objects.
An algorithmic framework for Mumford-Shah regularization of inverse problems in imaging
NASA Astrophysics Data System (ADS)
Hohm, Kilian; Storath, Martin; Weinmann, Andreas
2015-11-01
The Mumford-Shah model is a very powerful variational approach for edge preserving regularization of image reconstruction processes. However, it is algorithmically challenging because one has to deal with a non-smooth and non-convex functional. In this paper, we propose a new efficient algorithmic framework for Mumford-Shah regularization of inverse problems in imaging. It is based on a splitting into specific subproblems that can be solved exactly. We derive fast solvers for the subproblems which are key for an efficient overall algorithm. Our method neither requires a priori knowledge of the gray or color levels nor of the shape of the discontinuity set. We demonstrate the wide applicability of the method for different modalities. In particular, we consider the reconstruction from Radon data, inpainting, and deconvolution. Our method can be easily adapted to many further imaging setups. The relevant condition is that the proximal mapping of the data fidelity can be evaluated within a reasonable time. In other words, it can be used whenever classical Tikhonov regularization is possible.
NASA Astrophysics Data System (ADS)
Müller, D.; Böckmann, C.; Kolgotin, A.; Schneidenbach, L.; Chemyakin, E.; Rosemann, J.; Znak, P.; Romanov, A.
2015-12-01
We present a summary on the current status of two inversion algorithms that are used in EARLINET for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on manually controlled inversion of optical data which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithms allow us to derive particle effective radius, and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. We discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work on the basis of a few exemplary simulations with synthetic optical data. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g., the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test robustness of the algorithms toward their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of
NASA Astrophysics Data System (ADS)
Raynaud, M.; Bransier, J.
A space-marching finite difference algorithm is developed for solving the one-dimensional inverse heat conduction problem. The method is easy to apply, stable, and as accurate as the most efficient existing methods. An experimental set-up made of a rectangular parallelepiped polymerized around a woof of thermocouples has been designed especially to validate the method. The thermal conductivity of the test specimen was previously determined with the same set-up, and the specific heat is estimated during the experiments. The estimated surface heat flux is in very good agreement with the heat flux measured by a foil heat flux gage, regardless of the sensor locations. These results show that the method remains effective in spite of the cumulated effects of the errors due to the data acquisition system, to the location and calibration of the sensors, and to the simultaneous estimation of the specific heat.
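A hedged sketch of the space-marching idea follows: a forward solve generates "measured" histories at two interior planes, and the temperature field is then marched plane by plane toward the heated surface. The material values and grids are assumptions, the run is noise-free, and real data would require smoothing (regularization) of the time derivative.

```python
# Space-marching sketch for the 1-D inverse heat conduction problem.
import numpy as np

a, L, nx, nt, t_end = 1e-5, 0.01, 21, 2001, 10.0   # diffusivity, slab, grids
dx, dt = L / (nx - 1), t_end / (nt - 1)
q = lambda t: 5e4 * np.sin(np.pi * t / t_end)      # "unknown" surface flux
k = 15.0                                           # conductivity (assumption)

T = np.zeros((nt, nx))                             # forward explicit solve
r = a * dt / dx**2                                 # r = 0.2 < 0.5: stable
for n in range(nt - 1):
    T[n+1, 1:-1] = T[n, 1:-1] + r * np.diff(T[n], 2)
    T[n+1, 0] = T[n+1, 1] + q(n * dt) * dx / k     # heated face
    T[n+1, -1] = T[n+1, -2]                        # insulated face

j = 4                                              # sensor planes j and j+1
U = np.zeros((nt, j + 2))
U[:, j], U[:, j+1] = T[:, j], T[:, j+1]            # "measured" histories
for col in range(j, 0, -1):                        # march toward the surface
    dTdt = np.gradient(U[:, col], dt)
    U[:, col-1] = (dx**2 / a) * dTdt + 2*U[:, col] - U[:, col+1]
q_est = k * (U[:, 0] - U[:, 1]) / dx               # recovered surface flux
print(abs(q_est[nt//2] - q(t_end/2)) / q(t_end/2)) # mid-time relative error
```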
Self-potential data inversion through a Genetic-Price algorithm
NASA Astrophysics Data System (ADS)
Di Maio, R.; Rani, P.; Piegari, E.; Milano, L.
2016-09-01
A global optimization method based on a Genetic-Price hybrid Algorithm (GPA) is proposed for identifying the source parameters of self-potential (SP) anomalies. The effectiveness of the proposed approach is tested on synthetic SP data generated by simple polarized structures, like sphere, vertical cylinder, horizontal cylinder and inclined sheet. An extensive numerical analysis on signals affected by different percentages of white Gaussian random noise shows that the GPA is able to provide fast and accurate estimations of the true parameters in all tested examples. In particular, the calculation of the root-mean-squared error between the true and inverted SP parameter sets is found to be crucial for the identification of the source anomaly shape. Finally, applications of the GPA to self-potential field data are presented and discussed in light of the results provided by other sophisticated inversion methods.
Efficient Algorithms for Analyzing Segmental Duplications, Deletions, and Inversions in Genomes
NASA Astrophysics Data System (ADS)
Kahn, Crystal L.; Mozes, Shay; Raphael, Benjamin J.
Segmental duplications, or low-copy repeats, are common in mammalian genomes. In the human genome, most segmental duplications are mosaics consisting of pieces of multiple other segmental duplications. This complex genomic organization complicates analysis of the evolutionary history of these sequences. Earlier, we introduced a genomic distance, called duplication distance, that computes the most parsimonious way to build a target string by repeatedly copying substrings of a source string. We also showed how to use this distance to describe the formation of segmental duplications according to a two-step model that has been proposed to explain human segmental duplications. Here we describe polynomial-time exact algorithms for several extensions of duplication distance including models that allow certain types of substring deletions and inversions. These extensions will permit more biologically realistic analyses of segmental duplications in genomes.
NASA Astrophysics Data System (ADS)
Li, Cong; Lei, Jianshe
2014-10-01
In this paper, we focus on the influences of various parameters in the niching genetic algorithm inversion procedure on the results, such as various objective functions, the number of models in each subpopulation, and the critical separation radius. The frequency-waveform integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that using the zero-lag cross-correlation function as the objective yields a model with faster convergence and higher precision than the other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be determined carefully because it directly affects the multiple extreme values in the inversion. We also compare the inverted results from full-band waveform data and surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter is relatively poorer but still of high precision, suggesting that surface-wave frequency-band data can also be used to invert for the crustal structure.
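The favored objective is easy to state; here is a sketch of the normalized zero-lag cross-correlation for three-component waveforms (the synthetic signals and noise level are our own assumptions).

```python
# Normalized zero-lag cross-correlation between observed and synthetic
# three-component waveforms; equals 1.0 for a perfect match.
import numpy as np

def zero_lag_cc(obs, syn):
    """obs, syn: arrays of shape (n_components, n_samples)."""
    num = np.sum(obs * syn)
    den = np.sqrt(np.sum(obs**2) * np.sum(syn**2))
    return num / den

t = np.linspace(0, 10, 500)
obs = np.vstack([np.sin(2 * np.pi * f * t) for f in (0.5, 0.7, 0.9)])
syn = obs + 0.1 * np.random.default_rng(6).normal(size=obs.shape)
print(zero_lag_cc(obs, syn))   # the GA maximizes this (or minimizes 1 - cc)
```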
Three-dimensional inverse modelling of magnetic anomaly sources based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Montesinos, Fuensanta G.; Blanco-Montenegro, Isabel; Arnoso, José
2016-04-01
We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to be different in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm which finds the geometry of the sources by seeking the optimum solution from an initial population of models in successive iterations through an evolutionary process. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data and a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints for the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data of the volcanic island of Gran Canaria (Canary Islands).
An inverse kinematics algorithm for a highly redundant variable-geometry-truss manipulator
NASA Technical Reports Server (NTRS)
Naccarato, Frank; Hughes, Peter
1989-01-01
A new class of robotic arm consists of a periodic sequence of truss substructures, each of which has several variable-length members. Such variable-geometry-truss manipulators (VGTMs) are inherently highly redundant and promise a significant increase in dexterity over conventional anthropomorphic manipulators. This dexterity may be exploited for both obstacle avoidance and controlled deployment in complex workspaces. The inverse kinematics problem for such unorthodox manipulators, however, becomes complex because of the large number of degrees of freedom, and conventional solutions to the inverse kinematics problem become inefficient because of the high degree of redundancy. A solution is presented to this problem based on a spline-like reference curve for the manipulator's shape. Such an approach has a number of advantages: (1) direct, intuitive manipulation of shape; (2) reduced calculation time; and (3) direct control over the effective degree of redundancy of the manipulator. Furthermore, although the algorithm was developed primarily for variable-geometry-truss manipulators, it is general enough for application to a number of manipulator designs.
Evaluation of a Geothermal Prospect Using a Stochastic Joint Inversion Algorithm
NASA Astrophysics Data System (ADS)
Tompson, A. F.; Mellors, R. J.; Ramirez, A.; Dyer, K.; Yang, X.; Trainor-Guitton, W.; Wagoner, J. L.
2013-12-01
A stochastic joint inverse algorithm to analyze diverse geophysical and hydrologic data for a geothermal prospect is developed. The purpose is to improve prospect evaluation by finding an ensemble of hydrothermal flow models that are most consistent with multiple types of data sets. The staged approach combines Bayesian inference within a Markov Chain Monte Carlo (MCMC) global search algorithm. The method is highly flexible and capable of accommodating multiple and diverse datasets as a means to maximize the utility of all available data to understand system behavior. An initial application is made at a geothermal prospect located near Superstition Mountain in the western Salton Trough in California. Readily available data include three thermal gradient exploration boreholes, borehole resistivity logs, magnetotelluric and gravity geophysical surveys, surface heat flux measurements, and other nearby hydrologic and geologic information. Initial estimates of uncertainty in structural or parametric characteristics of the prospect are used to drive large numbers of simulations of hydrothermal fluid flow and related geophysical processes using random realizations of the conceptual geothermal system. Uncertainty in the results is represented within a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the perceived (prior) uncertainties. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-641792.
NASA Astrophysics Data System (ADS)
Patz, Mark David
A non-intrusive buried object classifier for a ground penetrating radar (GPR) system is developed. Various GPR data sets and the implemented processing are described. A model based inversion algorithm that utilizes correlation methodology for target classification is introduced. Experimental data was collected with a continuous wave GPR. Synthetic data was generated with a newly developed software package that implements mathematical models to predict the electromagnetic returns from an underground object. Sample targets and geometries were chosen to produce nine configurations/scenarios for analysis. The real measurement sets for each configuration and the synthetic sets for a family of similar configurations were imaged with the same state-of-the-art signal processing algorithms. The imaged results for the real data measurements were correlated with the imaged results for the synthetic data sets to produce performance measurements, thus producing a procedure that provides a non-invasive assessment of the object and medium, determined by the synthetic data set that maximally correlated with the real data return. Synthetic and experimental results showed good correlations. For the synthetic data, a mathematical model was developed for electromagnetic returns from an object shape (i.e., cylinder, parallelepiped, sphere) composed of a uniform construction (i.e., metal, wood, plastic, clay) within a uniform dielectric material (i.e., air, sand, loam, clay, water). This model was then implemented within a software package, thus providing the ability to generate simulated measurements from any combination of object, construction, and dielectric.
Identify Structural Flaw Location and Type with an Inverse Algorithm of Resonance Inspection
Xu, Wei; Lai, Canhai; Sun, Xin
2015-10-20
To evaluate the fitness-for-service of a structural component and to quantify its remaining useful life, aging and service-induced structural flaws must be quantitatively determined in service or during scheduled maintenance shutdowns. Resonance inspection (RI), a non-destructive evaluation (NDE) technique, distinguishes the anomalous parts from the good parts based on changes in the natural frequency spectra. Known for its numerous advantages, i.e., low inspection cost, high testing speed, and broad applicability to complex structures, RI has been widely used in the automobile industry for quality inspection. However, compared to other contemporary direct visualization-based NDE methods, a more widespread application of RI faces a fundamental challenge because such technology is unable to quantify the flaw details, e.g., location, dimensions, and types. In this study, the applicability of a maximum correlation-based inverse RI algorithm developed by the authors is further studied for various flaw cases. It is demonstrated that a variety of common structural flaws, i.e., stiffness degradation, voids, and cracks, can be accurately retrieved by this algorithm even when multiple different types of flaws coexist. The quantitative relations between the damage identification results and the flaw characteristics are also developed to assist the evaluation of the actual state of health of the engineering structures.
Geophysical inversion with a neighbourhood algorithm-II. Appraising the ensemble
NASA Astrophysics Data System (ADS)
Sambridge, Malcolm
1999-09-01
Monte Carlo direct search methods, such as genetic algorithms, simulated annealing, etc., are often used to explore a finite-dimensional parameter space. They require the solving of the forward problem many times, that is, making predictions of observables from an earth model. The resulting ensemble of earth models represents all `information' collected in the search process. Search techniques have been the subject of much study in geophysics; less attention is given to the appraisal of the ensemble. Often inferences are based on only a small subset of the ensemble, and sometimes a single member. This paper presents a new approach to the appraisal problem. To our knowledge this is the first time the general case has been addressed, that is, how to infer information from a complete ensemble, previously generated by any search method. The essence of the new approach is to use the information in the available ensemble to guide a resampling of the parameter space. This requires no further solving of the forward problem, but from the new `resampled' ensemble we are able to obtain measures of resolution and trade-off in the model parameters, or any combinations of them. The new ensemble inference algorithm is illustrated on a highly non-linear wave-form inversion problem. It is shown how the computation time and memory requirements scale with the dimension of the parameter space and size of the ensemble. The method is highly parallel, and may easily be distributed across several computers. Since little is assumed about the initial ensemble of earth models, the technique is applicable to a wide variety of situations. For example, it may be applied to perform `error analysis' using the ensemble generated by a genetic algorithm, or any other direct search method.
NASA Astrophysics Data System (ADS)
Harker, Brian J.
The measurement of vector magnetic fields on the sun is one of the most important diagnostic tools for characterizing solar activity. The ubiquitous solar wind is guided into interplanetary space by open magnetic field lines in the upper solar atmosphere. Highly-energetic solar flares and Coronal Mass Ejections (CMEs) are triggered in lower layers of the solar atmosphere by the driving forces at the visible "surface" of the sun, the photosphere. The driving forces there tangle and interweave the vector magnetic fields, ultimately leading to an unstable field topology with large excess magnetic energy, and this excess energy is suddenly and violently released by magnetic reconnection, emitting intense broadband radiation that spans the electromagnetic spectrum, accelerating billions of metric tons of plasma away from the sun, and finally relaxing the magnetic field to lower-energy states. These eruptive flaring events can have severe impacts on the near-Earth environment and the human technology that inhabits it. This dissertation presents a novel inversion method for inferring the properties of the vector magnetic field from telescopic measurements of the polarization states (Stokes vector) of the light received from the sun, in an effort to develop a method that is fast, accurate, and reliable. One of the long-term goals of this work is to develop such a method that is capable of rapidly-producing characterizations of the magnetic field from time-sequential data, such that near real-time projections of the complexity and flare-productivity of solar active regions can be made. This will be a boon to the field of solar flare forecasting, and should help mitigate the harmful effects of space weather on mankind's space-based endeavors. To this end, I have developed an inversion method based on genetic algorithms (GA) that have the potential for achieving such high-speed analysis.
RNAiFOLD: a constraint programming algorithm for RNA inverse folding and molecular design.
Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan
2013-04-01
Synthetic biology is a rapidly emerging discipline with long-term ramifications that range from single-molecule detection within cells to the creation of synthetic genomes and novel life forms. Truly phenomenal results have been obtained by pioneering groups--for instance, the combinatorial synthesis of genetic networks, genome synthesis using BioBricks, and hybridization chain reaction (HCR), in which stable DNA monomers assemble only upon exposure to a target DNA fragment, biomolecular self-assembly pathways, etc. Such work strongly suggests that nanotechnology and synthetic biology together seem poised to constitute the most transformative development of the 21st century. In this paper, we present a Constraint Programming (CP) approach to solve the RNA inverse folding problem. Given a target RNA secondary structure, we determine an RNA sequence which folds into the target structure; i.e. whose minimum free energy structure is the target structure. Our approach represents a step forward in RNA design--we produce the first complete RNA inverse folding approach which allows for the specification of a wide range of design constraints. We also introduce a Large Neighborhood Search approach which allows us to tackle larger instances at the cost of losing completeness, while retaining the advantages of meeting design constraints (motif, GC-content, etc.). Results demonstrate that our software, RNAiFold, performs as well or better than all state-of-the-art approaches; nevertheless, our approach is unique in terms of completeness, flexibility, and the support of various design constraints. The algorithms presented in this paper are publicly available via the interactive webserver http://bioinformatics.bc.edu/clotelab/RNAiFold; additionally, the source code can be downloaded from that site. PMID:23600819
NASA Technical Reports Server (NTRS)
Pinkney, J.; Rhee, George F.; Burns, Jack O.; Batuski, D.; Hill, J. M.; Hintzen, P.; Oegerle, W.
1993-01-01
We have amassed a large sample of velocity data for the cluster of galaxies Abell 2634 which contains the wide-angle tail (WAT) radio source 3C 465. Robust indicators of location and scale and their confidence intervals are used to determine if the cD galaxy, containing the WAT, has a significant peculiar motion. We find a cD peculiar radial velocity of 219 ± 98 km s^-1. Further dynamical analyses, including substructure and normality tests, suggest that A 2634 is an unrelaxed cluster whose radio source structure may be bent by the turbulent gas of a recent cluster-subcluster merger.
NASA Astrophysics Data System (ADS)
Fernández Martínez, Juan L.; García Gonzalo, Esperanza; Fernández Álvarez, José P.; Kuzma, Heidi A.; Menéndez Pérez, César O.
2010-05-01
PSO is an optimization technique inspired by the social behavior of individuals in nature (swarms) that has been successfully used in many different engineering fields. In addition, the PSO algorithm can be physically interpreted as a stochastic damped mass-spring system. This analogy has served to introduce the PSO continuous model and to deduce a whole family of PSO algorithms using different finite-differences schemes. These algorithms are characterized in terms of convergence by their respective first and second order stability regions. The performance of these new algorithms is first checked using synthetic functions showing a degree of ill-posedness similar to that found in many geophysical inverse problems, having their global minimum located in a very narrow, flat valley or surrounded by multiple local minima. Finally we present the application of these PSO algorithms to the analysis and solution of a VES inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. PSO family members are successfully compared to other well known global optimization algorithms (binary genetic algorithms and simulated annealing) in terms of their respective convergence curves and the sea water intrusion depth posterior histograms.
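A sketch of the canonical inertia-weight member of this family is given below; its velocity update is the explicit finite-difference form of the damped mass-spring model. The Rastrigin stand-in objective and the parameter values w, c1, c2 are assumptions, not the paper's settings.

```python
# Canonical inertia-weight PSO; the velocity line is a damped-spring
# update pulling each particle toward its personal and global bests.
import numpy as np

rng = np.random.default_rng(7)
def rastrigin(x):                      # multimodal stand-in for the VES misfit
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

NP, D, w, c1, c2 = 30, 4, 0.72, 1.49, 1.49
x = rng.uniform(-5, 5, (NP, D)); v = np.zeros((NP, D))
pbest, pcost = x.copy(), rastrigin(x)
for _ in range(300):
    g = pbest[pcost.argmin()]                         # global best
    r1, r2 = rng.random((2, NP, D))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)       # damped-spring update
    x = x + v
    c = rastrigin(x)
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
print(pcost.min(), pbest[pcost.argmin()])
```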
NASA Astrophysics Data System (ADS)
Bao, Xingxian; Cao, Aixia; Zhang, Jing
2016-07-01
Modal parameter estimation plays an important role in structural health monitoring. Accurately estimating the modal parameters of structures is more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm for solving the partially described inverse singular value problem (PDISVP), combined with the complex exponential (CE) method, to estimate the modal parameters. The PDISVP solving method reconstructs an L2-norm optimized (filtered) data matrix from the measured (noisy) data matrix, when the prescribed data constraints are one or several sets of singular triplets of the matrix. The measured data matrix is Hankel structured, and is constructed from the measured impulse response function (IRF). The reconstructed matrix must maintain the Hankel structure, and be lowered in rank as well. Once the filtered IRF is obtained, the CE method can be applied to extract the modal parameters. Two physical experiments, a steel cantilever beam with 10 accelerometers mounted and a steel plate with 30 accelerometers mounted, each excited by an impulsive load, are investigated to test the applicability of the proposed scheme. In addition, a consistency diagram is proposed to examine the agreement among the modal parameters estimated from the different accelerometers. Results indicate that the PDISVP-CE method can significantly remove noise from measured signals and accurately estimate the modal frequencies and damping ratios.
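A hedged sketch of the filtering step: build a Hankel matrix from a noisy IRF, truncate its SVD at the model order, and average anti-diagonals to restore the Hankel structure. This is a standard low-rank surrogate for the PDISVP reconstruction, not the authors' exact algorithm; the signal, noise level, and rank are assumptions.

```python
# Hankel/SVD denoising of a noisy impulse response function (IRF); the
# CE method would then be applied to the filtered IRF.
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(0, 2, 0.004)
irf = (np.exp(-0.5*t) * np.sin(2*np.pi*12*t)
       + np.exp(-0.8*t) * np.sin(2*np.pi*31*t))
noisy = irf + 0.2 * rng.normal(size=t.size)

m = t.size // 2
H = np.lib.stride_tricks.sliding_window_view(noisy, m)   # Hankel rows
U, s, Vt = np.linalg.svd(H, full_matrices=False)
rank = 4                                    # 2 modes -> rank 4 (assumption)
Hf = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-truncated approximation

filtered = np.zeros(t.size)                 # average along anti-diagonals
for i in range(t.size):
    filtered[i] = np.mean(np.diag(np.fliplr(Hf), m - 1 - i))
print(np.linalg.norm(filtered - irf) / np.linalg.norm(irf))
```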
Wang, Hong; Wang, Xi-cheng
2014-02-21
Metabolism is a very important cellular process and its malfunction contributes to human disease. Therefore, building dynamic models for metabolic networks from experimental data in order to analyze biological processes rationally has attracted a lot of attention. Owing to technical limitations, some unknown parameters contained in models need to be estimated effectively by computational methods. Generally, parameter estimation problems for nonlinear biological networks are known to be ill-conditioned and multimodal. In particular, as the number of parameters increases and their ranges widen, many optimization algorithms often fail to find a global solution. In this paper, a two-stage variable-factor Bregman regularization homotopy method is proposed. Discrete homotopy is used to identify the possible extreme region and continuous homotopy is executed for the purpose of stable path tracing in that special region. Meanwhile, Latin hypercube sampling is introduced to obtain good initial guesses and a perturbation strategy is developed to jump out of local optima. Three metabolic network inverse problems are investigated to demonstrate the effectiveness of the proposed method. PMID:24060619
Johnson, S A; Zhou, Y; Tracy, M K; Berggren, M J; Stenger, F
1984-01-01
Solving the inverse scattering problem for the Helmholtz wave equation without employing the Born or Rytov approximations is a challenging problem, but some slow iterative methods have been proposed. One such method suggested by us is based on solving systems of nonlinear algebraic equations that are derived by applying the method of moments to a sinc basis function expansion of the fields and scattering potential. In the past, we have solved these equations for a 2-D object of n by n pixels in a time proportional to n^5. In the present paper, we demonstrate a new method based on FFT convolution and the concept of backprojection which solves these equations in time proportional to n^3 log(n). Several numerical examples are given for images up to 7 by 7 pixels in size. Analogous algorithms to solve the Riccati wave equation in n^3 log(n) time are also suggested, but not verified. A method is suggested for interpolating measurements from one detector geometry to a new perturbed detector geometry whose measurement points fall on an FFT-accessible, rectangular grid, thereby rendering many detector geometries compatible for use by our fast methods. PMID:6540908
Mass Substructure in Abell 3128
NASA Astrophysics Data System (ADS)
McCleary, J.; dell'Antonio, I.; Huwe, P.
2015-05-01
We perform a detailed two-dimensional weak gravitational lensing analysis of the nearby (z = 0.058) galaxy cluster Abell 3128 using deep ugrz imaging from the Dark Energy Camera (DECam). We have designed a pipeline to remove instrumental artifacts from DECam images and stack multiple dithered observations without inducing a spurious ellipticity signal. We develop a new technique to characterize the spatial variation of the point-spread function that enables us to circularize the field to better than 0.5% and thereby extract the intrinsic galaxy ellipticities. By fitting photometric redshifts to sources in the observation, we are able to select a sample of background galaxies for weak-lensing analysis free from low-redshift contaminants. Photometric redshifts are also used to select a high-redshift galaxy subsample with which we successfully isolate the signal from an interloping z = 0.44 cluster. We estimate the total mass of Abell 3128 by fitting the tangential ellipticity of background galaxies with the weak-lensing shear profile of a Navarro-Frenk-White (NFW) halo and also perform NFW fits to substructures detected in the 2D mass maps of the cluster. This study yields one of the highest resolution mass maps of a low-z cluster to date and is the first step in a larger effort to characterize the redshift evolution of mass substructures in clusters.
NASA Astrophysics Data System (ADS)
Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao
2016-08-01
We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with the Tikhonov regularization. We propose a new data-weighting method to minimize the initial model dependence that enhances the convergence stability. On a laptop with a CPU of i7-5700HQ@3.5 GHz, the inversion iteration of a 200 layered input model with a single receiver takes only 0.34 s, while it increases to only 0.53 s for the data from four receivers at the same depth. For the case of four receivers at different depths, the inversion iteration runtime increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of the loop geometry is significant at early times and vanishes gradually along the diffusion of the TEM field. For a stratified earth, inversion of data from more than one receiver is useful for noise reduction to obtain a more credible layered-earth model. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not yield significant improvement in recovering the resistive layer. Even with a down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above the shielded resistive layer. However, our modeling demonstrates remarkable improvement in detecting the resistive layer with receivers in or under this layer.
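The outer iteration is standard Gauss-Newton with Tikhonov regularization; a minimal sketch follows, in which a toy exponential-decay model stands in for the layered-earth TEM response and the damping weight and finite-difference Jacobian are our own assumptions.

```python
# Generic Gauss-Newton iteration with Tikhonov regularization:
# solve (J^T J + lam I) dm = J^T r - lam m at each step.
import numpy as np

rng = np.random.default_rng(9)
x = np.linspace(0, 3, 30)
def forward(m):                        # mildly nonlinear stand-in model
    return m[0] * np.exp(-m[1] * x)

m_true = np.array([2.0, 1.5])
d = forward(m_true) + 0.01 * rng.normal(size=x.size)

m = np.array([1.0, 1.0])               # initial model
lam = 1e-4                             # Tikhonov weight (assumption)
for _ in range(15):
    r = d - forward(m)
    J = np.zeros((x.size, m.size))     # finite-difference Jacobian
    for i in range(m.size):
        mp = m.copy(); mp[i] += 1e-7
        J[:, i] = (forward(mp) - forward(m)) / 1e-7
    dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r - lam * m)
    m = m + dm
print(m)   # close to m_true
```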
Jiang, Xiaoming; Van den Broek, Wouter; Koch, Christoph T
2016-04-01
Inverse dynamical photon scattering (IDPS), an artificial-neural-network-based algorithm for three-dimensional quantitative imaging in optical microscopy, is introduced. Because the inverse problem entails numerical minimization of an explicit error metric, it becomes possible to freely choose a more robust metric, to introduce regularization of the solution, and to retrieve unknown experimental settings or microscope values, while the starting guess is simply set to zero. The regularization is accomplished through an alternating-directions augmented Lagrangian approach, implemented on a graphics processing unit. These improvements are demonstrated on open-source experimental data, retrieving three-dimensional amplitude and phase for a thick specimen. PMID:27136994
NASA Astrophysics Data System (ADS)
Hetmaniok, Edyta
2015-08-01
In this paper a procedure for solving the inverse problem of binary alloy solidification in a casting mould is presented. The proposed approach is based on a mathematical model suitable for describing the investigated solidification process, the lever-arm model describing the macrosegregation process, the finite element method for solving the direct problem, and the artificial bee colony algorithm for minimizing the functional expressing the error of the approximate solution. The goal of the discussed inverse problem is the reconstruction of the heat transfer coefficient and the distribution of temperature in the investigated region on the basis of known temperature measurements.
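For orientation, a compact artificial bee colony loop of the kind used to minimize such a misfit functional might look as follows (a generic sketch, not the author's implementation; `f` would be the error functional between measured and simulated temperatures):

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=30, iters=200, rng=None):
    """Minimal artificial bee colony: employed, onlooker and scout phases."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = lo + rng.random((n_food, dim)) * (hi - lo)     # food sources
    fit = np.array([f(xi) for xi in x])
    trials = np.zeros(n_food, dtype=int)

    def neighbour(i):
        k = rng.integers(n_food - 1)
        k += k >= i                        # pick a partner source != i
        j = rng.integers(dim)
        v = x[i].copy()
        v[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
        return np.clip(v, lo, hi)

    for _ in range(iters):
        probs = fit.max() - fit + 1e-12    # onlookers prefer low misfit
        probs /= probs.sum()
        for i in list(range(n_food)) + list(rng.choice(n_food, n_food, p=probs)):
            v = neighbour(i)
            fv = f(v)
            if fv < fit[i]:
                x[i], fit[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:   # scouts abandon stale sources
            x[i] = lo + rng.random(dim) * (hi - lo)
            fit[i] = f(x[i])
            trials[i] = 0
    best = fit.argmin()
    return x[best], fit[best]
```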
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Dai, Xiangjun; He, Xiaoyuan
2015-08-01
The inverse compositional Gauss-Newton (IC-GN) algorithm is one of the most popular sub-pixel registration algorithms in digital image correlation (DIC). Compared with the traditional forward additive Newton-Raphson (FA-NR) algorithm, the IC-GN algorithm can achieve the same accuracy in less time. However, there are no clear results regarding the noise robustness of the IC-GN algorithm, and its computational efficiency is still in need of further improvement. In this paper, a theoretical model of the IC-GN algorithm was derived based on the sum-of-squared-differences correlation criterion and linear interpolation. The model indicates that the IC-GN algorithm has better noise robustness than the FA-NR algorithm and shows no noise-induced bias if the gray gradient operator is chosen properly. Both numerical simulations and experiments show good agreement with the theoretical predictions. Furthermore, a seed point-based parallel method is proposed to improve the calculation speed. Compared with the recently proposed path-independent method, our method is feasible and practical, and it can maximize the computing speed using an improved initial guess. Moreover, we compared the computational efficiency of our method with that of the reliability-guided method using a four-point bending experiment, and the results show that the computational efficiency is greatly improved. The proposed parallel IC-GN algorithm has good noise robustness and is expected to be a practical option for real-time DIC.
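A translation-only IC-GN loop illustrates why the method is fast: the steepest-descent images and the Hessian are built once on the reference subset, and only a cheap residual and a 2x2 solve are repeated. This is a sketch under simplifying assumptions (pure translation warp, bilinear interpolation), not the authors' code; practical DIC uses higher-order warps and interpolation kernels:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def icgn_translation(ref, cur, center, half=15, iters=20, tol=1e-4):
    """IC-GN subpixel registration with a pure translation warp p = (u, v)."""
    cy, cx = center
    ys, xs = np.mgrid[cy - half:cy + half + 1, cx - half:cx + half + 1]
    f = ref[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    # steepest-descent images from the *reference* gradients, computed once
    fy, fx = np.gradient(f)
    J = np.stack([fx.ravel(), fy.ravel()], axis=1)
    H = J.T @ J                                  # 2x2 Hessian, precomputed
    p = np.zeros(2)                              # (u, v) initial guess
    for _ in range(iters):
        g = map_coordinates(cur, [ys.ravel() + p[1], xs.ravel() + p[0]], order=1)
        e = g - f.ravel()                        # residual on the subset
        dp = np.linalg.solve(H, J.T @ e)
        p -= dp                                  # inverse-compositional update
        if np.linalg.norm(dp) < tol:
            break
    return p
```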
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
NASA Technical Reports Server (NTRS)
Bayo, Eduardo; Ledesma, Ragnar
1993-01-01
A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end-effector trajectory.
Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.
2011-01-01
Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to laterally homogeneous (or very smoothly laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and hence does not have such a limitation. Waveforms of Rayleigh waves are highly sensitive to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions are found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that S-wave velocities can be recovered successfully, with errors no more than 10%, for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, for which conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
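In outline, the GA update loop for such a waveform inversion might look as follows (a generic sketch with a hypothetical `misfit` callable wrapping the finite-difference forward modeling, not the authors' code):

```python
import numpy as np

def ga_waveform_inversion(misfit, bounds, pop=80, gens=100, pm=0.05, rng=None):
    """Minimize a waveform misfit over S-wave velocity models with a simple GA."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    models = lo + rng.random((pop, dim)) * (hi - lo)
    for _ in range(gens):
        cost = np.array([misfit(m) for m in models])   # FD modeling inside
        parents = models[np.argsort(cost)[:pop // 2]]  # truncation selection
        kids = []
        while len(kids) < pop - len(parents):
            i, j = rng.integers(len(parents), size=2)
            alpha = rng.random(dim)                    # blend crossover
            child = alpha * parents[i] + (1 - alpha) * parents[j]
            mask = rng.random(dim) < pm                # uniform mutation
            child[mask] = lo[mask] + rng.random(mask.sum()) * (hi - lo)[mask]
            kids.append(child)
        models = np.vstack([parents, kids])
    cost = np.array([misfit(m) for m in models])
    return models[np.argmin(cost)]
```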
Song, Yang; Zhang, Bin; He, Anzhi
2006-11-01
A novel algebraic iterative algorithm based on deflection tomography is presented. The algorithm is derived from the essentials of deflection tomography with a linear expansion of the local basis functions. With this algorithm the tomographic problem is finally reduced to the solution of a set of linear equations. The algorithm is demonstrated by mapping a three-peak Gaussian simulated temperature field. Compared with traditional deflection algorithms, it provides a significant improvement in reconstruction accuracy, especially when noisy data are present. In the density diagnosis of a hypersonic wind tunnel, the algorithm is adopted to reconstruct density distributions of an axisymmetric flow field. One cross section of the reconstruction is selected for comparison with the inverse Abel transform algorithm. Results show that the novel algorithm achieves an accuracy equivalent to that of the inverse Abel transform algorithm, while being more versatile because it is applicable to arbitrary kinds of distribution. PMID:17068552
Kinugawa, Tohru
2014-02-15
This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first one is to reveal the linear relation between the transit-time T and the travel-length X hidden behind the isochronicity problem that is usually discussed in terms of the nonlinear equation of motion d^2X/dt^2 + dU/dX = 0 with U(X) being an unknown potential. Second, the isochronicity condition is extended for the possible Abel-transform approach to designing the isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for the oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27-29]. The same formula is used to treat the non-periodic motion that is driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π) ∫_0^E dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit-time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we have introduced the extended isochronicity condition: typically, τ = T_A(E) + T_N(E) where τ is a constant period, T_A(E) is the transit-time in the Abel type [A-type] region spanning X > 0 and T_N(E) is that in the Non-Abel type [N-type] region covering X < 0. As for the A-type region in X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region in X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation. In discussing
NASA Astrophysics Data System (ADS)
Kinugawa, Tohru
2014-02-01
This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first one is to reveal the linear relation between the transit-time T and the travel-length X hidden behind the isochronicity problem that is usually discussed in terms of the nonlinear equation of motion d^2X/dt^2 + dU/dX = 0 with U(X) being an unknown potential. Second, the isochronicity condition is extended for the possible Abel-transform approach to designing the isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for the oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27-29]. The same formula is used to treat the non-periodic motion that is driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π) ∫_0^E dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit-time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we have introduced the extended isochronicity condition: typically, τ = T_A(E) + T_N(E) where τ is a constant period, T_A(E) is the transit-time in the Abel type [A-type] region spanning X > 0 and T_N(E) is that in the Non-Abel type [N-type] region covering X < 0. As for the A-type region in X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region in X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation. In discussing the isochronicity problem, there has been no attempt of N-type regions that are
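Numerically, the Abel operator above is easy to evaluate once the square-root endpoint singularity is removed by substitution. A sketch mirroring the normalization written in the abstract (illustrative only; constants and units follow the paper's convention):

```python
import numpy as np

def abel_operator(T, E, n=400):
    """A[T](E) = (1/sqrt(pi)) * integral_0^E T(U) dU / sqrt(E - U).

    The substitution U = E sin^2(theta) gives dU = 2E sin(theta)cos(theta) dtheta
    and sqrt(E - U) = sqrt(E) cos(theta), removing the singularity at U = E.
    """
    theta = np.linspace(0.0, np.pi / 2, n)
    U = E * np.sin(theta) ** 2
    integrand = 2.0 * np.sqrt(E) * np.sin(theta) * T(U)
    return np.trapz(integrand, theta) / np.sqrt(np.pi)

# sanity check: a constant transit-time T(E) = tau gives A[T](E) proportional
# to sqrt(E), i.e. X(U) ~ sqrt(U), the isochronous harmonic potential U ~ X^2.
```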
Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru
2016-05-10
The TaBoo SeArch (TBSA) algorithm [Harada et al., J. Comput. Chem. 2015, 36, 763-772; Harada et al., Chem. Phys. Lett. 2015, 630, 68-75] was recently proposed as an enhanced conformational sampling method for reproducing biologically relevant rare events of a given protein. In TBSA, an inverse histogram of the original distribution, mapped onto a set of reaction coordinates, is constructed from trajectories obtained by multiple short-time molecular dynamics (MD) simulations. Rarely occurring states of a given protein are statistically selected as new initial states based on the inverse histogram, and resampling is performed by restarting the MD simulations from the new initial states to promote the conformational transition. In this process, the definition of the inverse histogram, which characterizes the rarely occurring states, is crucial for the efficiency of TBSA. In this study, we propose a simple modification of the inverse histogram to further accelerate the convergence of TBSA. As demonstrations of the modified TBSA, we applied it to (a) hydrogen bonding rearrangements of Met-enkephalin, (b) large-amplitude domain motions of Glutamine-Binding Protein, and (c) folding processes of the B domain of Staphylococcus aureus Protein A. All demonstrations numerically proved that the modified TBSA reproduced these biologically relevant rare events with nanosecond-order simulation times, although a set of microsecond-order canonical MD simulations failed to reproduce the rare events, indicating the high efficiency of the modified TBSA. PMID:27070761
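The selection step is essentially inverse-histogram importance sampling over the explored reaction-coordinate space. A one-dimensional sketch of how rare states might be picked (illustrative only; the cited papers define the actual inverse histogram):

```python
import numpy as np

def select_rare_states(coords, n_bins=30, n_pick=10, rng=None):
    """Pick restart snapshots with probability inverse to their bin occupancy."""
    rng = np.random.default_rng(rng)
    counts, edges = np.histogram(coords, bins=n_bins)
    idx = np.digitize(coords, edges[1:-1])   # bin index of each snapshot
    w = 1.0 / counts[idx]                    # inverse-histogram weight (>= 1 count)
    w /= w.sum()
    return rng.choice(len(coords), size=n_pick, replace=False, p=w)
```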
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic parameter adaptation operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the dynamic parameter adaptation process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
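Rechenberg's 1/5 success rule, used here to drive the dynamic parameter adaptation, contracts or expands a step size depending on the recent success rate. A minimal sketch (the learning-factor coupling specific to DACS-CO is not reproduced):

```python
def adapt_step(sigma, success_rate, c=0.85):
    """Rechenberg's 1/5 success rule: aim for ~20% of trial moves improving."""
    if success_rate > 0.2:
        return sigma / c    # too many successes: widen the search
    if success_rate < 0.2:
        return sigma * c    # too few successes: narrow the search
    return sigma
```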
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed; for this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems.
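The SEM gradient used inside HMC can be sketched as an ensemble of random directional derivatives requiring only forward runs (a generic illustration, not the authors' formulation):

```python
import numpy as np

def sem_gradient(f, x, n_dir=16, eps=1e-4, rng=None):
    """Gradient estimate from random directional derivatives of a black-box f.

    For unit directions u, E[u u^T] = I/d, so averaging (Df_u) u and
    rescaling by the dimension d recovers the gradient in expectation.
    """
    rng = np.random.default_rng(rng)
    d = len(x)
    g = np.zeros(d)
    f0 = f(x)                                   # one reference forward run
    for _ in range(n_dir):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (f(x + eps * u) - f0) / eps * u    # directional derivative * u
    return g * d / n_dir
```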
NASA Astrophysics Data System (ADS)
Driver, Simon
1999-07-01
We request 24 orbits to obtain a deep mosaic {6 * 4-orbit pointings} of the central region of A868, a rich Abell cluster which we have imaged comprehensively from the ground. The objective is to identify and characterise the morphological nature of the dwarf galaxy population{s} responsible for the steep upturn seen in this cluster's luminosity function. While similar upturns have been reported in many clusters, the specifics of the dwarf population remain unknown as these objects cannot be resolved from the ground. What type of dwarf galaxies are they? Is there more than one population contributing? How are they clustered? By obtaining deep high-resolution HST WFPC2 imaging over a central field roughly 7.5' * 3.75' we will be able to measure morphologies, light-profiles and the clustering properties of the dwarf population{s} down to M_I = -16 mags {H_o = 75 km s^-1 Mpc^-1}. Although we shall primarily concentrate on the dwarf galaxies, we will also recover the cluster's morphological luminosity distributions for ellipticals, spirals and irregulars over a broad absolute magnitude range {-24 < M_I < -16 mags} as well as the more quantitative bivariate brightness distribution {-24 < M_I < -16 mags, 17.0 < mu_e^I < 25 mags per sq arcsec}. Comparing these results to those recently derived for the general field will provide an insight into the environmental influences on morphology and surface brightness.
NASA Astrophysics Data System (ADS)
Zhang, Wei; Zhao, Chunhui; He, Xing; Zhang, Weidong
2016-05-01
In this paper, the structure of the inverse of a multi-input/multi-output square transfer function matrix is explored. Instead of complicated advanced mathematical tools, only basic results of complex analysis are used in the analysis. By employing the Laurent expansion, an elegant structural form is obtained for the inverse of the transfer function matrix. This expansion form is the key to deriving an analytical solution to the inner-outer factorisation for both stable and unstable plants. Different from other computational algorithms, the obtained inner-outer factorisation is given in an analytical form; the solution is exact and without approximation. Numerical examples are provided to verify the correctness of the obtained results.
NASA Astrophysics Data System (ADS)
Boukabara, S. A.; Garrett, K.
2014-12-01
A one-dimensional variational retrieval system has been developed, capable of producing temperature and water vapor profiles in clear, cloudy and precipitating conditions. The algorithm, known as the Microwave Integrated Retrieval System (MiRS), is currently running operationally at the National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite Data and Information Service (NESDIS), and is applied to a variety of data from the AMSU-A/MHS sensors on board the NOAA-18, NOAA-19, and MetOp-A/B polar satellite platforms, as well as SSMI/S on board both DMSP F-16 and F-18, and the NPP ATMS sensor. MiRS inverts microwave brightness temperatures into atmospheric temperature and water vapor profiles, along with hydrometeors and surface parameters, simultaneously. This coupled atmosphere/surface inversion allows for more accurate retrievals in the lower tropospheric layers by accounting for the impact of surface emissivity on the measurements. It also allows inversion of the soundings in all-weather conditions, thanks to the incorporation of the hydrometeor parameters in the inverted state vector, as well as the inclusion of the emissivity in the same state vector, which is accounted for dynamically for the highly variable surface conditions found under precipitating atmospheres. The inversion is constrained in precipitating conditions by the inclusion of covariances for hydrometeors, to take advantage of the natural correlations that exist between temperature and water vapor and the liquid and ice cloud along with rain water. In this study, we present a full assessment of temperature and water vapor retrieval performance in all-weather conditions and over all surface types (ocean, sea ice, land, and snow) using matchups with radiosondes as well as Numerical Weather Prediction and other satellite retrieval algorithms as references. An emphasis is placed on retrievals in cloudy and precipitating atmospheres, including extreme weather events
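For orientation, the cost function minimized by a one-dimensional variational retrieval of this kind is the standard 1DVAR form (a textbook statement, not MiRS-specific notation):

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf T}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```

Here x is the state vector (temperature, water vapor, hydrometeors, emissivity), x_b the background state, B the background-error covariance carrying the temperature/water-vapor/hydrometeor correlations described above, y the measured brightness temperatures, H the radiative transfer operator, and R the measurement-error covariance.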
NASA Astrophysics Data System (ADS)
Reuter, M.; Bösch, H.; Bovensmann, H.; Bril, A.; Buchwitz, M.; Butz, A.; Burrows, J. P.; O'Dell, C. W.; Guerlet, S.; Hasekamp, O.; Heymann, J.; Kikuchi, N.; Oshchepkov, S.; Parker, R.; Pfeifer, S.; Schneising, O.; Yokota, T.; Yoshida, Y.
2012-09-01
We analyze an ensemble of seven XCO2 retrieval algorithms for SCIAMACHY and GOSAT. The ensemble spread can be interpreted as regional uncertainty and can help to identify locations for new TCCON validation sites. Additionally, we introduce the ensemble median algorithm EMMA, combining individual soundings of the seven algorithms into one new dataset. The ensemble takes advantage of the algorithms' independent developments. We find ensemble spreads that are often <1 ppm but rise up to 2 ppm, especially in the tropics and East Asia. On the basis of gridded monthly averages, we compare EMMA and all individual algorithms with TCCON and CarbonTracker model results (potential outliers, north/south gradient, seasonal (peak-to-peak) amplitude, standard deviation of the difference). Our findings show that EMMA is a promising candidate for inverse modeling studies. Compared to CarbonTracker, the satellite retrievals find consistently larger north/south gradients (by 0.3 ppm-0.9 ppm) and seasonal amplitudes (by 1.5 ppm-2.0 ppm).
NASA Astrophysics Data System (ADS)
Reuter, M.; Bösch, H.; Bovensmann, H.; Bril, A.; Buchwitz, M.; Butz, A.; Burrows, J. P.; O'Dell, C. W.; Guerlet, S.; Hasekamp, O.; Heymann, J.; Kikuchi, N.; Oshchepkov, S.; Parker, R.; Pfeifer, S.; Schneising, O.; Yokota, T.; Yoshida, Y.
2013-02-01
We analyze an ensemble of seven XCO2 retrieval algorithms for SCIAMACHY (scanning imaging absorption spectrometer for atmospheric chartography) and GOSAT (greenhouse gases observing satellite). The ensemble spread can be interpreted as regional uncertainty and can help to identify locations for new TCCON (total carbon column observing network) validation sites. Additionally, we introduce the ensemble median algorithm EMMA, combining individual soundings of the seven algorithms into one new data set. The ensemble takes advantage of the algorithms' independent developments. We find ensemble spreads that are often < 1 ppm but rise up to 2 ppm, especially in the tropics and East Asia. On the basis of gridded monthly averages, we compare EMMA and all individual algorithms with TCCON and CarbonTracker model results (potential outliers, north/south gradient, seasonal (peak-to-peak) amplitude, standard deviation of the difference). Our findings show that EMMA is a promising candidate for inverse modeling studies. Compared to CarbonTracker, the satellite retrievals find consistently larger north/south gradients (by 0.3-0.9 ppm) and seasonal amplitudes (by 1.5-2.0 ppm).
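The EMMA combination itself is simple: per grid cell and month, take the median of the available algorithms' values and use the max-min spread as a regional uncertainty proxy. A sketch (array layout is an assumption):

```python
import numpy as np

def emma_combine(x):
    """x: shape (n_algorithms,) XCO2 values for one grid cell and month,
    NaN where an algorithm has no sounding. Returns (median, spread)."""
    v = x[np.isfinite(x)]
    if v.size == 0:
        return np.nan, np.nan
    return np.median(v), v.max() - v.min()
```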
NASA Astrophysics Data System (ADS)
Padhi, Amit; Mallick, Subhashis
2014-03-01
Inversion of band- and offset-limited single-component (P-wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation, but they add to the complexity of the inversion algorithm because they require simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as Pareto-optimal solutions. Casting such problems as a single-objective optimization therefore yields one out of the entire set of Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data, both when the subsurface is considered isotropic and when it is transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical
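At the heart of the non-dominated sorting step is the Pareto dominance test, which replaces the weighted-sum scalarization criticized above. A minimal sketch for minimization:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(objs):
    """Indices of non-dominated solutions (the first NSGA front)."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]
```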
NASA Technical Reports Server (NTRS)
Moghaddam, Mahta
1995-01-01
In this work, the application of an inversion algorithm based on a nonlinear optimization technique to retrieve forest parameters from multifrequency polarimetric SAR data is discussed. The approach discussed here allows for retrieving and monitoring changes in forest parameters in a quantitative and systematic fashion using SAR data. The parameters to be inverted directly from the data are the electromagnetic scattering properties of the forest components, such as their dielectric constants and size characteristics. Once these are known, attributes such as canopy moisture content can be obtained, which are useful in ecosystem models.
Rapid Inversion of Angular Deflection Data for Certain Axisymmetric Refractive Index Distributions
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Greenberg, P. S.
1994-01-01
Certain functions useful for representing axisymmetric refractive-index distributions are shown to have exact solutions for the Abel transformation of the resulting angular deflection data. An advantage of this procedure over direct numerical Abel inversion is that least-squares curve fitting is a smoothing process that reduces the noise sensitivity of the computation.
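The paper works with angular deflection data; as a simpler illustration of the same idea (fit the measured projection with basis functions whose transform pair is known in closed form, so the inversion is analytic), here is the classic Gaussian Abel-transform pair (function names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# The Abel transform of a Gaussian is again a Gaussian:
#   f(r) = a*exp(-r^2/s^2)  ->  F(y) = a*s*sqrt(pi)*exp(-y^2/s^2)
# so least-squares fitting of projection data with Gaussians yields the
# radial profile in closed form, with no numerical Abel inversion step.

def projection(y, a, s):
    return a * s * np.sqrt(np.pi) * np.exp(-(y / s) ** 2)

def fit_radial_profile(y_data, F_data, p0=(1.0, 1.0)):
    (a, s), _ = curve_fit(projection, y_data, F_data, p0=p0)
    return lambda r: a * np.exp(-(r / s) ** 2)  # exact inverse of the fit
```

The smoothing benefit noted in the abstract comes for free: the fit averages over the noise before the (exact) inversion is applied.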
NASA Astrophysics Data System (ADS)
Tompson, A. F. B.; Mellors, R. J.; Dyer, K.; Yang, X.; Chen, M.; Trainor Guitton, W.; Wagoner, J. L.; Ramirez, A. L.
2014-12-01
A stochastic joint inverse algorithm is used to analyze diverse geophysical and hydrologic data associated with a geothermal prospect. The approach uses a Markov Chain Monte Carlo (MCMC) global search algorithm to develop an ensemble of hydrothermal groundwater flow models that are most consistent with the observations. The algorithm utilizes an initial conceptual model descriptive of the structural (geology), parametric (permeability) and hydrothermal (saturation, temperature) characteristics of the geologic system. Initial (a priori) estimates of uncertainty in these characteristics are used to drive simulations of hydrothermal fluid flow and related geophysical processes in a large number of random realizations of the conceptual geothermal system spanning these uncertainties. The process seeks to improve the conceptual model by developing a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the a priori uncertainties. The algorithm has been tested on a geothermal prospect located at Superstition Mountain, California, and has been successful in creating a suite of models compatible with available temperature, surface resistivity, and magnetotelluric (MT) data. Although the MCMC method is highly flexible and capable of accommodating multiple and diverse datasets, a typical inversion may require the evaluation of thousands of possible model runs, whose sophistication and complexity may evolve with the magnitude of data considered. As a result, we are testing the use of sensitivity analyses to better identify critical uncertain variables, lower-order surrogate models to streamline computational costs, and value-of-information analyses to better assess the optimal use of related data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL
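The MCMC search can be sketched as a standard random-walk Metropolis loop over conceptual-model parameters, with the log-posterior built from the mismatch of the simulated temperature, resistivity and MT data (a generic sketch; the study's actual proposal scheme and misfit norms are more elaborate):

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng=None):
    """Random-walk Metropolis sampler; returns the chain of accepted models."""
    rng = np.random.default_rng(rng)
    x, lp = x0.copy(), log_post(x0)
    chain = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(len(x))  # perturb parameters
        lp_prop = log_post(prop)                       # runs the forward models
        if np.log(rng.random()) < lp_prop - lp:        # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)
```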
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Beller, Stephen; Nolet, Guust; Operto, Stephane; Brossier, Romain; Métivier, Ludovic; Paul, Anne; Virieux, Jean
2014-05-01
The current development of dense seismic arrays and high-performance computing makes it feasible today to apply full-waveform inversion (FWI) to teleseismic data for high-resolution lithospheric imaging. In the teleseismic configuration, the source is to first order a plane wave that impinges on the base of the lithospheric target located below the receiver array. In this setting, FWI aims to exploit not only the forward-scattered waves propagating up to the receivers but also second-order arrivals that are back-scattered from the free surface and the reflectors before being recorded at the surface. FWI requires full-wave modeling methods such as finite-difference or finite-element methods. In this framework, careful design of FWI algorithms is essential to mitigate as much as possible the computational burden of multi-source full-waveform modeling. In this presentation, we review some key specifications that might be considered for a versatile FWI implementation. First, an abstraction level between the forward and inverse problems allows different modeling engines to be interfaced with the inversion; this requires the subsurface meshes that are used to perform seismic modeling and to update the subsurface models during inversion to be fully independent, through back-and-forth projection processes. Second, the subsurface parameterization should be carefully chosen during multi-parameter FWI, as it controls the trade-off between parameters of different nature; a versatile FWI algorithm should be designed such that different subsurface parameterizations for the model update can be easily implemented. Third, the gradient of the misfit function should be computed as easily as possible with the adjoint-state method in a parallel environment. This first requires the gradient to be independent of the discretization method that is used to perform seismic modeling. Second, the incident and adjoint wavefields should be computed with the same numerical scheme, even if the forward problem
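The adjoint-state computation mentioned here can be summarized generically. For a discretized forward problem A(m)u = s and misfit J(u), the Lagrangian formalism gives (a standard statement of the method, not this group's specific notation):

```latex
\mathcal{L}(m,u,\lambda) = J(u) + \langle\,\lambda,\; A(m)\,u - s\,\rangle,\qquad
A(m)^{*}\lambda = -\frac{\partial J}{\partial u},\qquad
\nabla_m J = \Bigl\langle\,\lambda,\; \frac{\partial A}{\partial m}\,u\,\Bigr\rangle
```

One forward solve for u and one adjoint solve for λ yield the full gradient at a cost independent of the number of model parameters, which is also why the gradient can be formulated independently of the modeling discretization.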
NASA Astrophysics Data System (ADS)
Mordret, A.; Landès, M.; Shapiro, N. M.; Singh, S. C.; Roux, P.
2014-09-01
This study presents a depth inversion of Scholte wave group and phase velocity maps obtained from cross-correlation of 6.5 hr of noise data from the Valhall Life of Field Seismic network. More than 2,600,000 vertical-vertical component cross-correlations are computed from the 2320 available sensors, turning each sensor into a virtual source emitting Scholte waves. We used traditional straight-ray surface wave tomography to compute the group velocity maps. The phase velocity maps were computed using the Eikonal tomography method. The inversion of these maps at depth is done with the Neighbourhood Algorithm. To reduce the number of free parameters to invert, geological a priori information is used to propose a power-law 1-D velocity profile parametrization, extended with a Gaussian high-velocity layer where needed. This parametrization allowed us to create a high-resolution 3-D S-wave model of the first 600 m of the Valhall subsurface and to constrain the locations of geological structures at depth. These results have important implications for shear wave statics and for monitoring of seafloor subsidence due to oil extraction. The 3-D model could also be a good candidate as a starting model for full-waveform inversions.
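One plausible reading of that parametrization, with a power-law background plus an optional Gaussian high-velocity layer (parameter names are illustrative, not from the paper):

```python
import numpy as np

def vs_profile(z, v0, alpha, z_ref, amp=0.0, z0=0.0, width=1.0):
    """S-wave velocity vs depth: power-law background + optional Gaussian layer."""
    background = v0 * (np.maximum(z, 1e-3) / z_ref) ** alpha  # avoid z = 0
    layer = amp * np.exp(-0.5 * ((z - z0) / width) ** 2)
    return background + layer
```

Keeping the model to a handful of parameters (v0, alpha, plus three for the layer) is what makes the Neighbourhood Algorithm search tractable.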
NASA Astrophysics Data System (ADS)
Jesús Moral García, Francisco; Rebollo Castillo, Francisco Javier; Monteiro Santos, Fernando
2016-04-01
Maps of apparent electrical conductivity of the soil are commonly used in precision agriculture to indirectly characterize some important properties such as salinity, water, and clay content. Traditionally, these studies are made through an empirical relationship between apparent electrical conductivity and properties measured in soil samples collected at a few locations in the experimental area and at a few selected depths. Recently, some authors have used not the apparent conductivity values but the soil bulk conductivity (in 2D or 3D) calculated from the measured apparent electrical conductivity through the application of an inversion method. All the published works used data collected with electromagnetic (EM) instruments. We present new software to invert the apparent electrical conductivity data collected with the VERIS 3100 and 3150 (or the more recent version with three pairs of electrodes) using the 1D spatially constrained inversion method (1D SCI). The software allows the calculation of the distribution of the bulk electrical conductivity in the survey area down to a depth of 1 m. The algorithm is applied to experimental data, and correlations with clay and water content have been established using soil samples collected at boreholes. Keywords: digital soil mapping; inversion modelling; VERIS; soil apparent electrical conductivity.
Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.
2015-10-15
We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example from THT to demonstrate the resulting computational gains of this proposed method.
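The efficiency hinges on the shift invariance of Krylov subspaces: K_m(A, b) = K_m(A + sI, b), so one Arnoldi factorization can serve every Laplace-transform shift. A plain (unpreconditioned, non-flexible) FOM sketch of that idea, assuming no Arnoldi breakdown:

```python
import numpy as np

def arnoldi(A, b, m):
    """m-step Arnoldi factorization A V_m = V_{m+1} H_bar (modified Gram-Schmidt)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)   # assumed nonzero (no breakdown)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def solve_all_shifts(A, b, shifts, m=60):
    """Approximate (A + s*I)^{-1} b for every shift from ONE Krylov basis,
    using the FOM projection identity H_m(A + sI) = H_m(A) + s*I."""
    V, H = arnoldi(A, b, m)
    e1 = np.zeros(m)
    e1[0] = np.linalg.norm(b)
    return [V[:, :m] @ np.linalg.solve(H[:m, :] + s * np.eye(m), e1)
            for s in shifts]
```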
NASA Technical Reports Server (NTRS)
Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri
1992-01-01
The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results are estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.
Qiu, Xiao-han; Zhang, Yu-jun; Yin, Gao-fang; Shi, Chao-yi; Yu, Xiao-ya; Zhao, Nan-jing; Liu, Wen-qing
2015-08-01
The fast chlorophyll fluorescence induction curve contains rich information about photosynthesis. It can reflect various aspects of vegetation, such as survival status, pathological condition and physiological trends under stress. Through the acquisition of algal fluorescence and the induced optical signal, the fast phase of the chlorophyll fluorescence kinetics curve was fitted. Based on the least-squares fitting method, we introduced an adaptive minimum-error approaching method for fast multivariate nonlinear regression fitting of the chlorophyll fluorescence kinetics curve. We realized the inversion of the detailed parameters Fo (fixed fluorescence), Fm (maximum fluorescence yield) and σPSII (PSII functional absorption cross section), as well as the photosynthetic parameters of Chlorella pyrenoidosa. We also studied the physiological variation of Chlorella pyrenoidosa under Cu(2+) stress. PMID:26672292
NASA Technical Reports Server (NTRS)
Dubovik, O; Herman, M.; Holdak, A.; Lapyonok, T.; Tanre, D.; Deuze, J. L.; Ducos, F.; Sinyuk, A.
2011-01-01
The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as a statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation.
NASA Astrophysics Data System (ADS)
Eladj, Said; bansir, fateh; ouadfeul, sid Ali
2016-04-01
The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness compared to the total population, so a good chromosome has a greater opportunity to produce offspring than other chromosomes in the population. The advantage of the HGA/SAA combination is the use of a global search approach on a large population of local maxima to improve significantly the performance of the method. To define the parameters of the Hybrid Genetic Algorithm / Steepest Ascent Auto Statics (HGA/SAA) job, we evaluated, by testing in the first "steepest ascent" stage, the optimal parameters related to the data used: (1) the number of hill-climbing iterations, equal to 40, which defines the participation of the "SA" algorithm in this hybrid approach; (2) the minimum eigenvalue for SA, equal to 0.8, which is linked to the quality of the data and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations for the HGA/SAA. Using the values of the residual static corrections already calculated by the "SAA" and "CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of the inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimum number of generations of the hybrid genetic algorithm "HGA" required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons. This will allow
NASA Astrophysics Data System (ADS)
Hunziker, J.; Thorbecke, J.; Slob, E. C.
2014-12-01
Commonly, electromagnetic measurements for exploring and monitoring hydrocarbon reservoirs are inverted for the subsurface conductivity distribution by minimizing the difference between the actual data and a forward-modeled dataset. The convergence of the inversion process to the correct solution strongly depends on the shape of the solution space. Since this is a non-linear problem, there exist a multitude of minima, of which only the global one provides the correct conductivity values. To easily find the global minimum we desire it to have a broad cone of attraction, while it should also feature a very narrow bottom in order to obtain the subsurface conductivity with high resolution. In this study, we aim to determine which combination of input data corresponds to a favorable shape of the solution space. Since the solution space is N-dimensional, with N being the number of unknown subsurface parameters, plotting it is out of the question. In our approach, we use a genetic algorithm (Goldberg, 1989) to probe the solution space. Such algorithms have the advantage that every run of the same problem will end up at a different solution. Most of these solutions are expected to lie close to the global minimum. A situation where only a few runs end up in the global minimum indicates that the solution space consists of a lot of local minima or that the cone of attraction of the global minimum is small. If a lot of runs end up with a similar data misfit but with a large spread of the subsurface medium parameters in one or more directions, it can be concluded that the chosen data input is not sensitive with respect to those directions. Compared to the study of Hunziker et al. (2014), we also allow inversion for subsurface boundaries and include more combinations of input datasets. The results so far suggest that it is essential to include the magnetic field in the inversion process in order to find the anisotropic conductivity values. References: Goldberg, D. E., 1989. Genetic
NASA Astrophysics Data System (ADS)
Karthik, Victor U.; Sivasuthan, Sivamayam; Hoole, Samuel Ratnajeevan H.
2014-02-01
The computational algorithms for device synthesis and nondestructive evaluation (NDE) are often the same. In both we have a goal - a particular field configuration yielding the design performance in synthesis, or matching exterior measurements in NDE. The geometry of the design or the postulated interior defect is then computed. Several optimization methods are available for this. The most efficient, like conjugate gradients, are very complex to program because of the required derivative information. The least efficient zeroth-order algorithms, like the genetic algorithm, take much computational time but little programming effort. This paper reports launching a genetic algorithm kernel on thousands of compute unified device architecture (CUDA) threads, exploiting the NVIDIA graphics processing unit (GPU) architecture. The efficiency of parallelization, although below that on shared-memory supercomputer architectures, is quite effective in cutting the solution time down into the realm of the practicable. We carry this further into multi-physics electro-heat problems, where the parameters of description are in the electrical problem and the objective function is in the thermal problem. Indeed, this is where the derivative of the objective function in the heat problem with respect to the parameters in the electrical problem is the most difficult to compute for gradient methods, and where the genetic algorithm is most easily implemented.
NASA Astrophysics Data System (ADS)
Hou, W.; Wang, J.; Xu, X.; Ding, S.; Han, D.; Leitch, J. W.; Delker, T.; Chen, G.
2014-12-01
This paper presents an inversion method to retrieve aerosol properties from hyperspectral data collected by the airborne GeoTASO (Geostationary Trace gas and Aerosol Sensor Optimization) instrument. Mounted on the NASA HU-25C aircraft, GeoTASO measures radiation in 1000 spectral bands from 415 nm to 696 nm, and is a prototype for the TEMPO (Tropospheric Emissions: Monitoring of Pollution) instrument. It flew over Houston during September 2013 and gathered several days of airborne hyperspectral remote sensing data for our research. Our inversion method, which is based on optimization theory and differs from the traditional lookup table (LUT) retrieval technique, can simultaneously retrieve atmospheric aerosol parameters, such as the aerosol optical depth, as well as the surface reflectance albedo. To provide constraints on the hyperspectral surface reflectance in the inversion, we first conduct principal component analysis (PCA) on 46 reflectance spectra of various plants and vegetation to identify the most influential components. With the first six principal components and the corresponding calculated weight vector, the spectra can be reconstructed with an accuracy of 1%. UNL-VRTM (UNified Linearized Radiative Transfer Model) is employed for the forward model calculation, and its outputs include not only the Stokes 4-vector elements but also their sensitivities (Jacobians) with respect to the aerosol parameters and the principal components of the surface spectral reflectance. The inversion is carried out with the optimization algorithm L-BFGS-B (large-scale bound-constrained BFGS) and is conducted iteratively until the modeled spectral radiance fits the GeoTASO measurements. Finally, the retrieval results for aerosol optical depth and other aerosol parameters are compared against those retrieved by AERONET and/or in situ measurements during the aircraft campaign.
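The PCA constraint can be sketched in a few lines: build the component basis from the spectrum library, then represent any surface spectrum by six weights that become retrievable unknowns alongside the aerosol parameters (illustrative code; array names are assumptions):

```python
import numpy as np

def pca_basis(library, k=6):
    """Principal components of a (n_spectra, n_wavelengths) reflectance library."""
    mean = library.mean(axis=0)
    _, _, Vt = np.linalg.svd(library - mean, full_matrices=False)
    return mean, Vt[:k]                     # mean spectrum + first k components

def project(spectrum, mean, pcs):
    """Weight vector of a spectrum in the PC basis, plus its reconstruction."""
    w = pcs @ (spectrum - mean)
    return w, mean + w @ pcs
```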
Abel's Theorem Simplifies Reduction of Order
ERIC Educational Resources Information Center
Green, William R.
2011-01-01
We give an alternative to the standard method of reduction of order, in which one uses one solution of a homogeneous, linear, second-order differential equation to find a second, linearly independent solution. Our method, based on Abel's Theorem, is shorter, less complex, and extends to higher-order equations.
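For context, the result the article builds on can be stated in two lines: for y'' + p(x)y' + q(x)y = 0 with one known solution y_1, Abel's Theorem gives the Wronskian without knowing y_2, and the Wronskian definition then yields y_2 from a single first-order integration:

```latex
W(x) = y_1 y_2' - y_1' y_2 = C\, e^{-\int p(x)\,dx}
\quad\Longrightarrow\quad
y_2(x) = y_1(x) \int \frac{W(x)}{y_1(x)^{2}}\, dx
```

This bypasses the usual substitution y_2 = v(x) y_1 and the second-order equation it produces.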
Li, M.; Sun, Z.; Zeng, D.
1996-12-31
Linear stability theory is applied in the present paper to analyze the stability of the basic-state solution of thermocapillary convection in a liquid bridge with liquid encapsulation. Discretizing the linearized disturbance equations using a finite-difference approximation reduces the stability analysis to a complex generalized eigenvalue problem with a complicated banded matrix structure. The influence of the dimensionless parameters on the stability of the system is revealed by solving the complex generalized eigenvalue problem by inverse iteration. The results provide a theoretical and numerical foundation for crystal growth by the float-zone method and for other engineering applications.
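Inverse (shift-and-invert) iteration for a generalized eigenproblem A x = λ B x targets the eigenvalue nearest a chosen shift. A dense-matrix sketch (a real implementation would exploit the banded structure with a banded factorization):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shift_invert_iteration(A, B, sigma, iters=50):
    """Converges to the eigenpair of A x = lam B x nearest the complex shift sigma."""
    rng = np.random.default_rng(0)
    n = A.shape[0]
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    factor = lu_factor(A - sigma * B)       # factor once, reuse every sweep
    for _ in range(iters):
        x = lu_solve(factor, B @ x)         # x <- (A - sigma B)^{-1} B x
        x /= np.linalg.norm(x)
    lam = (x.conj() @ (A @ x)) / (x.conj() @ (B @ x))   # Rayleigh quotient
    return lam, x
```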
NASA Astrophysics Data System (ADS)
Kravtsov, Yu. A.; Chrzanowski, J.; Mazon, D.
2011-06-01
A new procedure for plasma polarimetry data inversion is suggested, which fits a two-parameter knowledge-based plasma model to the measured parameters (azimuth and ellipticity angles) of the polarization ellipse. The knowledge-based model uses the magnetic field and electron density profiles obtained from magnetic measurements and LIDAR Thomson scattering data. In contrast to traditional polarimetry, polarization evolution along the ray is determined on the basis of the angular variables technique (AVT). The paper contains a few examples of numerical solutions of these equations, which are applicable in conditions where the Faraday and Cotton-Mouton effects are simultaneously strong.
NASA Astrophysics Data System (ADS)
Li, Dongxing; Zhao, Yan; Dong, Xu
2008-03-01
In general image restoration, the point spread function (PSF) of the imaging system and the observation noise are known a priori. The aero-optics effect arises when objects (e.g., missiles, aircraft) fly at high or supersonic speed. In this situation, the PSF and the observation noise are unknown a priori, and the identification and restoration of turbulence-degraded images is a challenging problem. An algorithm based on nonnegativity and support constraints recursive inverse filtering (NAS-RIF) is proposed to identify and restore turbulence-degraded images. The NAS-RIF technique applies to situations in which the scene consists of a finite-support object against a uniformly black, grey, or white background. The restoration procedure of NAS-RIF involves recursive filtering of the blurred image to minimize a convex cost function. In the algorithm proposed in this paper, the turbulence-degraded image is filtered before it passes through the recursive filter. A conjugate gradient minimization routine was used for minimization of the NAS-RIF cost function. The NAS-RIF-based algorithm is used to identify and restore wind-tunnel test images. The experimental results show that the restoration effect is clearly improved.
Li, Mao; Wittek, Adam; Miller, Karol
2014-01-01
Biomechanical modeling methods can be used to predict deformations for medical image registration and particularly, they are very effective for whole-body computed tomography (CT) image registration because differences between the source and target images caused by complex articulated motions and soft tissues deformations are very large. The biomechanics-based image registration method needs to deform the source images using the deformation field predicted by finite element models (FEMs). In practice, the global and local coordinate systems are used in finite element analysis. This involves the transformation of coordinates from the global coordinate system to the local coordinate system when calculating the global coordinates of image voxels for warping images. In this paper, we present an efficient numerical inverse isoparametric mapping algorithm to calculate the local coordinates of arbitrary points within the eight-noded hexahedral finite element. Verification of the algorithm for a nonparallelepiped hexahedral element confirms its accuracy, fast convergence, and efficiency. The algorithm's application in warping of the whole-body CT using the deformation field predicted by means of a biomechanical FEM confirms its reliability in the context of whole-body CT registration. PMID:24828796
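For reference, the standard Newton iteration for this inverse isoparametric mapping looks as follows (a common baseline stated under trilinear shape functions; not necessarily the paper's own algorithm):

```python
import numpy as np

# corner signs of the 8-node hexahedron on the reference cube [-1, 1]^3
SIGNS = np.array([[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
                  [-1, -1,  1], [1, -1,  1], [1, 1,  1], [-1, 1,  1]])

def shape(xi):
    """Trilinear shape functions N_i(xi), shape (8,)."""
    return 0.125 * np.prod(1 + SIGNS * xi, axis=1)

def dshape(xi):
    """Derivatives dN_i/dxi_a, shape (8, 3)."""
    d = np.empty((8, 3))
    for a in range(3):
        t = 1 + SIGNS * xi
        t[:, a] = SIGNS[:, a]
        d[:, a] = 0.125 * np.prod(t, axis=1)
    return d

def global_to_local(nodes, x, tol=1e-10, iters=20):
    """Newton iteration for xi such that sum_i N_i(xi) X_i = x.

    nodes: (8, 3) global corner coordinates X_i; x: (3,) query point.
    """
    xi = np.zeros(3)
    for _ in range(iters):
        r = shape(xi) @ nodes - x        # residual in global coordinates
        if np.linalg.norm(r) < tol:
            break
        J = nodes.T @ dshape(xi)         # Jacobian dx/dxi, 3x3
        xi -= np.linalg.solve(J, r)
    return xi
```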
Bracarda, Sergio; Sisani, Michele; Marrocolo, Francesca; Hamzaj, Alketa; del Buono, Sabrina; De Simone, Valeria
2014-03-01
Metastatic renal cell carcinoma (mRCC), considered almost an orphan disease only six years ago, appears today to be a very dynamic pathology. The recent switch to the current crowded scenario, defined by seven active drugs, has left physicians uncertain, owing to difficulties in defining the best possible treatment strategy. This situation is mainly related to the absence of predictive biomarkers for any available or new therapy. This issue, combined with the near absence of published head-to-head studies, paints a complex picture. In order to solve this dilemma, decisional algorithms tailored to drug efficacy data and patient profile are recognized as very useful tools. These approaches try to select the best therapy suitable for every patient profile. By contrast, the present review has the "goal" of suggesting a reverse approach: based on the pivotal studies, post-marketing surveillance reports and our experience, we defined the polarizing toxicity (the most frequent toxicity in light of clinical experience) for each therapy, creating a new algorithm able to identify the patient profile, mainly comorbidities, unquestionably unsuitable for each single agent presently available for either first- or second-line therapy. The GOAL inverse decision-making algorithm, proposed at the end of this review, allows selection of the best therapy for mRCC while reducing the risk of limiting toxicities. PMID:24309065
NASA Astrophysics Data System (ADS)
Yoon, Kyung-Beom; Park, Won-Hee
2015-04-01
The convective heat transfer coefficient and surface emissivity before and after flame occurrence on a wood specimen surface, together with the flame heat flux, were estimated using the repulsive particle swarm optimization algorithm and cone heater test results. The cone heater specified in the ISO 5660 standards was used, and six cone heater heat fluxes were tested. Preservative-treated Douglas fir 21 mm in thickness was used as the wood specimen. The study confirmed that the specimen surface temperature calculated from the convective heat transfer coefficient, surface emissivity and flame heat flux estimated by the repulsive particle swarm optimization algorithm was consistent with the measured temperature. Considering the measurement errors in the specimen surface temperature, the applicability of the optimization method considered in this study was evaluated.
NASA Astrophysics Data System (ADS)
Lahanas, Michael; Schreibmann, Eduard; Baltas, Dimos
2003-09-01
We consider the behaviour of the limited memory L-BFGS algorithm as a representative constraint-free gradient-based algorithm which is used for multiobjective (MO) dose optimization for intensity modulated radiotherapy (IMRT). Using a parameter transformation, the positivity constraint problem of negative beam fluences is entirely eliminated: a feature which to date has not been fully understood by all investigators. We analyse the global convergence properties of L-BFGS by searching for the existence and the influence of possible local minima. With a fast simulated annealing (FSA) algorithm we examine whether the L-BFGS solutions are globally Pareto optimal. The three examples used in our analysis are a brain tumour, a prostate tumour and a test case with a C-shaped PTV. In 1% of the optimizations global convergence is violated. A simple mechanism practically eliminates the influence of this failure and the obtained solutions are globally optimal. A single-objective dose optimization requires less than 4 s for 5400 parameters and 40 000 sampling points. The elimination of the problem of negative beam fluences and the high computational speed permit constraint-free gradient-based optimization algorithms to be used for MO dose optimization. In this situation, a representative spectrum of possible solutions is obtained which contains information such as the trade-off between the objectives and range of dose values. Using simple decision making tools the best of all the possible solutions can be chosen. We perform an MO dose optimization for the three examples and compare the spectra of solutions, firstly using recommended critical dose values for the organs at risk and secondly, setting these dose values to zero.
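The positivity-eliminating transformation mentioned above is easy to demonstrate: writing each beam fluence as the square of an unconstrained variable lets a quasi-Newton method run without bounds. The dose matrix and prescription below are random stand-ins for a real IMRT objective, not the paper's cases.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.random((40, 10))          # toy dose-deposition matrix
d_presc = np.ones(40)             # toy prescribed dose at sampling points

def cost(t):
    w = t**2                      # transformation: fluences w >= 0 always
    r = A @ w - d_presc
    return r @ r

def grad(t):
    w = t**2                      # chain rule through dw/dt = 2t
    return 4.0 * t * (A.T @ (A @ w - d_presc))

res = minimize(cost, np.ones(10), jac=grad, method='L-BFGS-B')
fluences = res.x**2               # nonnegative by construction
```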
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from data measured on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction), are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
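For orientation, here is a compact discretization of the Abel-type problem g(y) = 2 ∫ f(r) r dr / sqrt(r^2 - y^2) on an offset grid, stabilized with plain Tikhonov regularization. This is a generic reference scheme on assumed toy data, not one of the four methods compared in the paper.

```python
import numpy as np

R, n = 1.0, 80
dr = R / n
r = (np.arange(n) + 0.5) * dr            # radial midpoints
y = np.arange(n) * dr                    # projection ordinates (offset grid)

# midpoint-rule forward Abel operator; the offset grid avoids the
# square-root singularity at r = y
K = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):                # only shells with r_j > y_i
        K[i, j] = 2.0 * r[j] * dr / np.sqrt(r[j]**2 - y[i]**2)

f_true = np.exp(-8.0 * r**2)             # radial profile to recover
g = K @ f_true
g += 1e-3 * np.random.default_rng(2).standard_normal(n)   # added noise

lam = 1e-3                               # Tikhonov parameter
f_rec = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)
```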
Neural network-based inversion algorithms in magnetic flux leakage nondestructive evaluation
NASA Astrophysics Data System (ADS)
Ramuhalli, Pradeep; Udpa, Lalita; Udpa, Satish S.
2003-05-01
Magnetic flux leakage (MFL) methods are commonly used in the nondestructive evaluation (NDE) of ferromagnetic materials. An important problem in MFL NDE is the determination of flaw parameters such as the flaw length, depth, and shape (profile) from the measured values of the flux density B. Commonly used methods use a forward model in a loop to determine B for a given set of flaw parameters. This approach iteratively adjusts the flaw parameters to minimize the error between the measured and predicted values of B. This article proposes the use of neural networks as forward models. The proposed approach uses two neural networks in feedback configuration—a forward network and an inverse network. The second network is used to predict the profile given the measured value of B, and acts to constrain the solution space. Results of applying these methods to MFL data obtained from a two-dimensional finite-element model, with rectangular flaws of various dimensions, are presented.
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
NASA Astrophysics Data System (ADS)
Bastani, Mehrdad; Kholghi, Majid; Rakhshandehroo, Gholam Reza
2010-08-01
Flow and mass transport parameter estimation was done by creating an inverse model of a seawater intrusion system using a genetic algorithm (GA) method as the optimization procedure. Firstly, the SEAWAT code was used for the forward solution part and then a program was written in MATLAB for coupling the forward and inverse processes. The auto-calibration objective function was defined with the root mean square errors (RMSE) between the observed and the simulated values. A simple GA was used to minimize the RMSE criterion. The methodology was applied to a coastal aquifer with heterogeneous formations in a semi-arid area near salty Tashk Lake (electrical conductivity 61,420 µS/cm), Fars province, Iran. In the last two decades, the overexploitation of groundwater has caused a major water level drawdown and, consequently, salt-water intrusion. Firstly, flow and transport parameters (hydraulic conductivity, porosity, specific storage coefficient and longitudinal dispersivity) were estimated simultaneously in steady-state and, secondly, in the developed code, these results were used as initial values of the parameters in transient-state. Results show a good match for observed and simulated data. It can be concluded that GA is a helpful tool for automatic calibration of variable density fluid systems such as seawater intrusion cases.
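A minimal sketch of the GA auto-calibration loop under stated assumptions: the SEAWAT/MT3DMS forward run is replaced by a two-parameter analytic toy, and truncation selection, blend crossover and Gaussian mutation are one simple choice among many GA variants.

```python
import numpy as np

def forward(params):
    """Stand-in for the SEAWAT forward run (hypothetical analytic toy)."""
    k, por = params
    x = np.linspace(0.0, 1.0, 20)
    return k * np.exp(-x / por)

def rmse(params, obs):
    return np.sqrt(np.mean((forward(params) - obs)**2))

rng = np.random.default_rng(3)
obs = forward((2.0, 0.3)) + 0.01 * rng.standard_normal(20)   # synthetic data
bounds = np.array([[0.1, 10.0], [0.05, 0.5]])                # K, porosity

pop = rng.uniform(bounds[:, 0], bounds[:, 1], (40, 2))
for gen in range(100):                        # simple generational GA
    fit = np.array([rmse(p, obs) for p in pop])
    parents = pop[np.argsort(fit)[:20]]       # truncation selection
    a, b = (parents[rng.integers(0, 20, 40)] for _ in range(2))
    alpha = rng.random((40, 1))
    kids = alpha * a + (1.0 - alpha) * b      # blend crossover
    kids += 0.02 * (bounds[:, 1] - bounds[:, 0]) * rng.standard_normal((40, 2))
    pop = np.clip(kids, bounds[:, 0], bounds[:, 1])

best = pop[np.argmin([rmse(p, obs) for p in pop])]
print(best)                                   # should approach (2.0, 0.3)
```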
The clusters Abell 222 and Abell 223: a multi-wavelength view
NASA Astrophysics Data System (ADS)
Durret, F.; Laganá, T. F.; Adami, C.; Bertin, E.
2010-07-01
Context. The Abell 222 and 223 clusters are located at an average redshift z ~ 0.21 and are separated by 0.26 deg. Signatures of mergers have previously been found in these clusters, both in X-rays and at optical wavelengths, thus motivating our study. In X-rays, they are relatively bright, and Abell 223 shows a double structure. A filament has also been detected between the clusters both at optical and X-ray wavelengths. Aims: We analyse the optical properties of these two clusters based on deep imaging in two bands, derive their galaxy luminosity functions (GLFs) and correlate these properties with X-ray characteristics derived from XMM-Newton data. Methods: The optical part of our study is based on archive images obtained with the CFHT Megaprime/Megacam camera, covering a total region of about 1 deg^2, or 12.3 × 12.3 Mpc^2 at a redshift of 0.21. The X-ray analysis is based on archive XMM-Newton images. Results: The GLFs of Abell 222 in the g' and r' bands are well fit by a Schechter function; the GLF is steeper in r' than in g'. For Abell 223, the GLFs in both bands require a second component at bright magnitudes, added to a Schechter function; they are similar in both bands. The Serna & Gerbal method separates the two clusters well. No obvious filamentary structures are detected at very large scales around the clusters, but a third cluster at the same redshift, Abell 209, is located at a projected distance of 19.2 Mpc. X-ray temperature and metallicity maps reveal that the temperature and metallicity of the X-ray gas are quite homogeneous in Abell 222, while they are very perturbed in Abell 223. Conclusions: The Abell 222/Abell 223 system is complex. The two clusters that form this structure present very different dynamical states. Abell 222 is a smaller, less massive and almost isothermal cluster. On the other hand, Abell 223 is more massive and has most probably been crossed by a subcluster on its way to the northeast. As a consequence, the
NASA Astrophysics Data System (ADS)
Palacios, S. L.; Schafer, C. B.; Broughton, J.; Guild, L. S.; Kudela, R. M.
2013-12-01
There is a need in the Biological Oceanography community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand energy flow through ecosystems, to track the fate of carbon in the ocean, and to detect and monitor-for harmful algal blooms (HABs). The ocean color community has responded to this demand with the development of phytoplankton functional type (PFT) discrimination algorithms. These PFT algorithms fall into one of three categories depending on the science application: size-based, biogeochemical function, and taxonomy. The new PFT algorithm Phytoplankton Detection with Optics (PHYDOTax) is an inversion algorithm that discriminates taxon-specific biomass to differentiate among six taxa found in the California Current System: diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes, and cyanophytes. PHYDOTax was developed and validated in Monterey Bay, CA for the high resolution imaging spectrometer, Spectroscopic Aerial Mapping System with On-board Navigation (SAMSON - 3.5 nm resolution). PHYDOTax exploits the high spectral resolution of an imaging spectrometer and the improved spatial resolution that airborne data provides for coastal areas. The objective of this study was to apply PHYDOTax to a relatively lower resolution imaging spectrometer to test the algorithm's sensitivity to atmospheric correction, to evaluate capability with other sensors, and to determine if down-sampling spectral resolution would degrade its ability to discriminate among phytoplankton taxa. This study is a part of the larger Hyperspectral Infrared Imager (HyspIRI) airborne simulation campaign which is collecting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery aboard NASA's ER-2 aircraft during three seasons in each of two years over terrestrial and marine targets in California. Our aquatic component seeks to develop and test algorithms to retrieve water quality properties (e.g. HABs and river plumes) in both marine and in
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-12-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ≫1 and |m-1|≪1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as versatile distribution functions to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available.
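The ADA forward model quoted above has a closed form for a non-absorbing soft sphere (van de Hulst): Qext(ρ) = 2 - (4/ρ) sin ρ + (4/ρ^2)(1 - cos ρ) with ρ = 2χ(m - 1). A short sketch follows, with an illustrative bimodal size distribution and wavelengths that are assumptions, not the paper's data.

```python
import numpy as np

def q_ext_ada(diameter, wavelength, m):
    """van de Hulst anomalous-diffraction extinction efficiency for an
    optically soft, non-absorbing sphere (chi >> 1, |m - 1| << 1)."""
    chi = np.pi * diameter / wavelength          # size parameter
    rho = 2.0 * chi * (m - 1.0)                  # phase-shift parameter
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

# spectral extinction of an illustrative bimodal PSD (toy numbers)
d = np.linspace(0.5, 10.0, 200)                  # diameters, micrometres
n_d = np.exp(-np.log(d / 1.0)**2) + 0.3 * np.exp(-np.log(d / 4.0)**2)
for lam in (0.45, 0.55, 0.67):                   # wavelengths, micrometres
    tau = np.trapz(q_ext_ada(d, lam, 1.33) * (np.pi / 4) * d**2 * n_d, d)
    print(f"lambda = {lam:.2f} um, extinction ~ {tau:.3f}")
```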
NASA Astrophysics Data System (ADS)
Zhao, Jingtao; Peng, Suping; Du, Wenfeng
2016-02-01
We consider a sparsity-constrained inversion method for detecting seismic small-scale discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these seismic small-scale discontinuities are hard to identify using currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed, and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smooth features, we propose an effective L2-L0 norm model to improve their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build an L2-L0 norm model. In searching for its solution, an efficient and fast convergent penalty decomposition method is employed. The proposed method achieves a significant improvement in enhancing seismic small-scale discontinuities. A numerical experiment and a field data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.
NASA Astrophysics Data System (ADS)
Kozlovskaya, Elena
2000-06-01
This paper presents an inversion algorithm that can be used to solve a wide range of geophysical nonlinear inverse problems. The algorithm is based upon the principle of a direct search for the optimal solution in the parameter space. The main difference of the algorithm from existing techniques such as genetic algorithms and simulated annealing is that the optimum search is performed under the control of a priori information formulated as a fuzzy set in the parameter space. In such a formulation the inverse problem becomes a multiobjective optimization problem with two objective functions: one is the membership function of the fuzzy set of feasible solutions, the other is the conditional probability density function of the observed data. The solution to such a problem is a set of Pareto optimal solutions that is constructed in the parameter space by a three-stage search procedure. The advantage of the proposed technique is that it provides the possibility of incorporating a wide range of non-probabilistic a priori information into the inversion procedure and can be applied to the solution of strongly nonlinear problems. It allows one to decrease the number of forward-problem calculations due to selective sampling of trial points from the parameter space. The properties of the algorithm are illustrated with an application to a local earthquake hypocentre location problem with synthetic and real data.
NASA Astrophysics Data System (ADS)
Hansen, John-Are; Bergh, Steffen G.; Osmundsen, Per Terje; Redfield, Tim F.
2015-01-01
We propose a new method for stress inversion and separation of principal stress states from heterogeneous fault-slip data. The method is semi-automatic, and is based on the moment method of stress inversion (Fry 1999) in combination with the objective function algorithm (OFA) for stress separation (Shan et al. 2003). In the presented routine we randomly partition the heterogeneous fault-slip dataset into subsets ranging between one and six. The number of subsets K represents the number of possible mixed stress states in the fault-slip dataset. For each partition number K, we run the OFA 1000 times. Following this we plot and contour the principal stress axes, corresponding to the minimum value of the objective function for each run, in a stereonet. By evaluating how solution clusters of principal stress axes change with increasing number of subsets K, we are able to determine the number of mixed stress states and their optimal solutions for heterogeneous fault-slip datasets. While the number of subsets is underestimated, solution clusters of principal stress axes represent average stress states. However, once the correct number of subsets is reached, solution clusters align with the slip-generating principal stress axes. The solution clusters then become stable, and overestimating the number of subsets does not significantly alter their orientation. The partition number K when stability is obtained thus determines the number of mixed stress states in the heterogeneous dataset, while the corresponding highest density solution clusters give the best estimate of the slip-generating principal stress axes and corresponding stress shape ratios. The inversion routine is tested and confirmed using synthetic data and fault-slip data from the Gullkista fault in Northern Norway. Because the stress calculation is based on the moment method, the inversion routine is insensitive to the correct assessment of slip sense, and only requires the slip vector and orientation of the
SelInv - An Algorithm for Selected Inversion of a Sparse Symmetric Matrix
Lin, Lin; Yang, Chao; Meza, Juan C.; Lu, Jianfeng; Ying, Lexing; E, Weinan
2009-10-16
We describe an efficient implementation of an algorithm for computing selected elements of a general sparse symmetric matrix A that can be decomposed as A = LDL^T, where L is lower triangular and D is diagonal. Our implementation, which is called SelInv, is built on top of an efficient supernodal left-looking LDL^T factorization of A. We discuss how computational efficiency can be gained by making use of a relative index array to handle indirect addressing. We report the performance of SelInv on a collection of sparse matrices of various sizes and nonzero structures. We also demonstrate how SelInv can be used in electronic structure calculations.
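SelInv's supernodal recursion is beyond a short sketch, but the quantity it returns is easy to state in code: selected entries (here the diagonal) of A^-1 for a symmetric A = L D L^T. The dense baseline below computes them by explicit solves, which is exactly the per-entry cost that the selected-inversion recursion on L and D avoids for sparse matrices; the test matrix is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 50))
A = 0.5 * (A + A.T)                  # symmetric, possibly indefinite

L, D, perm = ldl(A)                  # A = L D L^T (SciPy returns a
                                     # permuted lower factor and block-D)

# naive "selected inversion" of the diagonal via one solve per entry;
# SelInv obtains the same numbers directly from L and D
diag_inv = np.array([np.linalg.solve(A, np.eye(50)[:, i])[i]
                     for i in range(50)])
assert np.allclose(diag_inv, np.diag(np.linalg.inv(A)))
```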
NASA Astrophysics Data System (ADS)
Gurarslan, Gurhan; Karahan, Halil
2015-09-01
In this study, an accurate model was developed for solving problems of groundwater-pollution-source identification. In the developed model, the numerical simulations of flow and pollutant transport in groundwater were carried out using MODFLOW and MT3DMS software. The optimization processes were carried out using a differential evolution algorithm. The performance of the developed model was tested on two hypothetical aquifer models using real and noisy observation data. In the first model, the release histories of the pollution sources were determined assuming that the numbers, locations and active stress periods of the sources are known. In the second model, the release histories of the pollution sources were determined assuming that there is no information on the sources. The results obtained by the developed model were found to be better than those reported in literature.
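A minimal sketch of the identification loop, assuming linear transport so that the MODFLOW/MT3DMS run can be replaced by a convolution of the release history with a unit-response curve (both hypothetical); SciPy's differential evolution plays the role of the optimizer.

```python
import numpy as np
from scipy.optimize import differential_evolution

# toy linear transport: breakthrough = release history (*) unit response
t = np.arange(20)
unit = 0.3 * np.exp(-0.3 * t)                    # stand-in transfer function
true_release = np.array([0.0, 5.0, 8.0, 3.0, 0.0, 0.0])

def forward(release):
    return np.convolve(release, unit)[:len(t)]   # stand-in for MT3DMS run

rng = np.random.default_rng(5)
obs = forward(true_release) + 0.05 * rng.standard_normal(len(t))

def misfit(release):
    return np.sqrt(np.mean((forward(release) - obs)**2))  # RMSE objective

result = differential_evolution(misfit, bounds=[(0.0, 10.0)] * 6, seed=5,
                                maxiter=300, tol=1e-8)
print(result.x.round(2))     # recovered release per stress period
```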
NASA Astrophysics Data System (ADS)
Liu, Yi; Yin, Zengshan; Yang, Zhongdong; Zheng, Yuquan; Yan, Changxiang; Tian, Xiangjun; Yang, Dongxu
2016-04-01
After five years of development, the Chinese carbon dioxide observation satellite (TanSat), China's first scientific experimental CO2 satellite, has stepped into the pre-launch phase. The characteristics of the carbon dioxide spectrometer have been optimized during laboratory testing and calibration. Radiometric calibration shows an average SNR of 440 (O2A 0.76 um band), 300 (CO2 1.61 um band) and 180 (CO2 2.06 um band) under typical radiance conditions. The instrument line shape was calibrated automatically using a well-designed testing system with laser control and recording. After a series of laboratory tests and calibrations, the instrumental performance meets the design requirements. TanSat will be launched in August 2016. The TanSat XCO2 retrieval algorithm applies optimal estimation theory in a full-physics approach, simulating radiative transfer in the atmosphere. Gas absorption, aerosol and cirrus scattering, and surface reflectance, together with wavelength dispersion, are considered in the inversion to better correct the interference errors in XCO2. In order to simulate radiative transfer precisely and efficiently, we developed a fast vector radiative transfer simulation method. Application of the TanSat algorithm to GOSAT observations (ATANGO) is an appropriate way to evaluate the performance of the algorithm. Validated against TCCON measurements, the ATANGO product achieves a 1.5 ppm precision. A Chinese carbon cycle data-assimilation system, Tan-Tracker, has been developed based on the atmospheric chemical transport model GEOS-Chem. Tan-Tracker is a dual-pass data-assimilation system in which both CO2 concentrations and CO2 fluxes are simultaneously assimilated from atmospheric observations. A validation network has been established around China to support a series of Chinese CO2 satellites, including 3 IFS-125HR and 4 Optical Spectrum Analyzers, etc.
A Strong Merger Shock in Abell 665
NASA Astrophysics Data System (ADS)
Dasadia, S.; Sun, M.; Sarazin, C.; Morandi, A.; Markevitch, M.; Wik, D.; Feretti, L.; Giovannini, G.; Govoni, F.; Vacca, V.
2016-03-01
Deep (103 ks) Chandra observations of Abell 665 have revealed rich structures in this merging galaxy cluster, including a strong shock and two cold fronts. The newly discovered shock has a Mach number of M = 3.0 ± 0.6, propagating in front of a cold disrupted cloud. This makes Abell 665 the second cluster, after the Bullet cluster, where a strong merger shock of M ≈ 3 has been detected. The shock velocity from jump conditions is consistent with (2.7 ± 0.7) × 10^3 km s^-1. The new data also reveal a prominent southern cold front with potentially heated gas ahead of it. Abell 665 also hosts a giant radio halo. There is a hint of diffuse radio emission extending to the shock at the north, which needs to be examined with better radio data. This new strong shock provides a great opportunity to study the re-acceleration model with the X-ray and radio data combined.
NASA Technical Reports Server (NTRS)
Kurtz, M. J.; Huchra, J. P.; Beers, T. C.; Geller, M. J.; Gioia, I. M.
1985-01-01
X-ray and optical observations of the cluster of galaxies Abell 744 are presented. The X-ray flux (assuming H(0) = 100 km/s per Mpc) is about 9 × 10^42 erg/s. The X-ray source is extended, but shows no other structure. Photographic photometry (in Kron-Cousins R), calibrated by deep CCD frames, is presented for all galaxies brighter than 19th magnitude within 0.75 Mpc of the cluster center. The luminosity function is normal, and the isopleths show little evidence of substructure near the cluster center. The cluster has a dominant central galaxy, which is classified as a normal brightest-cluster elliptical on the basis of its luminosity profile. New redshifts were obtained for 26 galaxies in the vicinity of the cluster center; 20 appear to be cluster members. The spatial distribution of redshifts is peculiar; the dispersion within the 150 kpc core radius is much greater than outside. Abell 744 is similar to the nearby cluster Abell 1060.
ROSAT HRI images of Abell 85 and Abell 496: Evidence for inhomogeneities in cooling flows
NASA Technical Reports Server (NTRS)
Prestwich, Andrea H.; Guimond, Stephen J.; Luginbuhl, Christian; Joy, Marshall
1994-01-01
We present ROSAT HRI images of two clusters of galaxies with cooling flows, Abell 496 and Abell 85. In these clusters, x-ray emission on small scales above the general cluster emission is significant at the 3 sigma level. There is no evidence for optical counterparts. The enhancements may be associated with lumps of gas at a lower temperature and higher density than the ambient medium, or hotter, denser gas perhaps compressed by magnetic fields. These observations can be used to test models of how thermal instabilities form and evolve in cooling flows.
An improved inversion for FORMOSAT-3/COSMIC ionosphere electron density profiles
NASA Astrophysics Data System (ADS)
Pedatella, N. M.; Yue, X.; Schreiner, W. S.
2015-10-01
An improved method to retrieve electron density profiles from Global Positioning System (GPS) radio occultation (RO) data is presented and applied to Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) observations. The improved inversion uses a monthly grid of COSMIC F region peak densities (NmF2), which are obtained via the standard Abel inversion, to aid the Abel inversion by providing information on the horizontal gradients in the ionosphere. This lessens the impact of ionospheric gradients on the retrieval of GPS RO electron density profiles, reducing the dominant error source in the standard Abel inversion. Results are presented that demonstrate the NmF2 aided retrieval significantly improves the quality of the COSMIC electron density profiles. Improvements are most notable at E region altitudes, where the improved inversion reduces the artificial plasma cave that is generated by the Abel inversion spherical symmetry assumption at low latitudes during the daytime. Occurrence of unphysical negative electron densities at E region altitudes is also reduced. Furthermore, the NmF2 aided inversion has a positive impact at F region altitudes, where it results in a more distinct equatorial ionization anomaly. COSMIC electron density profiles inverted using our new approach are currently available through the University Corporation for Atmospheric Research COSMIC Data Analysis and Archive Center. Owing to the significant improvement in the results, COSMIC data users are encouraged to use electron density profiles based on the improved inversion rather than those inverted by the standard Abel inversion.
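For reference, the standard Abel inversion that the improved NmF2-aided retrieval builds on can be written as an onion-peeling back-substitution under the spherical-symmetry assumption; the shell geometry and the Chapman-like profile below are illustrative, and the horizontal-gradient correction described in the paper is deliberately absent.

```python
import numpy as np

# onion-peeling Abel inversion: electron density Ne(r) from slant TEC(y)
# under spherical symmetry, solved shell by shell from the top down
Re, n = 6371.0, 60
rb = np.linspace(Re + 100, Re + 800, n + 1)      # shell boundaries, km
rt = rb[:-1]                                     # tangent radii of the rays

L = np.zeros((n, n))                             # chord of ray i in shell j
for i in range(n):
    for j in range(i, n):
        L[i, j] = 2.0 * (np.sqrt(rb[j + 1]**2 - rt[i]**2)
                         - np.sqrt(rb[j]**2 - rt[i]**2))

ne_true = np.exp(-0.5 * ((rt - Re - 350) / 80)**2)   # Chapman-like peak
tec = L @ ne_true                                # synthetic occultation data

ne = np.zeros(n)                                 # back-substitute, top down
for i in range(n - 1, -1, -1):
    ne[i] = (tec[i] - L[i, i + 1:] @ ne[i + 1:]) / L[i, i]
```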
The cluster of galaxies Abell 2670
NASA Astrophysics Data System (ADS)
Shambrook, Anouk Aimee
2001-10-01
The rich cluster of galaxies Abell 2670 provides a laboratory in which to observe how galaxy properties change as a function of environment. Though initially considered a relaxed cluster, Abell 2670 exhibits substructure in optical, x-ray, and radio 21 cm H I line data. The cluster hosts a plethora of elliptical galaxies as well as spiral galaxies including galaxies rich in cold gas (some with more than 10^10 Msolar in H I), and K+A galaxies. A group of galaxies rich in cold gas may be entering the cluster environment for the first time, making Abell 2670 a valuable case study. This thesis presents a catalog of UBVRI colors for objects located in an area 1° x 1° centered on Abell 2670, based on observations using the CTIO 0.9-m Schmidt telescope. Follow up observations using the Keck II 10-m and the CTIO 4-m telescopes will enable the classification of galaxy morphology. Using evolutionary synthesis models by Poggianti and Barbaro, a photometric redshift analysis yields a best-fit redshift and spectral energy distribution for each galaxy. The results are checked with galaxies observed by Sharples, Ellis, and Gray, which are known cluster members. Radial density profiles of cluster and field galaxies are modeled by King and uniform distributions respectively. A set of simulated galaxies, drawn from a combination of the two models, is compared to the data; for each redshift classification (based on the photometric redshift analysis), Kolmogorov-Smirnov tests characterize the probable fraction of cluster galaxies relative to the total. For the galaxies classified by the photometric redshift analysis as E, Sa, and Sc, an overdensity value is calculated, quantifying the density-morphology relation for this sample. A detailed study of this low redshift (z = 0.076) cluster may inform future studies of high redshift clusters. The optical UBVRI catalog is an important part of a multiwavelength set of data on Abell 2670 which in the future will probably lend itself well
Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric. E-mail: Eric.Vigneault@chuq.qc.ca
2007-02-01
Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤6, initial prostate specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast simulated annealing inverse planning algorithm with high-activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low-risk group (initial PSA ≤10, Gleason ≤6, and stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5%, with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs), 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free, 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: Inverse planning with fast simulated annealing and high-activity seeds gives a 5-year bFFS that is comparable with the best published series, with a low toxicity profile.
Tauberian theorems for Abel summability of sequences of fuzzy numbers
NASA Astrophysics Data System (ADS)
Yavuz, Enes; Çoşkun, Hüsamettin
2015-09-01
We give some conditions under which Abel summable sequences of fuzzy numbers are convergent. As corollaries we obtain the results given in [E. Yavuz, Ö. Talo, Abel summability of sequences of fuzzy numbers, Soft computing 2014, doi: 10.1007/s00500-014-1563-7].
Semenov, Alexander; Zaikin, Oleg
2016-01-01
In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used for solving the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of an arbitrary partitioning is determined by the total time needed to solve all SAT instances in it. We suggest an approach, based on the Monte Carlo method, for estimating the total solving time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space; the estimated effectiveness of that partitioning is the value of a predictive function at the corresponding point of this space. The search for an effective partitioning can then be formulated as the problem of optimizing the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving times were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving times agree well with the estimates obtained by the proposed method. PMID:27190753
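The Monte Carlo estimate described above reduces to a short routine: sample random assignments of the chosen partitioning variables, time the simplified instances, and scale the sample mean by the number of sub-instances. The solver stub below is a hypothetical stand-in for a real SAT solver call.

```python
import numpy as np

def predict_time(part_vars, n_samples, solve_time, rng):
    """Monte Carlo estimate of the total solving time of a partitioning:
    the mean observed time of randomly sampled sub-instances, scaled by
    the number of sub-instances 2^|part_vars|."""
    total = 2 ** len(part_vars)
    times = [solve_time(dict(zip(part_vars,
                                 rng.integers(0, 2, len(part_vars)))))
             for _ in range(n_samples)]
    return total * float(np.mean(times))

rng = np.random.default_rng(8)
toy_solver = lambda assignment: 0.01 + 0.005 * sum(assignment.values())
print(predict_time(["x1", "x2", "x3"], 50, toy_solver, rng))
```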
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Wang, Hua; Fan, Yiren; Cao, Yingchang; Chen, Hua; Huang, Rui
2016-01-01
With more information than the conventional one-dimensional (1D) longitudinal relaxation time (T1) and transversal relaxation time (T2) spectra, the two-dimensional (2D) T1-T2 spectrum in low-field nuclear magnetic resonance (NMR) has been developed to discriminate the relaxation components of fluids such as water, oil and gas in porous rock. However, the accuracy and efficiency of the T1-T2 spectrum are limited by the existing inversion algorithms and data acquisition schemes. We introduce a joint method to invert the T1-T2 spectrum, which combines iterative truncated singular value decomposition (TSVD) and a parallel particle swarm optimization (PSO) algorithm to obtain fast computation and stable solutions. We recast the first-kind Fredholm integral equation with two kernels as a nonlinear optimization problem with non-negativity constraints, and then solve the ill-conditioned problem by iterative TSVD. The truncation positions of the two diagonal matrices are obtained by the Akaike information criterion (AIC). With the initial values obtained by TSVD, we use a PSO with a parallel structure to find the global optimal solutions at high computational speed. We use synthetic data with different signal-to-noise ratios (SNR) to test the performance of the proposed method. The results show that the new inversion algorithm achieves favorable solutions for signals with SNR larger than 10, and that the inversion precision increases as the number of relaxation components in the porous rock decreases.
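The TSVD stage is compact enough to sketch. Assuming a 1D Laplace-type T2 kernel as a stand-in for the full 2D T1-T2 problem (and omitting the AIC truncation choice and the PSO refinement), an illustrative version:

```python
import numpy as np

def tsvd_solve(K, y, k):
    """Truncated-SVD solution of ill-conditioned K f = y, keeping k modes,
    with a crude projection onto the non-negativity constraint."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    f = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
    return np.maximum(f, 0.0)

t = np.linspace(0.01, 1.0, 100)[:, None]         # echo times, s
T2 = np.logspace(-2, 0, 50)[None, :]             # relaxation grid, s
K = np.exp(-t / T2)                              # Laplace-type kernel

f_true = np.exp(-0.5 * ((np.log10(T2[0]) + 1.0) / 0.2)**2)
y = K @ f_true + 1e-3 * np.random.default_rng(6).standard_normal(100)

f0 = tsvd_solve(K, y, k=8)      # stable initial guess for the PSO stage
```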
NASA Astrophysics Data System (ADS)
Auken, Esben; Christiansen, Anders Vest; Kirkegaard, Casper; Fiandaca, Gianluca; Schamper, Cyril; Behroozmand, Ahmad Ali; Binley, Andrew; Nielsen, Emil; Effersø, Flemming; Christensen, Niels Bøie; Sørensen, Kurt; Foged, Nikolaj; Vignoli, Giulio
2015-07-01
We present an overview of a mature, robust and general algorithm providing a single framework for the inversion of most electromagnetic and electrical data types and instrument geometries. The implementation mainly uses a 1D earth formulation for electromagnetics and magnetic resonance sounding (MRS) responses, while the geoelectric responses are both 1D and 2D and the sheet's response models a 3D conductive sheet in a conductive host with an overburden of varying thickness and resistivity. In all cases, the focus is placed on delivering full system forward modelling across all supported types of data. Our implementation is modular, meaning that the bulk of the algorithm is independent of data type, making it easy to add support for new types. Having implemented forward response routines and file I/O for a given data type provides access to a robust and general inversion engine. This engine includes support for mixed data types, arbitrary model parameter constraints, integration of prior information and calculation of both model parameter sensitivity analysis and depth of investigation. We present a review of our implementation and methodology and show four different examples illustrating the versatility of the algorithm. The first example is a laterally constrained joint inversion (LCI) of surface time domain induced polarisation (TDIP) data and borehole TDIP data. The second example shows a spatially constrained inversion (SCI) of airborne transient electromagnetic (AEM) data. The third example is an inversion and sensitivity analysis of MRS data, where the electrical structure is constrained with AEM data. The fourth example is an inversion of AEM data, where the model is described by a 3D sheet in a layered conductive host.
X-ray morphologies of Abell clusters
NASA Technical Reports Server (NTRS)
Mcmillan, S. L. W.; Kowalski, M. P.; Ulmer, M. P.
1989-01-01
Results are presented for X-ray measurements made with the Einstein Observatory's IPC for a sample of 49 Abell clusters, which were used to determine quantitative measures of two morphological parameters of these clusters, the orientation and ellipticity. Consideration is given to the techniques used for estimating and removing background noise in the images and for determining the variation of these parameters with the flux level of a cluster. It was found that most clusters are clearly flattened; for 20 of these clusters, the orientation was unambiguously determined. A catalog of cluster properties is presented.
Sun, Deyong; Hu, Chuanmin; Qiu, Zhongfeng; Wang, Shengqiang
2015-06-01
A new scheme has been proposed by Lee et al. (2014) to reconstruct hyperspectral (400-700 nm, 5 nm resolution) remote sensing reflectance (Rrs(λ), sr^-1) of representative global waters using measurements at 15 spectral bands. This study tested its applicability to optically complex turbid inland waters in China, where Rrs(λ) are typically much higher than those used in Lee et al. (2014). Strong interdependence of Rrs(λ) between neighboring bands (≤ 10 nm interval) was confirmed, with Pearson correlation coefficient (PCC) mostly above 0.98. The scheme of Lee et al. (2014) for Rrs(λ) reconstruction with its original global parameterization worked well with this data set, while a new parameterization showed improvement in reducing uncertainties in the reconstructed Rrs(λ). Mean absolute error (MAE_Rrs(λi)) in the reconstructed Rrs(λ) was mostly < 0.0002 sr^-1 between 400 and 700 nm, and mean relative error (MRE_Rrs(λi)) was < 1% when the comparison was made between reconstructed and measured Rrs(λ) spectra. When Rrs(λ) at the MODIS bands were used to reconstruct the hyperspectral Rrs(λ), MAE_Rrs(λi) was < 0.001 sr^-1 and MRE_Rrs(λi) was < 3%. When Rrs(λ) at the MERIS bands were used, MAE_Rrs(λi) in the reconstructed hyperspectral Rrs(λ) was < 0.0004 sr^-1 and MRE_Rrs(λi) was < 1%. These results have significant implications for inversion algorithms to retrieve concentrations of phytoplankton pigments (e.g., chlorophyll-a or Chla, and phycocyanin or PC) and total suspended materials (TSM) as well as the absorption coefficient of colored dissolved organic matter (CDOM), as some of the algorithms were developed from in situ Rrs(λ) data using spectral bands that
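The reconstruction idea, training per-channel linear coefficients that map multispectral band values to each 5 nm channel, can be sketched as below. The Gaussian toy spectra, band placement and offset term are assumptions for illustration, not Lee et al.'s (2014) parameterization or training library.

```python
import numpy as np

wl = np.arange(400, 701, 5)                       # target 5 nm grid
bands = np.linspace(410, 690, 15)                 # assumed 15-band subset
idx = [int(np.argmin(np.abs(wl - b))) for b in bands]

rng = np.random.default_rng(9)
centers = rng.uniform(450, 650, (200, 1))         # toy training spectra
train = np.exp(-((wl - centers)**2) / 5000.0)

X = np.c_[train[:, idx], np.ones(len(train))]     # band values + offset
coef, *_ = np.linalg.lstsq(X, train, rcond=None)  # one column per channel

test = np.exp(-((wl - 560.0)**2) / 5000.0)
recon = np.r_[test[idx], 1.0] @ coef              # reconstructed spectrum
print(np.max(np.abs(recon - test)))               # reconstruction error
```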
Mapping the intracluster medium of Abell 3627
NASA Astrophysics Data System (ADS)
Banfield, Julie; Koribalski, Baerbel; Johnston-Hollitt, Melanie; Wong, O. Ivy; Serra, Paolo; Schnitzeler, Dominic; Dehghan, Siamak
2013-10-01
Galaxy clusters are among the largest structures in the Universe. They provide a high density environment where galaxies undergo high-speed collisions, ram pressure stripping, and tidal interactions. The resulting debris can sometimes be detected in the form of neutral or ionised intergalactic filaments. Abell 3627 lies at a distance of ~66 Mpc right in the heart of the Great Attractor and is one of the most massive clusters known. We propose to map an area of 1 sq. deg. around Abell 3627 at 1 - 3 GHz to study the polarised emission in and between cluster members and search for HI absorption of neutral intracluster gas. We will be able to: (1) test cluster magnetic field turbulence on very small scales; (2) examine rotation measure (RM) spectra to understand the effect of radio sources in cluster environments; (3) detect the intracluster medium (ICM) magnetic field; (4) determine the magnetic field strength of the cluster and place upper limits on the age; and (5) constrain the HI column density in the ICM. All of these goals together will provide information to understand how the large-scale structure of the Universe evolves.
NASA Astrophysics Data System (ADS)
Fang, Hongjian; Zhang, Haijiang; Yao, Huajian; Allam, Amir; Zigone, Dimitri; Ben-Zion, Yehuda; Thurber, Clifford; van der Hilst, Robert D.
2016-05-01
We introduce a new algorithm for joint inversion of body wave and surface wave data to get better 3-D P wave (Vp) and S wave (Vs) velocity models by taking advantage of the complementary strengths of each data set. Our joint inversion algorithm uses a one-step inversion of surface wave traveltime measurements at different periods for 3-D Vs and Vp models without constructing the intermediate phase or group velocity maps. This allows a more straightforward modeling of surface wave traveltime data with the body wave arrival times. We take into consideration the sensitivity of surface wave data with respect to Vp in addition to its large sensitivity to Vs, which means both models are constrained by two different data types. The method is applied to determine 3-D crustal Vp and Vs models using body wave and Rayleigh wave data in the Southern California plate boundary region, which has previously been studied with both double-difference tomography method using body wave arrival times and ambient noise tomography method with Rayleigh and Love wave group velocity dispersion measurements. Our approach creates self-consistent and unique models with no prominent gaps, with Rayleigh wave data resolving shallow and large-scale features and body wave data constraining relatively deeper structures where their ray coverage is good. The velocity model from the joint inversion is consistent with local geological structures and produces better fits to observed seismic waveforms than the current Southern California Earthquake Center (SCEC) model.
Are Abell Clusters Correlated with Gamma-Ray Bursts?
NASA Technical Reports Server (NTRS)
Hurley, K.; Hartmann, D.; Kouveliotou, C.; Fishman, G.; Laros, J.; Cline, T.; Boer, M.
1997-01-01
A recent study has presented marginal statistical evidence that gamma-ray burst (GRB) sources are correlated with Abell clusters, based on analyses of bursts in the BATSE 3B catalog. Using precise localization information from the Third Interplanetary Network, we have reanalyzed this possible correlation. We find that most of the Abell clusters that are in the relatively large 3B error circles are not in the much smaller IPN/BATSE error regions. We believe that this argues strongly against an Abell cluster-GRB correlation.
NASA Astrophysics Data System (ADS)
Deng, Shaoyong; Zhang, Qi; Xia, Junying
2014-12-01
A fully self-designed experimental system based on dynamic light scattering is developed. The method of photon correlation spectroscopy is used to compute the autocorrelation of the measured scattering photons and of the scattering field. Dynamic autocorrelation software was written in-house to replace the popular hardware digital correlator, providing many more correlation channels at much lower cost. Several inverse algorithms, such as 1st-order cumulants, 2nd-order cumulants, NNLS, CONTIN and double exponentials, are used to compute the particle sizes and decay linewidths of both monodisperse and polydisperse systems. The programs based on these inverse algorithms are all written in-house except CONTIN. The influences of system parameters such as sample time, the last delay time, elapsed time, suspension concentration and the baseline of the scattering-photon autocorrelation on the scattering photon counts, the autocorrelations of scattering photons and scattering field, and the distribution of particle sizes are all investigated in detail and explained theoretically. Appropriate choices of system parameters are pointed out to improve the experimental system. The limitations of the inverse algorithms for the self-designed system are described and explained. Corrected 1st-order cumulants and corrected double-exponential methods are developed to compute particle sizes correctly over a wide time scale. The particle sizes measured by the optimized experimental system are very accurate.
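Of the inverse algorithms listed, the second-order cumulants method is the simplest to show: fit ln g1(τ) with a quadratic, read the decay linewidth Γ from the linear term, and convert to a hydrodynamic diameter through Stokes-Einstein. All numbers below (633 nm laser, 90° scattering, water at 25 °C, 200 nm spheres) are illustrative assumptions.

```python
import numpy as np

kB, T, eta = 1.380649e-23, 298.15, 8.9e-4         # SI; water at 25 C
lam, theta, n_med = 633e-9, np.pi / 2, 1.33
q = 4 * np.pi * n_med * np.sin(theta / 2) / lam   # scattering vector, 1/m

d_true = 200e-9                                   # particle diameter, m
D = kB * T / (3 * np.pi * eta * d_true)           # Stokes-Einstein
tau = np.logspace(-6, -3, 200)                    # delay times, s
g1 = np.exp(-D * q * q * tau)                     # field autocorrelation
g1 += 1e-3 * np.random.default_rng(7).standard_normal(tau.size)

# 2nd-order cumulants: ln g1 = -Gamma*tau + (mu2/2)*tau^2
mask = g1 > 0.05                                  # keep well-resolved lags
c2, c1, _ = np.polyfit(tau[mask], np.log(g1[mask]), 2)
gamma = -c1                                       # decay linewidth, 1/s
d_est = kB * T * q * q / (3 * np.pi * eta * gamma)
pdi = 2.0 * c2 / gamma**2                         # polydispersity index
print(d_est, pdi)                                 # ~2e-7 m, ~0
```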
NASA Astrophysics Data System (ADS)
Chand, Shyam; Minshull, Tim A.; Priest, Jeff A.; Best, Angus I.; Clayton, Christopher R. I.; Waite, William F.
2006-08-01
The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L-38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.
Quantification of Substructure in Nearby Abell Clusters
NASA Astrophysics Data System (ADS)
Kriessler, J. R.; Beers, T. C.; Odewahn, S. C.
1995-05-01
Theory, as well as numerical simulations, suggests that Omega_o may be observationally constrained by the amount of substructure observed in present-day clusters of galaxies. We have therefore begun a study of the 116 Abell clusters with richness class greater than or equal to 1 and distance class less than or equal to 4, the so-called "volume-limited" sample of Hoessel, Gunn, & Thuan 1980 (ApJ 241, 486), to determine the prevalence of substructure in the clusters' projected galaxy positions. We use positions of galaxies identified by the Minnesota Automated Plate Scanner to obtain contour plots of the available clusters using an adaptive kernel routine. Significance of substructure is evaluated using the 2-D Lee test as well as a likelihood-ratio test on fits made with mixtures of two-dimensional gaussians. We also present nonparametric density profile estimates obtained with the program MAPEL (Merritt and Tremblay 1994, AJ 108, 514).
The genus curve of the Abell clusters
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Gott, J. Richard, III; Postman, Marc
1994-01-01
We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21 (+0.43/-0.47) on scales of 48/h Mpc, or to a cold dark matter power spectrum with Omega h = 0.36 (+0.46/-0.17).
The magnitude-redshift relation for 561 Abell clusters
NASA Technical Reports Server (NTRS)
Postman, M.; Huchra, J. P.; Geller, M. J.; Henry, J. P.
1985-01-01
The Hubble diagram for the 561 Abell clusters with measured redshifts has been examined using Abell's (1958) corrected photo-red magnitudes for the tenth-ranked cluster member (m10). After correction for the Scott effect and K dimming, the data are in good agreement with a linear magnitude-redshift relation with a slope of 0.2 out to z = 0.1. New redshift data are also presented for 20 Abell clusters. Abell's m10 is suitable for redshift estimation for clusters with m10 of no more than 16.5. At fainter m10, the number of foreground galaxies expected within an Abell radius is large enough to make identification of the tenth-ranked galaxy difficult. Interlopers bias the estimated redshift toward low values at high redshift. Leir and van den Bergh's (1977) redshift estimates suffer from this same bias but to a smaller degree because of the use of multiple cluster parameters. Constraints on deviations of cluster velocities from the mean cosmological flow require greater photometric accuracy than is provided by Abell's m10 magnitudes.
NASA Astrophysics Data System (ADS)
Li, Tao; Mallick, Subhashis
2015-02-01
Consideration of azimuthal anisotropy, at least to an orthorhombic symmetry, is important in exploring naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than the inversion of single component (P wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying the multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. These multicomponent and multi-azimuthal seismic inversions are non-linear with non-unique solutions; it is therefore appropriate to treat the objectives as a vector and simultaneously optimize each of its components such that the optimal set of solutions can be obtained. The fast non-dominated sorting genetic algorithm (NSGA II) is a robust stochastic global search method capable of handling multiple objectives, but its computational expense increases with the number of objectives and the number of model parameters to be inverted for. In addition, an accurate extraction of subsurface azimuthal anisotropy requires multicomponent seismic data acquired at a fine spatial resolution along many source-to-receiver azimuths. Because routine acquisition of such data is prohibitively expensive, they are typically available along two or at most three azimuthal orientations at a spatial resolution where such an inversion could be applied. This paper proposes a novel multi-objective methodology using a parallelized version of NSGA II for waveform inversion of multicomponent seismic displacement data along two azimuths. By scaling the objectives prior to ranking, redefining the crowding distance as functions of the scaled objective and the model spaces, and varying
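The ranking stage at the heart of NSGA II, the fast non-dominated sort, is small enough to sketch for minimization objectives; the crowding distance, the objective/model-space scaling and the seismic forward modelling discussed above are omitted.

```python
import numpy as np

def nondominated_sort(F):
    """Rank a population by Pareto dominance; front 0 is non-dominated.
    F has shape (n_individuals, n_objectives), all objectives minimized."""
    ranks = np.full(len(F), -1)
    remaining = list(range(len(F)))
    rank = 0
    while remaining:
        # a point stays in the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                            for j in remaining)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
        rank += 1
    return ranks

F = np.array([[1, 4], [2, 2], [3, 1], [3, 3], [4, 4]])
print(nondominated_sort(F))        # -> [0 0 0 1 2]
```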
Separating the BL Lac and cluster X-ray emissions in Abell 689 with Chandra
NASA Astrophysics Data System (ADS)
Giles, P. A.; Maughan, B. J.; Birkinshaw, M.; Worrall, D. M.; Lancaster, K.
2012-01-01
We present the results of a Chandra observation of the galaxy cluster Abell 689 (z = 0.279). Abell 689 is one of the most luminous clusters detected in the ROSAT All Sky Survey (RASS), but was flagged as possibly including significant point source contamination. The small point spread function of the Chandra telescope allows us to confirm this and separate the point source from the extended cluster X-ray emission. For the cluster, we determine a bolometric luminosity of Lbol = (3.3 ± 0.3) × 10^44 erg s^-1 and a temperature of kT = 5.1^{+2.2}_{-1.3} keV when including a physically motivated background model. We compare our measured luminosity for A689 to that quoted in the RASS, and find L_0.1-2.4 keV = 2.8 × 10^44 erg s^-1, a value ˜10 times lower than the ROSAT measurement. Our analysis of the point source shows evidence for significant pile-up, with a pile-up fraction of ≃60 per cent. Sloan Digital Sky Survey spectra and Hubble Space Telescope (HST) images lead us to the conclusion that the point source within Abell 689 is a BL Lac object. Using radio and optical observations from the Very Large Array and HST archives, we determine α_ro = 0.50, α_ox = 0.77 and α_rx = 0.58 for the BL Lac, which would classify it as being of 'high-energy peak BL Lac' type. Spectra extracted from A689 show a hard X-ray excess at energies above 6 keV that we interpret as inverse-Compton emission from aged electrons that may have been transported into the cluster from the BL Lac.
NASA Astrophysics Data System (ADS)
Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.
2015-12-01
The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ application programming interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high spectral resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using the first-guess information is developed to retrieve simultaneously temperature, water vapor, and ozone atmospheric profiles. The performance of the resulting fast and accurate inverse model is evaluated with a large diversified data set of radiosonde atmospheres, including rare events.
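As a rough sketch of the preprocessing step described above (PCA compression and de-noising of spectra, with the leading scores used downstream), assuming entirely synthetic spectra and an arbitrary number of retained components:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 841))   # placeholder training spectra (not IASI data)
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    k = 50                             # number of retained components: an assumed value
    P = Vt[:k]                         # leading principal components
    scores = (X - mu) @ P.T            # compressed representation of each spectrum
    X_denoised = scores @ P + mu       # reconstruction discards trailing-PC noise

The compressed scores can then feed a nearest-neighbour search against a climatological data set for the first guess and serve as compact inputs to the retrieval network.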
NASA Astrophysics Data System (ADS)
Sellitto, P.; Del Frate, F.
2014-07-01
Atmospheric temperature profiles are inferred from passive satellite instruments using thermal infrared or microwave observations. Here we investigate the feasibility of retrieving height-resolved temperature information in the ultraviolet spectral region. The temperature dependence of the absorption cross sections of ozone in the Huggins band, in particular in the interval 320-325 nm, is exploited. We carried out a sensitivity analysis and demonstrated that non-negligible information on the temperature profile can be extracted from this small band. Starting from these results, we developed a neural network inversion algorithm, trained and tested with simulated nadir EnviSat-SCIAMACHY ultraviolet observations. The algorithm is able to retrieve the temperature profile with root mean square errors and biases comparable to existing retrieval schemes that use thermal infrared or microwave observations. This demonstrates, for the first time, the feasibility of temperature profile retrieval from space-borne instruments operating in the ultraviolet.
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnostic methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion peeling method, and the two-dimensional Fourier transform method and its derivatives such as the filtered back projection methods. This paper proposes a new approach for performing the Abel transform, which possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is evinced in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. A detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among the various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From the equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low wave number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high wave number noise. The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal
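A minimal numerical sketch of the underlying operation, the inverse Abel transform in its classical derivative form f(r) = -(1/π) ∫_r^R (dP/dy) / sqrt(y² - r²) dy, with the paper's filtering idea reduced to an optional smoothing of the projection before differentiation:

    import numpy as np

    def inverse_abel(P, r):
        # P: projection sampled at uniformly spaced radii r; returns f(r).
        # Crude quadrature that skips the integrable singularity at y = r;
        # low-pass filtering P first would give a simple "filtered" variant.
        dPdy = np.gradient(P, r)
        f = np.zeros_like(P)
        for i, ri in enumerate(r):
            y = r[i + 1:]
            if y.size:
                f[i] = -np.trapz(dPdy[i + 1:] / np.sqrt(y**2 - ri**2), y) / np.pi
        return f

Because the transform differentiates the data, high-wavenumber noise is amplified, which is exactly the sensitivity the filtered Abel transform is designed to suppress.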
Chandra View of Galaxy Cluster Abell 2554
NASA Astrophysics Data System (ADS)
kıyami Erdim, Muhammed; Hudaverdi, Murat
2016-07-01
We study the structure of the galaxy cluster Abell 2554 at z = 0.11, a member of the Aquarius Supercluster, using Chandra archival data. The X-ray peak coincides with a bright elliptical cD galaxy. The slightly elongated X-ray plasma has average temperature and metal abundance values of ˜6 keV and 0.28 solar, respectively. We observe small-scale temperature variations in the ICM. There is a significantly hot wall-like structure at 9 keV in the SE, and a radio lobe is located at the tip of this hot region. A2554 is also part of a trio of clusters: its close neighbors A2550 (to the SW) and A2556 (to the SE) are separated from A2554 by only 2 Mpc and 1.5 Mpc, respectively. Considering the temperature fluctuations and the dynamical environment of the supercluster, we examine possible ongoing merger scenarios within A2554.
Abell 1033: birth of a radio phoenix
NASA Astrophysics Data System (ADS)
de Gasperin, F.; Ogrean, G. A.; van Weeren, R. J.; Dawson, W. A.; Brüggen, M.; Bonafede, A.; Simionescu, A.
2015-04-01
Extended steep-spectrum radio emission in a galaxy cluster is usually associated with a recent merger. However, given the complex scenario of galaxy cluster mergers, many of the discovered sources hardly fit into the strict boundaries of a precise taxonomy. This is especially true for radio phoenixes that do not have very well defined observational criteria. Radio phoenixes are aged radio galaxy lobes whose emission is reactivated by compression or other mechanisms. Here, we present the detection of a radio phoenix close to the moment of its formation. The source is located in Abell 1033, a peculiar galaxy cluster which underwent a recent merger. To support our claim, we present unpublished Westerbork Synthesis Radio Telescope and Chandra observations together with archival data from the Very Large Array and the Sloan Digital Sky Survey. We discover the presence of two subclusters displaced along the N-S direction. The two subclusters probably underwent a recent merger which is the cause of a moderately perturbed X-ray brightness distribution. A steep-spectrum extended radio source very close to an active galactic nucleus (AGN) is proposed to be a newly born radio phoenix: the AGN lobes have been displaced/compressed by shocks formed during the merger event. This scenario explains the source location, morphology, spectral index, and brightness. Finally, we show evidence of a density discontinuity close to the radio phoenix and discuss the consequences of its presence.
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Sidky, Emil Y.
2015-03-01
Photon-counting detectors with pulse-height analysis have shown promise for improved spectral CT imaging. This study investigated a novel spectral CT reconstruction method that directly estimates basis-material images from the measured energy-bin data (i.e., `one-step' reconstruction). The proposed algorithm can incorporate constraints to stabilize the reconstruction and potentially reduce noise. The algorithm minimizes the error between the measured energy-bin data and the data estimated from the reconstructed basis images. A total variation (TV) constraint was also investigated for additional noise reduction. The proposed one-step algorithm was applied to simulated data of an anthropomorphic phantom with heterogeneous tissue composition. Reconstructed water, bone, and gadolinium basis images were compared for the proposed one-step algorithm and the conventional `two-step' method of decomposition followed by reconstruction. The unconstrained algorithm provided a 30% to 60% reduction in noise standard deviation compared to the two-step algorithm. The f_TV = 0.8 constraint provided a small reduction in noise (˜1%) compared to the unconstrained reconstruction. Images reconstructed with the f_TV = 0.5 constraint demonstrated a 77% to 94% standard deviation reduction compared to the two-step reconstruction, however with increased blurring. There were no significant differences in the mean values reconstructed by the investigated algorithms. Overall, the proposed one-step spectral CT reconstruction algorithm provided three-material-decomposition basis images with reduced noise compared to the conventional two-step approach. When using a moderate TV constraint factor (f_TV = 0.8), a 30%-60% reduction in noise standard deviation was achieved while preserving the edge profile for this simulated phantom.
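For readers unfamiliar with `one-step' spectral CT, the quantity being fit is the energy-bin forward model. A self-contained sketch with entirely synthetic spectra and attenuation curves (none of these values are from the study):

    import numpy as np

    E = np.linspace(20, 120, 101)                    # keV grid
    mu = np.stack([0.2 * np.exp(-E / 60),            # placeholder water mu(E)
                   0.5 * np.exp(-E / 40),            # placeholder bone mu(E)
                   0.8 * np.exp(-E / 30)])           # placeholder gadolinium mu(E)
    centers = np.array([[50.0], [80.0]])
    S = np.exp(-0.5 * ((E[None, :] - centers) / 15) ** 2)  # two energy-bin sensitivities

    def expected_counts(A, n0=1e5):
        # A: basis-material line integrals for one ray, shape (3,)
        atten = np.exp(-(A[:, None] * mu).sum(axis=0))     # exp(-sum_m A_m mu_m(E))
        return n0 * np.trapz(S * atten, E, axis=1)         # expected counts per bin

The one-step algorithm iterates the basis images so that these modelled counts match the measured bin data, with the TV constraint applied to each basis image.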
NASA Astrophysics Data System (ADS)
Voznyuk, I.; Litman, A.; Tortel, H.
2015-08-01
A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted for handling large-scale electromagnetic problems while keeping the memory requirement as low as possible and retaining time flexibility. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the same spirit as domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping sub-domains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem, and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is confronted with measurements extracted from the 3D Fresnel database.
Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping
2005-08-05
A fast simulation scheme for 3D curved-binder flanging and blank shape prediction of sheet metal, based on the one-step inverse finite element method, is proposed, in which total plasticity theory and the proportional loading assumption are used. The scheme can actually be used to simulate 3D flanging with a complex curved binder shape, and it is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods such as the analytic algorithm and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines when simulating the flanging process. Therefore, the prediction time for flanging lines is markedly decreased. Two typical 3D curved-binder flanging cases, exhibiting stretch and shrink characteristics, are simulated simultaneously using the present scheme and an incremental FE non-inverse algorithm based on incremental plasticity theory, which shows the validity and high efficiency of the present scheme.
NASA Astrophysics Data System (ADS)
Tsekeri, Alexandra; Gross, Barry; Moshary, Fred; Ahmed, Samir
2009-08-01
Quantifying aerosols on a global scale is extremely important due to their strong but anomalous impact on the global climate. Traditionally, aerosol retrievals use only the intensity measurements of the scattered light. However, these measurements are less sensitive to aerosol type and also suffer contamination from ground surfaces. It is with these limitations in mind that we plan to improve the quality and scope of aerosol retrieval by making use of soon-to-be-available polarimetric sensors such as the Aerosol Polarimetry Sensor (APS) on the GLORY satellite, combined with other available datasets such as lidar data from the CALIPSO satellite for vertical profiling and high-spatial-coverage intensity measurements from MODIS. To handle these extremely large sensor data sets, we will explore the capabilities of various statistical methods, and even combine them, to create the inversion algorithms that work best. Up to now, we have worked with the simplest case, the single-scattering approximation, and built a retrieval algorithm using multi-angular, multi-wavelength simulated measurements of intensity and polarization. The inversion techniques we used are the optimal estimator and neural networks.
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies by proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into 3 main steps: first, a symbolic analysis step that performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization, together with an estimation of the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute in parallel the gradient of the cost function. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor
NASA Astrophysics Data System (ADS)
Ansari, R.; Campagne, J. E.; Colom, P.; Ferrari, C.; Magneville, Ch.; Martin, J. M.; Moniez, M.; Torrentó, A. S.
2016-02-01
We have observed regions of three galaxy clusters at z ≈ 0.06-0.09 (Abell85, Abell1205, Abell2440) with the Nançay radiotelescope (NRT) to search for 21 cm emission and to fully characterize the FPGA-based BAORadio digital backend. We have tested the new BAORadio data acquisition system by observing sources in parallel with the NRT standard correlator (ACRT) back-end over several months. BAORadio enables wide-band instantaneous observation of the [1250, 1500] MHz frequency range, as well as the use of powerful RFI mitigation methods thanks to its fine time sampling. A number of questions related to instrument stability, data processing, and calibration are discussed. We have obtained the radiometer curves over the integration time range [0.01, 10 000] seconds, and we show that sensitivities of a few mJy over most of the wide frequency band can be reached with the NRT. It is clearly shown that in blind line searches, which is the context of H I intensity mapping for Baryon Acoustic Oscillations, the new acquisition system and processing pipeline outperforms the standard one. We report a positive detection of 21 cm emission at the 3σ level from galaxies in the outer region of Abell85 at ≃1352 MHz (14 400 km/s), corresponding to a line strength of ≃0.8 Jy km/s. We also observe an excess of power around ≃1318 MHz (21 600 km/s), although at lower statistical significance, compatible with emission from Abell1205 galaxies. The detected radio line emissions have been cross-matched with optical catalogs, and we have derived hydrogen mass estimates.
NASA Astrophysics Data System (ADS)
Belchansky, G.; Alpatsky, I.; Mordvintsev, I.; Douglas, D.
Investigating new methods to estimate sea-ice geophysical parameters using multisensor satellite data is critical for global change studies. The most widely used and consistent data for studying sea ice at the global scale are the SMMR and SSM/I passive microwave measurements available since 1978. However, comparisons with LANDSAT, AVHRR, and ERS-1 SAR have demonstrated substantial seasonal and regional differences in SSM/I ice parameter estimates (Belchansky and Douglas, 2000, 2002). This report presents methods for improving SSM/I and OKEAN sea ice inversion parameters using MLP neural networks, and compares the sea ice classification results from different neural networks and a linear mixture model. The efficiencies of four sea ice type inversion (classification) algorithms utilizing SSM/I, OKEAN-01, ERS, and RADARSAT satellite data were compared and investigated. The first applied different linear mixture models (NASA Team, Bootstrap, and OKEAN). The second, third, and fourth algorithms applied modified MLP neural networks with different learning algorithms based, respectively, on 1) error back propagation and simulated annealing (Kirkpatrick, 1983); 2) dynamic learning and polynomial basis functions (Chen et al., 1996); and 3) dynamic learning and two-step optimization. The last two algorithms used the Kalman filtering technique. Our studies demonstrated that both modified MLP neural networks with dynamic learning were more efficient (in terms of learning time, accuracy, and ability to generalize the selected learning data) than the modified MLP neural network with learning algorithms based on error back propagation and simulated annealing for simple approximation problems. MY sea ice and albedo inversion from SSM/I brightness temperatures and respective OKEAN learning data sets demonstrated that these algorithms caused over-fitting in comparison with the MLP neural network with error back propagation and simulated annealing. Therefore, for MY sea ice inversion
The merging cluster Abell 1758 revisited: multi-wavelength observations and numerical simulations
NASA Astrophysics Data System (ADS)
Durret, F.; Laganá, T. F.; Haider, M.
2011-05-01
Context. Cluster properties can be more distinctly studied in pairs of clusters, where we expect the effects of interactions to be strong. Aims: We here discuss the properties of the double cluster Abell 1758 at a redshift z ~ 0.279. These clusters show strong evidence for merging. Methods: We analyse the optical properties of the North and South clusters of Abell 1758 based on deep imaging obtained with the Canada-France-Hawaii Telescope (CFHT) archive Megaprime/Megacam camera in the g' and r' bands, covering a total region of about 1.05 × 1.16 deg2, or 16.1 × 17.6 Mpc2. Our X-ray analysis is based on archive XMM-Newton images. Numerical simulations were performed using an N-body algorithm to treat the dark-matter component, a semi-analytical galaxy-formation model for the evolution of the galaxies, and a grid-based hydrodynamic code with a piecewise parabolic method (PPM) scheme for the dynamics of the intra-cluster medium. We computed galaxy luminosity functions (GLFs) and 2D temperature and metallicity maps of the X-ray gas, which we then compared to the results of our numerical simulations. Results: The GLFs of Abell 1758 North are well fit by Schechter functions in the g' and r' bands, but with a small excess of bright galaxies, particularly in the r' band; their faint-end slopes are similar in both bands. In contrast, the GLFs of Abell 1758 South are not well fit by Schechter functions: excesses of bright galaxies are seen in both bands; the faint end of the GLF is not very well defined in g'. The GLF computed from our numerical simulations assuming a halo mass-luminosity relation agrees with those derived from the observations. From the X-ray analysis, the most striking features are structures in the metal distribution. We found two elongated regions of high metallicity in Abell 1758 North with two peaks towards the centre. In contrast, Abell 1758 South shows a deficit of metals in its central regions. Comparing observational results to those derived from numerical
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
Radio Galaxies in Abell Rich Clusters
NASA Astrophysics Data System (ADS)
Ledlow, M. J.
1994-05-01
We have defined a complete sample of radio galaxies chosen from Abell's northern catalog consisting of all clusters with measured redshifts < 0.09. This sample consists of nearly 300 clusters. A multiwavelength survey including optical CCD R-band imaging, optical spectroscopy, and VLA 20 cm radio maps has been compiled. I have used this database to study the optical/radio properties of radio galaxies in the cluster environment. In particular, optical properties have been compared to a radio-quiet selected sample to look for optical signatures which may distinguish radio galaxies from normal radio-quiet ellipticals. The correlations between radio morphology and galaxy type, the optical dependence of the FR I/II break, and the univariate and bivariate luminosity functions have been examined for this sample. This study is aimed at understanding radio galaxies as a population and examining their status in the AGN hierarchy. The results of this work will be applied to models of radio source evolution. The results from the optical data analysis suggest that radio galaxies, as a class, cannot be distinguished from non-radio selected elliptical galaxies. The magnitude/size relationship, the surface-brightness profiles, the fundamental plane, and the intrinsic shape of the radio galaxies are consistent between our radio galaxy and control samples. The radio galaxies also trace the elliptical galaxy optical luminosity function in clusters very well, with many more L(*) galaxies than brightest cluster members. Combined with the results of the spectroscopy, the data are consistent with the idea that all elliptical galaxies may at some point in their lifetimes become radio sources. In conclusion, I present a new observational picture for radio galaxies and discuss the important properties which may determine the evolution of individual sources.
LensPerfect Analysis of Abell 1689
NASA Astrophysics Data System (ADS)
Coe, Dan A.
2007-12-01
I present the first mass map to perfectly reproduce the position of every gravitationally lensed multiply-imaged galaxy detected to date in ACS images of Abell 1689. This mass map was obtained using a powerful new technique made possible by a recent advance in mathematics. It is the highest-resolution assumption-free dark matter mass map to date, with the resolution limited only by the number of multiple images detected. We detect 8 new multiple-image systems and identify multiple knots in individual galaxies to constrain a grand total of 168 knots within 135 multiple images of 42 galaxies. No assumptions are made about mass tracing light, and yet the brightest visible structures in A1689 are reproduced in our mass map, a few with intriguing positional offsets. Our mass map probes radii smaller than those resolvable in current dark matter simulations of galaxy clusters. And at these radii, we observe slight deviations from the NFW and Sersic profiles which describe simulated dark matter halos so well. While we have demonstrated that our method is able to recover a known input mass map (to limited resolution), further tests are necessary to determine the uncertainties of our mass profile and the positions of massive subclumps. I compile the latest weak lensing data from ACS, Subaru, and CFHT, and attempt to fit a single profile, either NFW or Sersic, to both the observed weak and strong lensing. I confirm the finding of most previous authors that no single profile fits extremely well to both simultaneously. Slight deviations are revealed, with the best fits slightly over-predicting the mass profile at both large and small radius. Our easy-to-use software, called LensPerfect, will be made available soon. This research was supported by the European Commission Marie Curie International Reintegration Grant 017288-BPZ and the PNAYA grant AYA2005-09413-C02.
The Merger Dynamics of Abell 2061
NASA Astrophysics Data System (ADS)
Bailey, Avery; Sarazin, Craig L.; Clarke, Tracy E.; Chatzikos, Marios; Hogge, Taylor; Wik, Daniel R.; Rudnick, Lawrence; Farnsworth, Damon; Van Weeren, Reinout J.; Brown, Shea
2016-04-01
Abell 2061, a galaxy cluster at a redshift of z = 0.0784 in the Corona Borealis Supercluster, displays features in both the X-ray and radio indicative of merger activity. Observations by the GBT and the Westerbork Northern Sky Survey (WENSS) have indicated the presence of an extended, central radio halo/relic coincident with the cluster's main X-ray emission and a bright radio relic to the SW of the center of the cluster. Previous observations by ROSAT, Beppo-SAX, and Chandra show an elongated structure (referred to as the 'Plume'), emitting in the soft X-ray and stretching to the NE of the cluster's center. The Beppo-SAX and Chandra observations also suggest the presence of a hard X-ray shock slightly NE of the cluster's center. Here we present the details of an August 2013 XMM-Newton observation of A2061, which has a greater field of view and longer exposure (48.6 ks) than the previous Chandra observation. We present images displaying the cluster's soft and hard X-ray emission and also a temperature map of the cluster. This temperature map highlights the presence of a previously unseen cool region of the cluster, which we hypothesize to be the cool core of one of the subclusters involved in this merger. We also discuss the structural similarity of this cluster with a simulated high mass-ratio offset cluster merger taken from the Simulation Library of Astrophysical cluster Mergers (SLAM). This simulation suggests that the Plume is gas from the cool core of a subcluster which is now falling back into the center of the cluster after initial core passage.
Clerbaux, Cathy; Hadji-Lazaro, Juliette; Payan, Sébastien; Camy-Peyret, Claude; Wang, Jinxue; Edwards, David P; Luo, Ming
2002-11-20
Four inversion schemes based on various retrieval approaches (digital gas correlation, nonlinear least squares, global fit adjustment, and neural networks), developed to retrieve CO from nadir radiances measured by such downward-looking satellite-borne instruments as the Measurement of Pollution in the Troposphere (MOPITT), the Tropospheric Emission Spectrometer (TES), and the Infrared Atmospheric Sounding Interferometer (IASI), were compared both for simulated cases and for atmospheric spectra recorded by the Interferometric Monitor for Greenhouse Gases (IMG). The sensitivity of the retrieved CO total column amount to properties that may affect the inversion accuracy (noise, ancillary temperature profile, and water-vapor content) was investigated. The CO column amounts for the simulated radiance spectra agreed within 4%, whereas larger discrepancies were obtained when atmospheric spectra recorded by the IMG instrument were analyzed. The assumed vertical temperature profile is shown to be a critical parameter for accurate CO retrieval. The instrument's line shape was also identified as a possible cause of disagreement among the results provided by the groups of scientists participating in this study. PMID:12463254
Mass Profile of Abell 2204 An X-Ray Analysis of Abell 2204 using XMM-Newton Data
Lau, Travis
2003-09-05
The vast majority of the matter in the universe is of an unknown type, called dark matter by astronomers. Dark matter manifests itself only through gravitational interaction and is otherwise undetectable. The distribution of this matter can be better understood by studying the mass profiles of galaxy clusters. The X-ray emission of the galaxy cluster Abell 2204 was analyzed using archived data from the XMM-Newton space telescope. We analyze a 40 ks observation of Abell 2204 and present a radial temperature profile and a radial mass profile based on hydrostatic equilibrium calculations.
Fox, Andrew; Williams, Mathew; Richardson, Andrew D.; Cameron, David; Gove, Jeffrey H.; Quaife, Tristan; Ricciuto, Daniel M; Reichstein, Markus; Tomelleri, Enrico; Trudinger, Cathy; Van Wijk, Mark T.
2009-10-01
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and to generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover or the temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of the fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from the synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 g C m^-2 yr^-1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence
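Of the algorithm families named above, the Metropolis approach is the easiest to sketch. The toy example below estimates a single hypothetical respiration parameter (a Q10 temperature sensitivity) from noisy synthetic fluxes; it is not the REFLEX model or its data:

    import numpy as np

    rng = np.random.default_rng(1)
    T = rng.uniform(0, 30, 200)                   # synthetic temperatures (deg C)
    true_q10, sigma = 2.0, 0.5
    y = true_q10 ** ((T - 10) / 10) + rng.normal(0, sigma, T.size)

    def log_post(q10):
        # Gaussian likelihood with a flat prior on q10 > 0
        if q10 <= 0:
            return -np.inf
        resid = y - q10 ** ((T - 10) / 10)
        return -0.5 * np.sum(resid**2) / sigma**2

    q, chain = 1.5, []
    lp = log_post(q)
    for _ in range(5000):
        qp = q + rng.normal(0, 0.05)              # random-walk proposal
        lpp = log_post(qp)
        if np.log(rng.uniform()) < lpp - lp:      # Metropolis accept/reject
            q, lp = qp, lpp
        chain.append(q)
    print(np.mean(chain[1000:]), np.percentile(chain[1000:], [5, 95]))

The retained chain provides both the point estimate and the confidence intervals whose widths the inter-comparison evaluates.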
AGES Observations of Abell1367 and its Outskirts
NASA Astrophysics Data System (ADS)
Cortese, L.; Minchin, R. F.; Auld, R. R.; Davies, J. I.; Catinella, B.; Momjian, E.; Rosenberg, J. L.; O'Neil, K.
2007-05-01
The Arecibo Galactic Environment Survey (AGES) will map ˜200 square degrees over the next few years using the ALFA feed array at the 305-m Arecibo Telescope. AGES is specifically designed to investigate various galactic environments, from local voids to interacting groups and clusters of galaxies. AGES will map 20 square degrees in the Coma-Abell1367 supercluster, covering all of the Abell cluster 1367 and its outskirts (˜2 virial radii). In Spring 2006 we nearly completed the observations of 5 square degrees in the range (11:34
Retrieval Performance and Indexing Differences in ABELL and MLAIB
ERIC Educational Resources Information Center
Graziano, Vince
2012-01-01
Searches for 117 British authors are compared in the Annual Bibliography of English Language and Literature (ABELL) and the Modern Language Association International Bibliography (MLAIB). Authors are organized by period and genre within the early modern era. The number of records for each author was subdivided by format, language of publication,…
Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D
2016-05-01
An efficient inverse problem approach for parameter estimation and state and structure identification from dynamic data, by embedding training functions in a genetic algorithm methodology (ETFGA), is proposed for nonlinear dynamical biosystems using S-system canonical models. The use of multiple shooting and a decomposition approach as training functions is demonstrated for the handling of noisy datasets and for computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying it for three biochemical model systems of interest. By studying a small-scale gene regulatory system described by an S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA in comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells, now assuming limited availability of noisy data. Here, the flexibility of the approach to incorporate partial system information in the identification process is shown, and its effect on the accuracy and predictive ability of the estimated model is studied. The third example studies the phenomenological toy model of the regulation of circadian oscillations in Drosophila, which follows rate laws different from the S-system power law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities. PMID:26968929
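For context, the S-system canonical form being identified is dX_i/dt = α_i ∏_j X_j^g_ij − β_i ∏_j X_j^h_ij. A minimal simulation sketch with purely illustrative parameter values (not those estimated in the paper):

    import numpy as np
    from scipy.integrate import solve_ivp

    alpha = np.array([1.2, 0.8])   # production rate constants (illustrative)
    beta  = np.array([0.6, 0.9])   # degradation rate constants (illustrative)
    g = np.array([[0.0, -0.5],     # production kinetic orders g_ij
                  [0.5,  0.0]])
    h = np.array([[0.7,  0.0],     # degradation kinetic orders h_ij
                  [0.0,  0.4]])

    def s_system(t, x):
        # row i of x**g is x_j**g_ij, so the product over axis 1 gives prod_j x_j**g_ij
        prod = alpha * np.prod(x**g, axis=1)
        degr = beta  * np.prod(x**h, axis=1)
        return prod - degr

    sol = solve_ivp(s_system, (0, 20), [0.5, 1.5], dense_output=True)

The inverse problem wraps a simulation like this inside the genetic algorithm: candidate (α, β, g, h) sets are scored against the noisy time series, with multiple shooting breaking the trajectory into segments to keep the fitting well behaved.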
The Morphological Decomposition of Abell 868
NASA Astrophysics Data System (ADS)
Driver, S. P.; Odewahn, S. C.; Echevarria, L.; Cohen, S. H.; Windhorst, R. A.; Phillipps, S.; Couch, W. J.
2003-12-01
We report on the morphological luminosity functions (LFs) and radial profiles derived for the galaxy population within the rich cluster Abell 868 (z=0.153) based purely on Hubble Space Telescope imaging in F606W. We recover Schechter functions (-24.0
Lin, Lin; Yang, Chao; Lu, Jiangfeng; Ying, Lexing; E, Weinan
2009-09-25
We present an efficient parallel algorithm and its implementation for computing the diagonal of $H^{-1}$, where $H$ is a 2D Kohn-Sham Hamiltonian discretized on a rectangular domain using a standard second-order finite difference scheme. This type of calculation can be used to obtain an accurate approximation to the diagonal of a Fermi-Dirac function of $H$ through a recently developed pole-expansion technique \cite{LinLuYingE2009}. The diagonal elements are needed in electronic structure calculations for quantum mechanical systems \cite{HohenbergKohn1964,KohnSham1965,DreizlerGross1990}. We show how an elimination tree is used to organize the parallel computation and how synchronization overhead is reduced by passing data level by level along this tree using the technique of local buffers and relative indices. We analyze the performance of our implementation by examining its load balance and communication overhead. We show that our implementation exhibits excellent weak scaling on a large-scale high performance distributed parallel machine. When compared with the standard approach for evaluating the diagonal of a Fermi-Dirac function of a Kohn-Sham Hamiltonian associated with a 2D electron quantum dot, the new pole-expansion technique that uses our algorithm to compute the diagonal of $(H-z_i I)^{-1}$ for a small number of poles $z_i$ is much faster, especially when the quantum dot contains many electrons.
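To make the target quantity concrete, the sketch below builds a small 1D finite-difference Hamiltonian (the paper treats 2D in parallel) and computes the diagonal of its Fermi-Dirac function by brute-force eigendecomposition; the pole-expansion route would instead assemble the same diagonal from diag((H - z_i I)^{-1}) at a few poles without ever forming f(H). All sizes and parameters are illustrative:

    import numpy as np

    n, dx, beta, mu = 200, 0.1, 40.0, 0.5
    # second-order finite-difference kinetic term plus a harmonic potential
    main = np.full(n, 2.0) / dx**2 + 0.05 * np.linspace(-1, 1, n)**2
    off  = np.full(n - 1, -1.0) / dx**2
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    w, V = np.linalg.eigh(H)
    f = 1.0 / (1.0 + np.exp(beta * (w - mu)))   # Fermi-Dirac occupations f(eps_k)
    diag_fH = (V**2) @ f                        # diag(V f(Lambda) V^T), the electron density

The eigendecomposition costs O(n^3) and is exactly what the selected-inversion-plus-poles strategy avoids at scale.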
AVO inversion based on inverse operator estimation in trust region
NASA Astrophysics Data System (ADS)
Yin, Xing-Yao; Deng, Wei; Zong, Zhao-Yun
2016-04-01
Amplitude variation with offset (AVO) inversion is widely utilized in exploration geophysics, especially for reservoir prediction and fluid identification. Inverse operator estimation in the trust region algorithm is applied to solve AVO inversion problems, in which optimization and inversion are directly integrated. An L1-norm constraint is considered, on the basis of a reasonable initial model, in order to improve efficiency and stability during the AVO inversion process. In this study, a high-order Zoeppritz approximation is utilized to establish the inversion objective function, in which the variation of v_P/v_S with time is taken into consideration. A model test indicates that the algorithm has relatively higher stability and accuracy than the damped least-squares algorithm. Seismic data inversion is feasible, and the inverted values of the three parameters (v_P, v_S, ρ) maintain good consistency with the logging curves.
NASA Astrophysics Data System (ADS)
Ganapol, B. D.; Furfaro, R.; Johnson, L. F.; Herwitz, S. R.
2003-12-01
Over the past two years, NASA has had great interest in exploring the economic potential of deploying UAVs (Unmanned Aerial Vehicles) as long-duration platforms equipped with high resolution imaging systems for commercial agricultural applications. In October 2002, a team in the Ecosystem Science and Technology Branch at NASA/Ames Research Center prepared and successfully flew a UAV, equipped with off-the-shelf camera systems, over coffee plantations at Kauai (Hawaii). The idea is to help growers find the best possible harvesting strategy. The most important information that needs to be conveyed to the growers is the percentage of ripe, unripe, and overripe cherries in the field. It is of vital importance to devise a robust and reliable "intelligent" algorithm capable of predicting the amount of ripe cherries present in any digital image coming from the onboard cameras. During the campaign, the two UAV camera systems produced digital images that contain information about the down-looking plantation field. These images need to be processed to extract information concerning the percentage of ripe (yellow) cherries. To date, no robust automated algorithm has been developed to perform this task; currently, every image is viewed by human eyes on a case-by-case basis. We propose a neural network algorithm that can automate the process in an intelligent way. Biologically inspired neural networks are made of elements called "neurons" that can simulate brain activity during a learning process. The idea is to design an appropriate neural network that learns the relation between the reflectance coming from an image and the percentage of cherries present in a coffee field. We envision a situation in which reflectance from digital images at different wavebands is processed by a trained neural network and the percentages of the different cherries estimated. The key factor is training the network to recognize the reflectance/cherry percentage relation. Over the past few
NASA Astrophysics Data System (ADS)
Goren, L.; Fox, M.; Willett, S.
2012-12-01
A major forcing agent responsible for the evolution of fluvial landscapes is the rate of tectonic uplift. Recently, a growing body of evidence has shown that temporal changes in uplift rate affect the longitudinal profiles of rivers by introducing variations of river steepness and migrating knickpoints. An important and challenging question is thus whether and how the inverse problem may be solved, i.e. can tectonic uplift rates, U, be extracted reliably from the longitudinal profiles of rivers? A well-established formulation relates the change of landscape elevation through time to the tectonic uplift rate and to the erosion rate, which is described as a power-law function of the upstream drainage area, A^m, and the local slope, S^n, where m and n are positive exponents. In this work, we present a closed-form integral solution to the above formulation for the linear case, where n = 1. The integral solution is formulated as an inverse problem and used to extract U/K as a function of K-scaled time, where K, the erodibility, depends on geological and climatic conditions. The inversion algorithm is unexpectedly simple and computationally efficient. We apply the inversion procedure to several tilted blocks in the western Basin and Range area that form part of the eastern California shear zone. The blocks are bounded by faults, and we analyze rivers that drain toward fault segments with a significant dip-slip component. Each group of rivers draining to a particular fault segment is treated separately, in order to resolve the independent uplift history of each of the tilted blocks. For each group of rivers, we first demonstrate that n = 1 is a suitable assumption, and we constrain the best-fit value of m, which is found to be similar for the different tilted blocks. Then, we simultaneously invert the rivers' long profiles of each group to find the tectonic uplift history of its block. This represents the time-dependent dip-slip component of velocity along the normal fault that bounds the
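A compact sketch of the linear inversion itself: with n = 1 and response time τ = χ/K, the elevation of each river node is a linear functional of a piecewise-constant uplift history, so U/K follows from least squares. Everything below is synthetic and only illustrates the structure of the problem:

    import numpy as np

    rng = np.random.default_rng(2)
    chi = np.sort(rng.uniform(0, 10, 80))        # chi coordinate per river node
    tau = chi                                    # K-scaled response time (K folded in)
    edges = np.linspace(0, 10, 11)               # bins of K-scaled time
    # A[i, j] = overlap of [0, tau_i] with time bin j, so z = A @ u
    A = np.clip(tau[:, None] - edges[None, :-1], 0, np.diff(edges))
    u_true = np.where(edges[:-1] < 5, 1.0, 0.3)  # synthetic step change in uplift
    z = A @ u_true + rng.normal(0, 0.05, chi.size)
    u_est, *_ = np.linalg.lstsq(A, z, rcond=None)

Because the forward operator is triangular-like and small, the solve is trivial, which is consistent with the abstract's remark that the algorithm is unexpectedly simple and efficient.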
A weak-lensing analysis of the Abell 383 cluster
NASA Astrophysics Data System (ADS)
Huang, Z.; Radovich, M.; Grado, A.; Puddu, E.; Romano, A.; Limatola, L.; Fu, L.
2011-05-01
Aims: We use deep CFHT and SUBARU uBVRIz archival images of the Abell 383 cluster (z = 0.187) to estimate its mass by weak lensing. Methods: To this end, we first use simulated images to check the accuracy provided by our Kaiser-Squires-Broadhurst (KSB) pipeline. These simulations include Shear TEsting Programme (STEP) 1 and 2 simulations, as well as more realistic simulations of the distortion of galaxy shapes by a cluster with a Navarro-Frenk-White (NFW) profile. From these simulations we estimate the effect of noise on shear measurement and derive the correction terms. The R-band image is used to derive the mass by fitting the observed tangential shear profile with an NFW mass profile. Photometric redshifts are computed from the uBVRIz catalogs. Different methods for the foreground/background galaxy selection are implemented, namely selection by magnitude, color, and photometric redshifts, and the results are compared. In particular, we developed a semi-automatic algorithm to select the foreground galaxies in the color-color diagram, based on the observed colors. Results: Using color selection or photometric redshifts improves the correction of dilution from foreground galaxies: this leads to higher signals in the inner parts of the cluster. We obtain a cluster mass Mvir = 7.5^{+2.7}_{-1.9} × 10^14 M⊙: this value is 20% higher than previous estimates and is more consistent with the mass expected from X-ray data. The R-band luminosity function of the cluster is computed and gives a total luminosity Ltot = (2.14 ± 0.5) × 10^12 L⊙ and a mass-to-luminosity ratio M/L ≈ 300 M⊙/L⊙. Based on: data collected with the Subaru Telescope (University of Tokyo) and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan; observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada
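The final fitting step reduces to a parametric curve fit of the tangential shear profile. As a stand-in for the NFW profile used in the paper (whose shear expression is lengthy), the sketch below fits a singular isothermal sphere, γ_t(θ) = θ_E/(2θ), to synthetic shear data:

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)
    theta = np.linspace(1.0, 15.0, 30)                    # radius bins (arcmin)
    gt_obs = 0.9 / (2 * theta) + rng.normal(0, 0.01, theta.size)  # synthetic shear

    def sis_shear(theta, theta_e):
        # tangential shear of a singular isothermal sphere, Einstein radius theta_e
        return theta_e / (2.0 * theta)

    (theta_e_fit,), cov = curve_fit(sis_shear, theta, gt_obs, p0=[0.5])

Swapping in the NFW tangential-shear formula changes only the model function; the machinery of binning, fitting, and propagating the covariance into mass errors stays the same.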
NASA Astrophysics Data System (ADS)
Gentry, R. W.
2002-12-01
The Shelby Farms test site in Shelby County, Tennessee is being developed to better understand recharge hydraulics to the Memphis aquifer in areas where leakage through an overlying aquitard occurs. The site is unique in that it demonstrates many opportunities for interdisciplinary research regarding environmental tracers, anthropogenic impacts and inverse modeling. The objective of the research funding the development of the test site is to better understand the groundwater hydrology and hydraulics between a shallow alluvial aquifer and the Memphis aquifer given an area of leakage, defined as an aquitard window. The site is situated in an area on the boundary of a highly developed urban area and is currently being used by an agricultural research agency and a local recreational park authority. Also, an abandoned landfill is situated to the immediate south of the window location. Previous research by the USGS determined the location of the aquitard window subsequent to the landfill closure. Inverse modeling using a genetic algorithm approach has identified the likely extents of the area of the window given an interaquifer accretion rate. These results, coupled with additional fieldwork, have been used to guide the direction of the field studies and the overall design of the research project. This additional work has encompassed the drilling of additional monitoring wells in nested groups by rotasonic drilling methods. The core collected during the drilling will provide additional constraints to the physics of the problem that may provide additional help in redefining the conceptual model. The problem is non-unique with respect to the leakage area and accretion rate and further research is being performed to provide some idea of the advective flow paths using a combination of tritium and 3He analyses and geochemistry. The outcomes of the research will result in a set of benchmark data and physical infrastructure that can be used to evaluate other environmental
NASA Astrophysics Data System (ADS)
Fiorucci, I.; Muscari, G.; de Zafra, R. L.
2011-07-01
The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles, obtained employing the constrained matrix inversion method, show that
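A linear sketch of the Optimal Estimation machinery mentioned above, including the averaging-kernel sensitivity diagnostic (the sum of elements in each row of A); the Jacobian and covariances here are synthetic placeholders, not GBMS values:

    import numpy as np

    rng = np.random.default_rng(5)
    n_alt, n_chan = 40, 60
    K = rng.normal(size=(n_chan, n_alt)) * np.exp(-np.arange(n_alt) / 20.0)  # toy Jacobian
    S_a = np.eye(n_alt) * 1.0                    # a priori covariance (assumed)
    S_e = np.eye(n_chan) * 0.1                   # measurement-noise covariance (assumed)

    # gain matrix G = (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1
    G = np.linalg.solve(K.T @ np.linalg.solve(S_e, K) + np.linalg.inv(S_a),
                        K.T @ np.linalg.solve(S_e, np.eye(n_chan)))
    A = G @ K                                    # averaging-kernel matrix
    sensitivity = A.sum(axis=1)                  # ~1 where the retrieval is well constrained
    # retrieval for a measurement y: x_hat = x_a + G @ (y - K @ x_a)

The sensitivity vector is the quantity quoted in the abstract as 100 ± 20% between 20 and 45 km: values near 1 mean the retrieval there comes from the measurement rather than the a priori.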
X-Ray Imaging-Spectroscopy of Abell 1835
NASA Technical Reports Server (NTRS)
Peterson, J. R.; Paerels, F. B. S.; Kaastra, J. S.; Arnaud, M.; Reiprich T. H.; Fabian, A. C.; Mushotzky, R. F.; Jernigan, J. G.; Sakelliou, I.
2000-01-01
We present detailed spatially-resolved spectroscopy results of the observation of Abell 1835 using the European Photon Imaging Cameras (EPIC) and the Reflection Grating Spectrometers (RGS) on the XMM-Newton observatory. Abell 1835 is a luminous (10^46 erg/s), medium-redshift (z = 0.2523), X-ray emitting cluster of galaxies. The observations support the interpretation that large amounts of cool gas are present in a multi-phase medium surrounded by a hot (kT_e = 8.2 keV) outer envelope. We detect O VIII Lyα and two Fe XXIV complexes in the RGS spectrum. The emission measure of the cool gas below kT_e = 2.7 keV is much lower than expected from standard cooling-flow models, suggesting either a more complicated cooling process than simple isobaric radiative cooling or differential cold absorption of the cooler gas.
The GenABEL Project for statistical genomics.
Karssen, Lennart C; van Duijn, Cornelia M; Aulchenko, Yurii S
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from the formulation of methodological ideas to the application of the software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices, including the use of public version control, code review, and continuous integration. Use of this open science model attracts contributions from users and developers outside the "core team", facilitating agile statistical omics methodology development and fast dissemination. PMID:27347381
NASA Astrophysics Data System (ADS)
Ansari, Hamid Reza
2014-09-01
In this paper we propose a new method for predicting rock porosity based on a combination of several artificial intelligence systems. The method focuses on one of the Iranian carbonate fields in the Persian Gulf. Because there is strong heterogeneity in carbonate formations, estimating rock properties is more challenging than in sandstone. For this purpose, seismic colored inversion (SCI) and a new committee machine approach are used in order to improve porosity estimation. The study comprises three major steps. First, a series of sample-based attributes is calculated from the 3D seismic volume. Acoustic impedance is an important attribute, obtained by the SCI method in this study. Second, the porosity log is predicted from seismic attributes using common intelligent computation systems including: probabilistic neural network (PNN), radial basis function network (RBFN), multi-layer feed forward network (MLFN), ε-support vector regression (ε-SVR) and adaptive neuro-fuzzy inference system (ANFIS). Finally, a power law committee machine (PLCM) is constructed based on the imperialist competitive algorithm (ICA) to combine the results of all previous predictions in a single solution. This technique is called PLCM-ICA in this paper. The results show that the PLCM-ICA model improved the results of the neural networks, support vector machine and neuro-fuzzy system.
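The power-law committee idea can be sketched compactly: the experts' predictions f_i are combined as y = Σ_i w_i f_i^p_i and the weights and exponents are tuned against known porosity. The paper optimizes these with an imperialist competitive algorithm; a generic optimizer stands in below, and all data are synthetic:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    true_por = rng.uniform(0.05, 0.3, 200)        # synthetic porosity "truth"
    experts = np.stack([true_por + rng.normal(0, s, 200)   # three noisy experts
                        for s in (0.02, 0.03, 0.04)])

    def mse(params):
        w, p = params[:3], params[3:]
        # power-law combination; clip keeps fractional powers well defined
        combined = (w[:, None] * np.clip(experts, 1e-6, None) ** p[:, None]).sum(axis=0)
        return np.mean((combined - true_por) ** 2)

    res = minimize(mse, x0=np.array([1/3, 1/3, 1/3, 1.0, 1.0, 1.0]),
                   method="Nelder-Mead")

The combination can outperform any single expert because the optimizer downweights the noisier predictors, which is the behavior the abstract reports for PLCM-ICA.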
RADIO AND DEEP CHANDRA OBSERVATIONS OF THE DISTURBED COOL CORE CLUSTER ABELL 133
Randall, S. W.; Nulsen, P. E. J.; Forman, W. R.; Murray, S. S.; Clarke, T. E.; Owers, M. S.; Sarazin, C. L.
2010-10-10
We present results based on new Chandra and multi-frequency radio observations of the disturbed cool core cluster Abell 133. The diffuse gas has a complex bird-like morphology, with a plume of emission extending from two symmetric wing-like features. The plume is capped with a filamentary radio structure that has been previously classified as a radio relic. X-ray spectral fits in the region of the relic indicate the presence of either high-temperature gas or non-thermal emission, although the measured photon index is flatter than would be expected if the non-thermal emission is from inverse Compton scattering of the cosmic microwave background by the radio-emitting particles. We find evidence for a weak elliptical X-ray surface brightness edge surrounding the core, which we show is consistent with a sloshing cold front. The plume is consistent with having formed due to uplift by a buoyantly rising radio bubble, now seen as the radio relic, and has properties consistent with buoyantly lifted plumes seen in other systems (e.g., M87). Alternatively, the plume may be a gas sloshing spiral viewed edge-on. Results from spectral analysis of the wing-like features are inconsistent with the previous suggestion that the wings formed due to the passage of a weak shock through the cool core. We instead conclude that the wings are due to X-ray cavities formed by displacement of X-ray gas by the radio relic. The central cD galaxy contains two small-scale cold gas clumps that are slightly offset from their optical and UV counterparts, suggestive of a galaxy-galaxy merger event. On larger scales, there is evidence for cluster substructure in both optical observations and the X-ray temperature map. We suggest that the Abell 133 cluster has recently undergone a merger event with an interloping subgroup, initiating gas sloshing in the core. The torus of sloshed gas is seen close to edge-on, leading to the somewhat ragged appearance of the elliptical surface brightness edge. We show
NASA Astrophysics Data System (ADS)
Edwards, L. O. V.; Alpert, H. S.; Trierweiler, I. L.; Abraham, T.; Beizer, V. G.
2016-09-01
We present the first results from an integral field unit (IFU) spectroscopic survey of a ˜75 kpc region around three brightest cluster galaxies (BCGs), combining over 100 IFU fibres to study the intracluster light (ICL). We fit population synthesis models to estimate age and metallicity. For Abell 85 and Abell 2457, the ICL is best-fit with a fraction of old, metal-rich stars like in the BCG, but requires 30-50 per cent young and metal-poor stars, a component not found in the BCGs. This is consistent with the ICL having been formed by a combination of interactions with less massive, younger, more metal-poor cluster members in addition to stars that form the BCG. We find that the three galaxies are in different stages of evolution and may be the result of different formation mechanisms. The BCG in Abell 85 is near a relatively young, metal-poor galaxy, but the dynamical friction time-scale is long and the two are unlikely to be undergoing a merger. The outer regions of Abell 2457 show a higher relative fraction of metal-poor stars, and we find one companion, with a higher fraction of young, metal-poor stars than the BCG, which is likely to merge within a gigayear. Several luminous red galaxies are found at the centre of the cluster IIZw108, with short merger time-scales, suggesting that the system is about to embark on a series of major mergers to build up a dominant BCG. The young, metal-poor component found in the ICL is not found in the merging galaxies.
NASA Astrophysics Data System (ADS)
Edwards, L. O. V.; Alpert, H. S.; Trierweiler, I. L.; Abraham, T.; Beizer, V. G.
2016-06-01
We present the first results from an integral field unit (IFU) spectroscopic survey of a ˜75 kpc region around three Brightest Cluster Galaxies (BCGs), combining over 100 IFU fibres to study the intracluster light (ICL). We fit population synthesis models to estimate age and metallicity. For Abell 85 and Abell 2457, the ICL is best-fit with a fraction of old, metal-rich stars like in the BCG, but requires 30-50% young and metal-poor stars, a component not found in the BCGs. This is consistent with the ICL having been formed by a combination of interactions with less massive, younger, more metal-poor cluster members in addition to stars that form the BCG. We find that the three galaxies are in different stages of evolution and may be the result of different formation mechanisms. The BCG in Abell 85 is near a relatively young, metal-poor galaxy, but the dynamical friction timescale is long and the two are unlikely to be undergoing a merger. The outer regions of Abell 2457 show a higher relative fraction of metal-poor stars, and we find one companion, with a higher fraction of young, metal-poor stars than the BCG, which is likely to merge within a gigayear. Several luminous red galaxies are found at the centre of the cluster IIZw108, with short merger timescales, suggesting the system is about to embark on a series of major mergers to build up a dominant BCG. The young, metal-poor component found in the ICL is not found in the merging galaxies.
NASA Astrophysics Data System (ADS)
Azuma, Hiroo
In this paper, we give an analytical treatment to study the behavior of the collapse and the revival of the Rabi oscillations in the Jaynes-Cummings model (JCM). The JCM is an exactly soluble quantum mechanical model, which describes the interaction between a two-level atom and a single cavity mode of the electromagnetic field. If we prepare the atom in the ground state and the cavity mode in a coherent state initially, the JCM causes the collapse and the revival of the Rabi oscillations many times in a complicated pattern in its time-evolution. In this phenomenon, the atomic population inversion is described by an intractable infinite series. (When the electromagnetic field is resonant with the atom, the nth term of this infinite series is given by a trigonometric function of √n t, where t is the time variable.) According to Klimov and Chumakov's method, using the Abel-Plana formula, we rewrite this infinite series as a sum of two integrals. We examine the physical meanings of these two integrals and find that the first one represents the initial collapse (the semi-classical limit) and the second one represents the revival (the quantum correction) in the JCM. Furthermore, we evaluate the first- and second-order perturbations for the time-evolution of the JCM with an initial thermal coherent state for the cavity mode at low temperature, and write down their correction terms as sums of integrals by making use of the Abel-Plana formula.
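For reference, the Abel-Plana formula invoked here converts a sum over the non-negative integers into integrals; in its standard form, for a function f analytic in the right half-plane with suitable decay, it reads

```latex
\sum_{n=0}^{\infty} f(n)
  = \frac{f(0)}{2}
  + \int_0^{\infty} f(x)\,dx
  + i \int_0^{\infty} \frac{f(ix) - f(-ix)}{e^{2\pi x} - 1}\,dx .
```

Applied to the population-inversion series, the smooth first integral carries the initial collapse (the semi-classical limit) while the second, exponentially weighted integral carries the quantum revivals, as the abstract describes.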
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a two-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for developing and testing deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations is formulated for each spatial point as functions of the deformation velocities
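A per-pixel sketch of this kind of inversion helps fix ideas. The fragment below builds the design matrix for a small-baseline interferogram network, adds a DEM-error column and finite-difference smoothing rows, and solves the stacked system with NumPy's SVD-based least squares. The network, baselines, noise-free phases and smoothing weight are synthetic placeholders, not the RADARSAT-1 configuration or the authors' exact formulation.

```python
# Each interferogram between acquisitions (i, j) constrains the sum of the
# interval velocities it spans; a geometric coefficient column absorbs DEM
# height error; second-difference rows penalise rough velocity histories.
import numpy as np

def invert_pixel(phase, pairs, t, dem_coef, smooth=1.0):
    """phase[k]: unwrapped phase of interferogram k between acquisitions
    (i, j) = pairs[k]; t: acquisition times; dem_coef[k]: phase per metre of
    DEM error for pair k (a function of its perpendicular baseline)."""
    n_ifg, n_vel = len(pairs), len(t) - 1
    dt = np.diff(t)
    A = np.zeros((n_ifg, n_vel + 1))
    for k, (i, j) in enumerate(pairs):
        A[k, i:j] = dt[i:j]          # interferogram spans intervals i..j-1
        A[k, -1] = dem_coef[k]       # DEM-error column
    D = np.zeros((n_vel - 2, n_vel + 1))
    for r in range(n_vel - 2):       # finite-difference smoothing constraints
        D[r, r:r + 3] = smooth * np.array([1.0, -2.0, 1.0])
    x, *_ = np.linalg.lstsq(np.vstack([A, D]),
                            np.concatenate([phase, np.zeros(n_vel - 2)]),
                            rcond=None)
    return x[:-1], x[-1]             # interval velocities, DEM height error

# Toy usage: 6 acquisitions, 7 connected short-interval pairs
t = np.array([0., 24., 48., 72., 96., 120.])            # days
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (3, 5)]
true_v = np.array([-0.10, -0.12, -0.15, -0.13, -0.11])  # rad/day (subsidence)
dem_coef = np.linspace(0.5, 1.5, len(pairs))            # rad per metre of error
phase = np.array([true_v[i:j] @ np.diff(t)[i:j] for i, j in pairs]) + dem_coef * 2.0
vel, dh = invert_pixel(phase, pairs, t, dem_coef)
print("velocities:", vel, "DEM error (m):", dh)
```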
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
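As a concrete illustration of the forward/inverse relationship the pipeline is built around: with a lead field matrix L mapping source amplitudes s to electrode potentials v = L s, one standard distributed inverse solution is the Tikhonov-regularised minimum-norm estimate sketched below. This is a generic textbook method, not necessarily one of the specific algorithms compared in the dissertation, and the lead field here is a random placeholder rather than a patient-specific model.

```python
# Minimum-norm inverse:  s_hat = L^T (L L^T + lambda * I)^{-1} v
import numpy as np

def minimum_norm_estimate(L, v, lam=1e-2):
    n_elec = L.shape[0]
    gram = L @ L.T + lam * np.eye(n_elec)     # regularised electrode-space Gram
    return L.T @ np.linalg.solve(gram, v)     # back-project onto the sources

rng = np.random.default_rng(0)
L = rng.standard_normal((64, 5000))           # 64 electrodes, 5000 dipoles
s_true = np.zeros(5000); s_true[1234] = 1.0   # one focal source
v = L @ s_true + 0.01 * rng.standard_normal(64)
s_hat = minimum_norm_estimate(L, v)
print("peak estimated source index:", int(np.argmax(np.abs(s_hat))))
```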
Disentangling Structures in the Cluster of Galaxies Abell 133
NASA Technical Reports Server (NTRS)
Way, Michael J.; DeVincenzi, Donald (Technical Monitor)
2002-01-01
A dynamical analysis of the structure of the cluster of galaxies Abell 133 will be presented, using multi-wavelength data combined from multiple space- and Earth-based observations. New and familiar statistical clustering techniques are used in combination in an attempt to gain a fully consistent picture of this interesting nearby cluster of galaxies. The type of analysis presented should be typical of cluster studies in the future, especially those to come from surveys like the Sloan Digital Sky Survey and 2dF.
Buoyant Bubbles and the Disturbed Cool Core of Abell 133
NASA Astrophysics Data System (ADS)
Randall, Scott W.; Clarke, T.; Nulsen, P.; Owers, M.; Sarazin, C.; Forman, W.; Jones, C.; Murray, S.
2010-03-01
X-ray cavities, often filled with radio-emitting plasma, are routinely observed in the intracluster medium of clusters of galaxies. These cavities, or "bubbles", are evacuated by jets from central AGN and subsequently rise buoyantly, playing a vital role in the "AGN feedback" model now commonly invoked to explain the balance between heating and radiative cooling in cluster cores. As the bubbles rise, they can displace cool central gas, promoting mixing and the redistribution of metals. I will show a few examples of buoyant bubbles, then argue that the peculiar morphology of Abell 133 is due to buoyant lifting of cool central gas by a radio-filled bubble.
The discovery of diffuse steep spectrum sources in Abell 2256
NASA Astrophysics Data System (ADS)
van Weeren, R. J.; Intema, H. T.; Oonk, J. B. R.; Röttgering, H. J. A.; Clarke, T. E.
2009-12-01
Context: Hierarchical galaxy formation models indicate that during their lifetime galaxy clusters undergo several mergers. An example of such a merging cluster is Abell 2256. Here we report on the discovery of three diffuse radio sources in the periphery of Abell 2256, using the Giant Metrewave Radio Telescope (GMRT). Aims: The aim of the observations was to search for diffuse ultra-steep spectrum radio sources within the galaxy cluster Abell 2256. Methods: We have carried out GMRT 325 MHz radio continuum observations of Abell 2256. V, R and I band images of the cluster were taken with the 4.2 m William Herschel Telescope (WHT). Results: We have discovered three diffuse elongated radio sources located about 1 Mpc from the cluster center. Two are located to the west of the cluster center, and one to the southeast. The sources have a measured physical extent of 170, 140 and 240 kpc, respectively. The two western sources are also visible in deep low-resolution 115-165 MHz Westerbork Synthesis Radio Telescope (WSRT) images, although they are blended into a single source. For the combined emission of the blended source we find an extreme spectral index (α) of -2.05 ± 0.14 between 140 and 351 MHz. The extremely steep spectral index suggests these two sources are most likely the result of adiabatic compression of fossil radio plasma due to merger shocks. For the source to the southeast, we find that α < -1.45 between 325 and 1369 MHz. We did not find any clear optical counterparts to the radio sources in the WHT images. Conclusions: The discovery of the steep spectrum sources implies the existence of a population of faint diffuse radio sources in (merging) clusters with such steep spectra that they have gone unnoticed in higher frequency (⪆1 GHz) observations. Simply considering the timescales related to the AGN activity, synchrotron losses, and the presence of shocks, we find that most massive clusters should possess similar sources. An exciting possibility
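For readers unused to the convention, the spectral indices quoted above assume a power-law spectrum, so the two-frequency index follows directly from the measured flux densities:

```latex
S_\nu \propto \nu^{\alpha}, \qquad
\alpha = \frac{\ln\left(S_{\nu_1}/S_{\nu_2}\right)}{\ln\left(\nu_1/\nu_2\right)} .
```

With α = -2.05 between 140 and 351 MHz, the flux density falls by a factor of (351/140)^2.05 ≈ 6.6 across that interval, which gives a feel for how steep these sources are compared with typical synchrotron spectra.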
NASA Astrophysics Data System (ADS)
Beucher, R.; Brown, R. W.
2013-12-01
One of the most significant advances in interpreting thermochronological data is arguably our ability to extract information about the rate and trajectory of cooling over a range of temperatures, rather than relying on the simplifying assumption of a single closure temperature tied to a rate of monotonic cooling. Modern thermochronometry data, such as apatite fission track and (U-Th)/He analyses, are particularly amenable to this treatment, as acceptably well-calibrated kinetic models now exist for both systems. With ever larger data sets of this type being generated, jointly inverting data distributed over large areas offers new possibilities for constraining thermal and erosional histories over length scales approaching whole orogens and sub-continents. The challenge, though, is how to treat the joint inversion of multiple samples in a self-consistent manner while using all the available information contained in the data. We describe a new approach to this problem, called the Community of Family Circles (CFC) algorithm, which extracts information from spatially distributed apatite fission track ages (AFT) and track length distributions (TLD). The method is based on the rationale that the 3D geothermal field of the crust varies smoothly through space and time because of the efficiency of thermal diffusion. Our approach consists of seeking groups of spatially adjacent samples, or families, within a given circular radius for which a common thermal history is appropriate. The temperature offsets between individual time-temperature paths are determined relative to a low-pass filtered topographic surface, whose shape is assumed to mimic the shape of the isotherms in the partial annealing zone. This enables a single common thermal history to be shared, or interpolated, between the family members while still honouring the
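The circle-grouping step is simple enough to sketch. Assuming families are formed by a fixed-radius neighbour search and temperature offsets scale with height above a smoothed topographic surface, a minimal version might look like the following; the radius, geothermal gradient and smoothing scale are illustrative assumptions, not the calibrated values of the CFC algorithm.

```python
# Group samples into "family circles" and compute temperature offsets
# relative to a low-pass-filtered elevation surface (a smoothed DEM standing
# in for the shape of the isotherms in the partial annealing zone).
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import gaussian_filter

def family_offsets(xy, z, dem, dem_res, radius=10e3, grad=25e-3, sigma_km=20.0):
    """xy: sample coordinates (m); z: sample elevations (m); dem: elevation
    grid (m) at dem_res spacing; grad: geothermal gradient (K/m)."""
    smooth = gaussian_filter(dem, sigma=sigma_km * 1e3 / dem_res)  # low-pass DEM
    tree = cKDTree(xy)
    families = tree.query_ball_point(xy, r=radius)    # indices within each circle
    ij = (xy / dem_res).astype(int)                   # nearest grid cell per sample
    rel_height = z - smooth[ij[:, 0], ij[:, 1]]       # height above smoothed surface
    dT = -grad * rel_height                           # hotter below the surface
    return families, dT

# Toy usage on a synthetic 100 km x 100 km area
rng = np.random.default_rng(2)
dem_res = 1000.0
dem = gaussian_filter(rng.normal(1500, 400, (100, 100)), 5)
xy = rng.uniform(0, 99e3, size=(50, 2))
idx = (xy / dem_res).astype(int)
z = dem[idx[:, 0], idx[:, 1]]
families, dT = family_offsets(xy, z, dem, dem_res)
print(len(families[0]), "samples in the first family; dT range:", dT.min(), dT.max())
```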
The Noble-Abel Stiffened-Gas equation of state
NASA Astrophysics Data System (ADS)
Le Métayer, Olivier; Saurel, Richard
2016-04-01
Hyperbolic two-phase flow models have shown excellent ability for the resolution of a wide range of applications ranging from interfacial flows to fluid mixtures with several velocities. These models account for wave propagation (acoustic and convective) and consist of hyperbolic systems of partial differential equations. In this context, each phase is compressible and needs an appropriate convex equation of state (EOS). The EOS must be simple enough for intensive computations as well as for the treatment of boundary conditions. It must also be accurate, which is challenging to reconcile with simplicity. In the present approach, each fluid is governed by a novel EOS named “Noble-Abel stiffened gas,” this formulation being a significant improvement over the popular “Stiffened Gas (SG)” EOS. It is a combination of the so-called “Noble-Abel” and “stiffened gas” equations of state that adds repulsive effects to the SG formulation. The determination of the various thermodynamic functions and associated coefficients is the aim of this article. We first use thermodynamic considerations to determine the different state functions such as the specific internal energy, enthalpy, and entropy. Then we propose to determine the associated coefficients for a liquid in the presence of its vapor. The EOS parameters are determined from experimental saturation curves. Some examples of liquid-vapor fluids are examined and associated parameters are computed with the help of the present method. Comparisons between analytical and experimental saturation curves show very good agreement for wide ranges of temperature for both liquid and vapor.
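As the name suggests, the pressure law adds the Noble-Abel covolume to the stiffened-gas form; in terms of specific volume v and specific internal energy e it is commonly written

```latex
p(v, e) = \frac{(\gamma - 1)\,(e - q)}{v - b} - \gamma\, p_\infty ,
```

where γ, p∞, q and b are fluid-specific constants. Setting b = 0 recovers the stiffened-gas EOS, while setting p∞ = 0 recovers the Noble-Abel law, which is how the formulation combines the two.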