Abel Inversion of Deflectometric Measurements in Dynamic Flows
NASA Technical Reports Server (NTRS)
Agrawal, Ajay K.; Albers, Burt W.; Griffin, DeVon W.
1999-01-01
We present an Abel-inversion algorithm to reconstruct mean and rms refractive-index profiles from spatially resolved statistical measurements of the beam-deflection angle in time-dependent, axisymmetric flows. An oscillating gas-jet diffusion flame was investigated as a test case for applying the algorithm. Experimental data were obtained across the whole field by a rainbow schlieren apparatus. Results show that simultaneous multipoint measurements are necessary to reconstruct the rms refractive index accurately.
Abel inversion method for cometary atmospheres.
NASA Astrophysics Data System (ADS)
Hubert, Benoit; Opitom, Cyrielle; Hutsemekers, Damien; Jehin, Emmanuel; Munhoven, Guy; Manfroid, Jean; Bisikalo, Dmitry V.; Shematovich, Valery I.
2016-04-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight joining the observing instrument and the gas of the coma. This integration is the so-called Abel transform of the local emission rate. We develop a method specifically adapted to the inversion of the Abel transform of cometary emissions, which retrieves the radial profile of the emission rate of any unabsorbed emission, under the hypothesis of spherical symmetry of the coma. The method uses weighted least squares fitting and analytical results. A Tikhonov regularization technique is applied to reduce the possible effects of noise and ill-conditioning, and standard error propagation techniques are implemented. Several theoretical tests of the inversion technique are carried out to show its validity and robustness, and show that the method is only weakly dependent on any constant offset added to the data, which reduces the dependence of the retrieved emission rate on the background subtraction. We apply the method to observations of three different comets observed using the TRAPPIST instrument: 103P/Hartley 2, C/2012 F6 (Lemmon) and C/2013 A1 (Siding Spring). We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the emission rates derived from the observed flux of CN emission at 387 nm and from the C2 emission at 514.1 nm of comet Siding Spring both present an easily identifiable shoulder that corresponds to the separation between pre- and post-outburst gas. As a general result, we show that diagnosing properties and features of the coma using the emission rate is easier than directly using the observed flux. We also determine the parameters of a Haser model fitting the inverted data and fitting the line-of-sight integrated observation, for which we provide the exact analytical expression of the line-of-sight integration
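Under spherical symmetry, the observed column emission is the Abel transform of the local emission rate; a generic onion-peeling discretization and its direct inversion can be sketched in a few lines. The grid, test profile, and function names below are illustrative, not the paper's weighted least-squares method:

```python
import numpy as np

def abel_forward_matrix(r):
    """Discrete Abel transform: (A f)[i] is the chord integral at impact
    parameter r[i] through concentric shells on which f is constant."""
    n = len(r)
    edges = np.append(r, 2.0 * r[-1] - r[-2])   # add an outer shell edge
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            hi, lo = edges[j + 1], max(edges[j], r[i])
            A[i, j] = 2.0 * (np.sqrt(hi**2 - r[i]**2) - np.sqrt(lo**2 - r[i]**2))
    return A

r = np.linspace(0.0, 1.0, 50)
f_true = np.exp(-4.0 * r**2)      # radial emission-rate profile (illustrative)
A = abel_forward_matrix(r)
g = A @ f_true                    # line-of-sight integrated "observations"
f_rec = np.linalg.solve(A, g)     # upper-triangular system: peel outer shells first
```

Because the matrix is upper triangular, the solve amounts to peeling shells from the outside in; with noisy data this direct inversion amplifies errors, which is exactly why the paper adds regularization.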
NASA Astrophysics Data System (ADS)
Chou, Min Yang; Lin, Charles C. H.; Tsai, Ho Fang; Lin, Chi Yen
2017-01-01
The Abel inversion of ionospheric electron density profiles under the assumption of spherical symmetry, as applied to radio occultation soundings, can introduce large systematic errors, or even artifacts, if the occultation rays traverse regions with strong horizontal gradients in electron density. Aided Abel inversions have been proposed that consider the asymmetry ratio derived from ionospheric total electron content (TEC) or peak density (NmF2) of reconstructed observation maps, since knowledge of the horizontal asymmetry in ambient ionospheric density can mitigate the inversion error. Here we propose a new aided Abel inversion using three-dimensional time-dependent electron density (Ne) based on climatological maps constructed from previous observations, which has the advantage of providing altitudinal information on the horizontal asymmetry. The improvement of the proposed Ne-aided Abel inversion, and comparisons with electron density profiles inverted from the NmF2- and TEC-aided inversions, are studied using observation system simulation experiments. Comparison results show that all three aided Abel inversions improve the ionospheric profiling by mitigating the artificial plasma caves and negative electron densities in the daytime E region. The equatorial ionization anomaly crests in the F region become more distinct. The statistical results show that the Ne-aided Abel inversion has smaller mean and RMS error percentages above 250 km altitude, while the performances of all aided Abel inversions are similar below 250 km.
Bayesian Abel Inversion in Quantitative X-Ray Radiography
Howard, Marylesa; Fowler, Michael; Luttman, Aaron; Mitchell, Stephen E.; Hock, Margaret C.
2016-05-19
A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
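The hierarchical model itself requires sampling, but its core Gaussian building block, the closed-form posterior for a linear forward operator with fixed noise and prior precisions, can be sketched directly (the names and the simplification to scalar precisions are ours, not the paper's):

```python
import numpy as np

def gaussian_posterior(A, g, noise_prec, prior_prec):
    """Closed-form posterior for g = A f + e, with e ~ N(0, I/noise_prec)
    and prior f ~ N(0, I/prior_prec). Returns posterior mean and covariance;
    the diagonal of the covariance gives pointwise uncertainty estimates."""
    n = A.shape[1]
    post_prec = noise_prec * (A.T @ A) + prior_prec * np.eye(n)
    cov = np.linalg.inv(post_prec)
    mean = noise_prec * cov @ (A.T @ g)
    return mean, cov

# Tiny demonstration with an identity forward operator:
g = np.array([1.0, 2.0, 3.0])
mean, cov = gaussian_posterior(np.eye(3), g, noise_prec=4.0, prior_prec=1.0)
```

In the paper the precisions themselves carry priors, so the joint posterior over profile and hyperparameters is explored numerically rather than written in closed form as above.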
Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3
NASA Astrophysics Data System (ADS)
Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.
2007-05-01
In this paper the first results of ionospheric tomographic inversion are presented, using the improved Abel transform on the COSMIC/FORMOSAT-3 constellation of six LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this context, the classical approach of the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is a constant value over the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in problematic areas of the ionosphere such as the equatorial region) can significantly affect the electron density profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of this technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function which carries all the height dependency, while the VTEC data keeps the horizontal dependency. Indeed, it is more realistic to assume that this shape function depends only on the height and to use VTEC information to account for the horizontal variation, rather than considering spherical symmetry of the electron density function as in the classical approach of the Abel inversion. Since the above-mentioned improved Abel inversion technique has already been tested and proven to be a useful
Serre duality, Abel's theorem, and Jacobi inversion for supercurves over a thick superpoint
NASA Astrophysics Data System (ADS)
Rothstein, Mitchell J.; Rabin, Jeffrey M.
2015-04-01
The principal aim of this paper is to extend Abel's theorem to the setting of complex supermanifolds of dimension 1 | q over a finite-dimensional local supercommutative C-algebra. The theorem is proved by establishing a compatibility of Serre duality for the supercurve with Poincaré duality on the reduced curve. We include an elementary algebraic proof of the requisite form of Serre duality, closely based on the account of the reduced case given by Serre in Algebraic groups and class fields, combined with an invariance result for the topology on the dual of the space of répartitions. Our Abel map, taking Cartier divisors of degree zero to the dual of the space of sections of the Berezinian sheaf, modulo periods, is defined via Penkov's characterization of the Berezinian sheaf as the cohomology of the de Rham complex of the sheaf D of differential operators. We discuss the Jacobi inversion problem for the Abel map and give an example demonstrating that if n is an integer sufficiently large that the generic divisor of degree n is linearly equivalent to an effective divisor, this need not be the case for all divisors of degree n.
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
Inversion Algorithms for Geophysical Problems
1987-12-16
Lanzano, Paolo. Final Report, Naval Research Laboratory, Washington, DC 20375-5000. NRL Memorandum Report 6138.
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-12-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
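Tikhonov's regularization, as used here, replaces the unstable direct solve of the discretized first-kind equation with a penalized least-squares problem. A minimal sketch on a generic ill-posed smoothing kernel follows; the operator, smoothness penalty, and parameter values are illustrative choices, not the author's implementation:

```python
import numpy as np

def tikhonov_solve(A, g, lam):
    """Minimize ||A f - g||^2 + lam^2 ||L f||^2 with L a second-difference
    (smoothness) operator, solved via the normal equations."""
    n = A.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ g)

# Ill-posed test problem: a first-kind equation with a Gaussian smoothing kernel.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01) * (x[1] - x[0])
f_true = np.sin(2.0 * np.pi * x)
g = A @ f_true + rng.normal(0.0, 1e-4, x.size)   # noisy data

f_naive = np.linalg.solve(A, g)       # blows up: A is severely ill-conditioned
f_reg = tikhonov_solve(A, g, lam=1e-4)
```

Even a tiny noise level destroys the naive solve, while the penalized solve stays close to the true function; choosing the penalty operator and weight is exactly the target-smoothness question the abstract discusses.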
Multichannel algorithms for seismic reflectivity inversion
NASA Astrophysics Data System (ADS)
Wang, Ruo; Wang, Yanghua
2017-02-01
Seismic reflectivity inversion is a deconvolution process for quantitatively extracting the reflectivity series and depicting the layered subsurface structure. The conventional method is a single-channel inversion and cannot clearly characterise stratified structures, especially from seismic data with a low signal-to-noise ratio. Because it is implemented on a trace-by-trace basis, the continuity along reflections in the original seismic data deteriorates in the inversion results. We propose here multichannel inversion algorithms that exploit the information of adjacent traces during seismic reflectivity inversion. Explicitly, we incorporate a spatial prediction filter into the conventional Cauchy-constrained inversion method. We verify the validity and feasibility of the method using field data experiments and find improved lateral continuity and clearer structures achieved by the multichannel algorithms. Finally, we compare the performance of three multichannel algorithms and assess their effectiveness based on the lateral coherency and structure characterisation of the inverted reflectivity profiles, as well as the residual energy of the seismic data.
Magnetotelluric inversion via reverse time migration algorithm of seismic data
Ha, Taeyoung (E-mail: tyha@math.snu.ac.kr); Shin, Changsoo (E-mail: css@model.snu.ac.kr)
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nédélec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.
1989-01-01
The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.
Index Theory-Based Algorithm for the Gradiometer Inverse Problem
2015-03-28
Anderson, Robert C.; Fitton, Jonathan W.
Abstract: We present an Index Theory-based gravity gradiometer inverse problem algorithm. This algorithm relates changes in the index value, computed on a closed curve containing a line field generated by the positive eigenvector of the gradiometer tensor, to the closeness of fit of the proposed inverse solution to the mass and
A quantitative comparison of soil moisture inversion algorithms
NASA Technical Reports Server (NTRS)
Zyl, J. J. van; Kim, Y.
2001-01-01
This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.
Tissue elasticity measurement method using forward and inversion algorithms
NASA Astrophysics Data System (ADS)
Lee, Jong-Ha; Won, Chang-Hee; Park, Hee-Jun; Ku, Jeonghun; Heo, Yun Seok; Kim, Yoon-Nyun
2013-03-01
Elasticity is an important indicator of tissue health, with increased stiffness pointing to an increased risk of cancer. We investigated a tissue elasticity measurement method using forward and inversion algorithms for early breast tumor identification. An optics-based elasticity measurement system was developed to capture images of embedded lesions using the total internal reflection principle. From the elasticity images, we developed a novel method to estimate the elasticity of an embedded lesion using a 3-D finite-element-model-based forward algorithm and a neural-network-based inversion algorithm. The experimental results showed that the proposed characterization method can differentiate between benign and malignant breast lesions.
Some properties of probability inversion algorithms to elicit expert opinion.
NASA Astrophysics Data System (ADS)
Lark, Murray
2015-04-01
Probability inversion methods have been developed to infer underlying expert utility functions from rankings that experts offer of subsets of scenarios. The method assumes that the expert ranking reflects an underlying utility, which can be modelled as a function of predictive covariates. This is potentially useful as a method for the extraction of expert opinions for prediction in new scenarios. Two particular algorithms are considered here, the IPF algorithm and the PURE algorithm. The former always converges for consistent sets of rankings and finds a solution which minimizes the mutual information of the estimated utilities and an initial random sample of proposed utilities drawn in the algorithm. In this poster I report some empirical studies on the probability inversion procedure, investigating the effects of the size of the expert panel, the consistency and quality of the expert panel and the validity of the predictive covariates. These results have practical implications for the design of elicitation by probability inversion methods.
Rayleigh wave inversion using heat-bath simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng
2016-11-01
The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is done mainly by inversion of the phase-velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, making local search methods (LSMs) unsuitable as the inversion algorithm. In this study, a new strategy is proposed based on a variant of the simulated annealing (SA) algorithm. SA, which simulates the annealing of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: perturbation of the model and Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of heat-bath SA, two models are created, and both noise-free and noisy synthetic data are generated. The Levenberg-Marquardt (LM) algorithm and a variant of SA known as the fast simulated annealing (FSA) algorithm are also adopted for comparison. The inversion results for the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, which proves that the proposed scheme is applicable.
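The one-step heat-bath update can be sketched generically: each model parameter is redrawn from the Boltzmann distribution over a grid of candidate values, conditioned on the current values of the others, with no separate Metropolis accept/reject step. The quadratic misfit below is a toy stand-in for a dispersion-curve residual:

```python
import numpy as np

def heat_bath_sa(misfit, bounds, n_grid=41, n_iter=250, t0=1.0, cooling=0.95, seed=0):
    """One-step (heat-bath) SA: instead of perturb-then-accept, each parameter
    is resampled from exp(-E/T) over a grid of candidates; T is lowered each sweep."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    m = bounds.mean(axis=1)            # start at the centre of the search box
    temp = t0
    for _ in range(n_iter):
        for k, (lo, hi) in enumerate(bounds):
            cand = np.linspace(lo, hi, n_grid)
            e = np.array([misfit(np.concatenate([m[:k], [c], m[k + 1:]]))
                          for c in cand])
            p = np.exp(-(e - e.min()) / temp)   # Boltzmann weights
            m[k] = rng.choice(cand, p=p / p.sum())
        temp *= cooling
    return m

# Toy two-parameter misfit standing in for a dispersion-curve residual.
target = np.array([1.2, 0.4])
best = heat_bath_sa(lambda m: float(np.sum((m - target) ** 2)),
                    bounds=[(0.0, 2.0), (0.0, 2.0)])
```

At high temperature the sampler explores the whole grid; as the temperature falls, the Boltzmann weights concentrate on the grid minimum, so no explicit rejection step is needed.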
Non-thermal Hard X-Ray Emission from Coma and Several Abell Clusters
Correa, C
2004-02-05
We report results of hard X-ray observations of the clusters Coma, Abell 496, Abell 754, Abell 1060, Abell 1367, Abell 2256 and Abell 3558 using RXTE data from the NASA HEASARC public archive. Specifically, we searched for clusters with hard X-ray emission that can be fitted by a power law, because this would indicate that the cluster is a source of non-thermal emission. We assume the emission mechanism proposed by Vahé Petrosian, in which intercluster space contains clouds of relativistic electrons that themselves create a magnetic field and emit radio synchrotron radiation. These relativistic electrons inverse-Compton scatter microwave background photons up to hard X-ray energies. The clusters found to be sources of non-thermal hard X-rays are Coma, Abell 496, Abell 754 and Abell 1060.
1-Dimension magnetotelluric data inversion using MOEA/D algorithm
NASA Astrophysics Data System (ADS)
Pramudiana; Sungkono
2017-01-01
Magnetotelluric (MT) data are used to derive resistivity images of the subsurface. The subsurface resistivity is obtained by inversion of MT data. Generally, MT data contain two parts: apparent resistivity and phase, or real and imaginary parts. Inversion of MT data for reconstructing the resistivity of each layer usually minimizes a single objective (a combination of the two MT data parameters) using a global or local optimization method. Nevertheless, single-objective optimization has several disadvantages: (1) a weight value is needed to combine the two MT data parameters, and this weight depends on the amplitudes of both data types; (2) there is no validation of the inversion results. In this research, inversion of MT data to estimate the 1D subsurface resistivity uses a multi-objective evolutionary algorithm based on decomposition (MOEA/D) to minimize the root mean square error (RMSE) between calculated and observed data for the apparent resistivity and phase simultaneously. The algorithm has been applied to synthetic and field data. The results show that the MOEA/D algorithm is robust and accurate in determining subsurface resistivity and lithology.
[MEG inverse solution using Gauss-Newton algorithm modified by Moore-Penrose inversion].
Li, J
2001-06-01
In magnetoencephalogram (MEG) basic studies, estimating magnetic source parameters by inverse solution is an important issue. The magnetic field equations are nonlinear, so explicit solutions are difficult to obtain; however, optimization methods are available for this parameter estimation. Among commonly used nonlinear local optimization algorithms, the Gauss-Newton algorithm converges quickly. When this algorithm is used, the possible singularity of the Jacobian matrix at the least-squares minimum must be considered carefully: if the matrix is singular, the equation for the search direction has no general solution. One way to overcome this problem is to use the negative gradient as the search direction, but this may slow convergence. Another way, known as the Levenberg-Marquardt algorithm, makes the matrix non-singular by adding damping factors to it. In this paper we utilize the Moore-Penrose inverse to solve the iterative search-direction equation. In the appendix we demonstrate that the search direction obtained by the proposed method is valid. Computer simulation also demonstrates that, with a reasonable selection of initial iterative values, the modified Gauss-Newton algorithm is effective for the MEG inverse solution in cases with one or two source dipoles.
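The proposed modification, solving the search-direction equation with the Moore-Penrose inverse so that a singular Jacobian no longer causes a breakdown, can be sketched on a toy least-squares problem whose Jacobian is rank-deficient everywhere (the toy problem is ours, chosen only to exercise the singular case, and is not the MEG field model):

```python
import numpy as np

def gauss_newton_pinv(residual, jacobian, x0, n_iter=20):
    """Gauss-Newton iteration whose search direction d solves J d = -r through
    the Moore-Penrose pseudoinverse, so a singular Jacobian still yields the
    minimum-norm direction instead of a breakdown of the normal equations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        d = -np.linalg.pinv(jacobian(x)) @ residual(x)
        x = x + d
    return x

# Toy problem whose Jacobian has rank 1 everywhere, so J^T J is singular.
def residual(x):
    s = x[0] + x[1]
    return np.array([s - 3.0, s**2 - 9.0])

def jacobian(x):
    s = x[0] + x[1]
    return np.array([[1.0, 1.0], [2.0 * s, 2.0 * s]])

x_hat = gauss_newton_pinv(residual, jacobian, x0=[0.0, 0.0])
```

Here the plain normal equations J^T J d = -J^T r have no unique solution at any iterate, yet the pseudoinverse step converges to the minimum-norm solution x = (1.5, 1.5) on the solution line x0 + x1 = 3.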
Application of SAGE III inversion algorithm to SALOMON measurements
NASA Astrophysics Data System (ADS)
Bazureau, A.; Brogniez, Colette; Renard, J.-B.
2001-01-01
The SAGE III instrument, whose first flight is planned for launch in Winter 2000-2001 on the polar-orbiting Meteor-3M spacecraft, is part of NASA's Earth Observing System (EOS). A preliminary outcome of the LOA inversion has been obtained from simulated transmission profiles for the retrieval of daytime and nighttime constituents such as O3, NO2, NO3, OClO and aerosols, with good quality. An opportunity for testing the inversion algorithm on real measurements is offered by the SALOMON team of the LPCE to validate the method. The purpose of this paper is to present the LOA inversion of real SALOMON measurements, performed in February 2000 from Kiruna, Sweden. The retrieved gas densities and aerosol extinction profiles are compared to the corresponding values retrieved by the SALOMON team.
A comparison of three inverse treatment planning algorithms.
Holmes, T; Mackie, T R
1994-01-01
Three published inverse treatment planning algorithms for physical optimization of external beam radiotherapy are compared. All three algorithms attempt to minimize a quadratic objective function of the dose distribution. It is shown that the algorithms are based on the common framework of Newton's method of multi-dimensional function minimization. The approximations used within this framework to obtain the different algorithms are described. The use of these algorithms requires that the number of weights of elemental dose distributions be equal to the number of sample points taken in the dose volume. The primary factor in determining how the algorithms are implemented is the dose computation model. Two of the algorithms use pencil beam dose models and therefore directly optimize individual pencil beam weights, whereas the third algorithm is implemented to optimize groups of pencil beams, each group converging upon a common point. All dose computation models assume that the irradiated medium is homogeneous. It is shown that the two different implementations produce similar results for the simple optimization problem of conforming dose to a convex target shape. Complex optimization problems consisting of non-convex target shapes and dose limiting structures are shown to require a pencil beam optimization method.
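The shared structure the comparison identifies, minimizing a quadratic dose objective over nonnegative pencil-beam weights, can be illustrated with a projected-gradient sketch. The dose matrix and dimensions below are invented for the demonstration; the compared algorithms are Newton-type variants of this same minimization:

```python
import numpy as np

def optimize_beam_weights(D, d_presc, n_iter=50000):
    """Minimize the quadratic objective ||D w - d||^2 over nonnegative
    pencil-beam weights w by projected gradient descent with a 1/L step."""
    w = np.zeros(D.shape[1])
    H = D.T @ D
    lip = np.linalg.eigvalsh(H)[-1]     # largest eigenvalue: curvature bound
    for _ in range(n_iter):
        grad = H @ w - D.T @ d_presc    # one-half the objective's gradient
        w = np.maximum(w - grad / lip, 0.0)   # gradient step, then project to w >= 0
    return w

rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(30, 8))   # elemental (pencil-beam) dose matrix
w_true = rng.uniform(0.5, 1.5, size=8)    # nonnegative "true" weights
d_presc = D @ w_true                      # prescribed dose at the sample points
w_hat = optimize_beam_weights(D, d_presc)
```

With a consistent prescription the iteration recovers the generating weights; for conflicting prescriptions (the realistic case) it converges to the constrained least-squares compromise instead.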
Inverse transport calculations in optical imaging with subspace optimization algorithms
NASA Astrophysics Data System (ADS)
Ding, Tian; Ren, Kui
2014-09-01
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
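The subspace split at the heart of the method can be illustrated on a generic linear system: the component of the unknown lying in the dominant singular subspace is recovered analytically from the data, leaving only the complement to iterative minimization. The operator and dimensions below are arbitrary stand-ins, not a transport model:

```python
import numpy as np

# Split a linear unknown into a dominant (low-frequency) singular-subspace
# component, recovered analytically, and a complement left to minimization.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 40))        # stand-in for the linearized forward map
x_true = rng.normal(size=40)
g = A @ x_true                       # synthetic data

U, s, Vt = np.linalg.svd(A)
k = 10                               # dimension of the dominant subspace
coef_low = (U[:, :k].T @ g) / s[:k]  # analytic recovery of the leading coefficients
x_low = Vt[:k].T @ coef_low          # component of the unknown in span(V_k)

# x_low equals the orthogonal projection of x_true onto that subspace:
proj = Vt[:k].T @ (Vt[:k] @ x_true)
```

The analytically recovered part is exact and cheap; the benefit in the paper's setting is that the subsequent minimization only has to search the much better-conditioned complement.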
Eddy-current NDE inverse problem with sparse grid algorithm
NASA Astrophysics Data System (ADS)
Zhou, Liming; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Bernacchi, William; Aldrin, John C.; Forsyth, David; Lindgren, Eric
2016-02-01
In model-based inverse problems, unknown parameters (such as length, width, and depth) need to be estimated. When the unknown parameters are few, conventional mathematical methods are suitable, but an increasing number of unknown parameters makes the computation heavy. To reduce this computational burden, we used the sparse grid algorithm in our work. As a result, we obtain a powerful interpolation method that requires significantly fewer support nodes than conventional interpolation on a full grid.
A fast algorithm for sparse matrix computations related to inversion
Li, S.; Wu, W.; Darve, E.
2013-06-01
We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices, up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round
Inverse problem of HIV cell dynamics using Genetic Algorithms
NASA Astrophysics Data System (ADS)
González, J. A.; Guzmán, F. S.
2017-01-01
In order to describe the cell dynamics of T-cells in a patient infected with HIV, we use a flavour of Perelson's model. This is a non-linear system of Ordinary Differential Equations that describes the evolution of the concentrations of healthy, latently infected, and actively infected T-cells, together with the free virus. Different parameters in the equations give different dynamics. Assuming the concentrations of these cell types are known for a particular patient, the inverse problem consists in estimating the parameters in the model. We solve this inverse problem using a Genetic Algorithm (GA) that minimizes the error between the solutions of the model and the data from the patient. These errors depend on the parameters of the GA, such as the mutation rate and the population size, although a detailed analysis of this dependence will be described elsewhere.
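The GA-based parameter estimation can be illustrated with a deliberately simplified, hypothetical example: a single-parameter exponential viral-decay model (a toy stand-in for Perelson's full ODE system) whose clearance rate is recovered from noisy synthetic data by a minimal real-coded GA. All names and numerical values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def viral_load(c, t, v0=1000.0):
    """Closed-form solution of dV/dt = -c*V (a toy stand-in for the full model)."""
    return v0 * np.exp(-c * t)

# Synthetic "patient data": true clearance rate c = 0.5 per day, 5% noise.
t_obs = np.linspace(0.0, 10.0, 20)
data = viral_load(0.5, t_obs) * (1 + 0.05 * rng.standard_normal(t_obs.size))

def fitness(c):
    """Sum-of-squares error between model prediction and patient data."""
    return np.sum((viral_load(c, t_obs) - data) ** 2)

# Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0.01, 2.0, size=40)
for generation in range(100):
    scores = np.array([fitness(c) for c in pop])
    # Tournament selection: pick the better of two random individuals.
    i, j = rng.integers(0, 40, (2, 40))
    parents = np.where(scores[i] < scores[j], pop[i], pop[j])
    # Blend crossover between consecutive parents.
    alpha = rng.uniform(0, 1, 40)
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1)
    # Gaussian mutation with 10% probability.
    mutate = rng.uniform(0, 1, 40) < 0.1
    children = np.where(mutate, children + 0.05 * rng.standard_normal(40), children)
    # Elitism: keep the best individual seen so far.
    children[0] = pop[np.argmin(scores)]
    pop = np.clip(children, 1e-6, None)

best = pop[np.argmin([fitness(c) for c in pop])]
print(f"estimated clearance rate: {best:.3f}")  # should land near the true value 0.5
```

The same loop generalizes to the multi-parameter case by letting each individual be a vector of model parameters.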
The genetic algorithm: A robust method for stress inversion
NASA Astrophysics Data System (ADS)
Thakur, Prithvi; Srivastava, Deepak C.; Gupta, Pravin K.
2017-01-01
The stress inversion of geological or geophysical observations is a nonlinear problem. In most existing methods, it is solved by linearization, under certain assumptions. These linear algorithms not only oversimplify the problem but are also vulnerable to entrapment of the solution in a local optimum. We propose the use of a nonlinear heuristic technique, the genetic algorithm, which searches for the global optimum without making any linearizing assumption or simplification. The algorithm mimics the natural evolutionary processes of selection, crossover and mutation, and minimizes a composite misfit function to search for the global optimum, the fittest stress tensor. The validity and efficacy of the algorithm are demonstrated by a series of tests on synthetic and natural fault-slip observations in different tectonic settings and also in situations where the observations are noisy. It is shown that the genetic algorithm is superior to other commonly practised methods, in particular, in those tectonic settings where none of the principal stresses is directed vertically and/or the given data set is noisy.
Development of an Inverse Algorithm for Resonance Inspection
Lai, Canhai; Xu, Wei; Sun, Xin
2012-10-01
Resonance inspection (RI), which employs the shift in natural frequency spectra between the good and the anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages over other contemporary NDE methods, such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry. It has already been widely used in the automobile industry for quality inspection of safety critical parts. Unlike some conventionally used NDE methods, the current RI technology is unable to provide details, i.e. location, dimension, or type, of the flaws in discrepant parts. This limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone shaped stainless steel samples, with and without controlled flaws, are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the developed algorithm, and that the prediction accuracy decreases with increasing flaw number and decreasing distance between flaws.
Stochastic optimization algorithm for inverse modeling of air pollution
NASA Astrophysics Data System (ADS)
Yeo, Kyongmin; Hwang, Youngdeok; Liu, Xiao; Kalagnanam, Jayant
2016-11-01
A stochastic optimization algorithm to estimate a smooth source function from a limited number of observations is proposed in the context of air pollution, where the source-receptor relation is given by an advection-diffusion equation. First, a smooth source function is approximated by a set of Gaussian kernels on a rectangular mesh system. Then, the generalized polynomial chaos (gPC) expansion is used to represent the model uncertainty due to the choice of the mesh system. It is shown that the convolution of gPC basis and the Gaussian kernel provides hierarchical basis functions for a spectral function estimation. The spectral inverse model is formulated as a stochastic optimization problem. We propose a regularization strategy based on the hierarchical nature of the basis polynomials. It is shown that the spectral inverse model is capable of providing a good estimate of the source function even when the number of unknown parameters (m) is much larger than the number of data (n), m/n > 50.
Aerosol Models for the CALIPSO Lidar Inversion Algorithms
NASA Technical Reports Server (NTRS)
Omar, Ali H.; Winker, David M.; Won, Jae-Gwang
2003-01-01
We use measurements and models to develop aerosol models for use in the inversion algorithms for the Cloud Aerosol Lidar and Imager Pathfinder Spaceborne Observations (CALIPSO). Radiance measurements and inversions of the AErosol RObotic NETwork (AERONET) [1, 2] are used to group global atmospheric aerosols using optical and microphysical parameters. This study uses more than 10^5 records of radiance measurements, aerosol size distributions, and complex refractive indices to generate the optical properties of the aerosol at more than 200 sites worldwide. These properties together with the radiance measurements are then classified using classical clustering methods to group the sites according to the type of aerosol with the greatest frequency of occurrence at each site. Six significant clusters are identified: desert dust, biomass burning, urban industrial pollution, rural background, marine, and dirty pollution. Three of these are used in the CALIPSO aerosol models to characterize desert dust, biomass burning, and polluted continental aerosols. The CALIPSO aerosol model also uses the coarse mode of desert dust and the fine mode of biomass burning to build a polluted dust model. For marine aerosol, the CALIPSO aerosol model uses measurements from the SEAS experiment [3]. In addition to categorizing the aerosol types, the cluster analysis provides all the column optical and microphysical properties for each cluster.
Direct Fourier Inversion Reconstruction Algorithm for Computed Laminography.
Voropaev, Alexey; Myagotin, Anton; Helfen, Lukas; Baumbach, Tilo
2016-05-01
Synchrotron radiation computed laminography (CL) was developed to complement conventional computed tomography as a non-destructive 3D imaging method for the inspection of flat thin objects. Recent progress in hardware at synchrotron sources allows one to record internal evolution of specimens at the micrometer scale and sub-second range but also requires increased reconstruction speed to follow structural changes online. A 3D image of the sample interior is usually reconstructed by the well-established filtered backprojection (FBP) approach. Despite great success in reducing reconstruction time via parallel computation, the FBP algorithm still remains a time-consuming procedure. A promising way to significantly shorten computation time is to perform backprojection directly in the frequency domain (a direct Fourier inversion approach). The corresponding algorithms are rarely considered in the literature because of poor performance or inferior reconstruction quality resulting from inaccurate interpolation in the Fourier domain. In this paper, we derive a Fourier-based reconstruction equation designed for the CL scanning geometry. Furthermore, we outline the translation of the continuous solution to a discrete version, which utilizes 3D sinc interpolation. A projection resampling technique allowing for the reduction of the expensive interpolation to its 1D version is proposed. A series of numerical experiments confirms that the resulting image quality is well comparable with the FBP approach while reconstruction time is drastically reduced.
New inverse synthetic aperture radar algorithm for translational motion compensation
NASA Astrophysics Data System (ADS)
Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.
1991-10-01
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must first be accurately estimated from the data and compensated for before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
An Algorithm for image removals and decompositions without inverse matrices
NASA Astrophysics Data System (ADS)
Yi, Dokkyun
2009-03-01
Partial Differential Equation (PDE) based methods in image processing have been actively studied in the past few years. One of the effective methods is the method based on a total variation introduced by Rudin, Osher and Fatemi (ROF) [L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992) 259-268]. This method is a well known edge preserving model and a useful tool for image removals and decompositions. Unfortunately, this method has a nonlinear term in the equation which may yield an inaccurate numerical solution. To overcome the nonlinearity, a fixed point iteration method has been widely used. The nonlinear system based on the total variation is induced from the ROF model, and the fixed point iteration method to solve the ROF model was introduced by Dobson and Vogel [D.C. Dobson, C.R. Vogel, Convergence of an iterative method for total variation denoising, SIAM J. Numer. Anal. 34 (5) (1997) 1779-1791]. However, some methods had to compute inverse matrices, which led to round-off error. To address this problem, we developed an efficient method for solving the ROF model. We construct a sequence as in Richardson's method, using a fixed point iteration to evade the nonlinear equation. This approach does not require the computation of inverse matrices. The main idea is to construct a direction vector that reduces the error at each iteration step; in other words, the next iterate is built from the computed error and the direction vector so as to reduce the error. We show that our method works well in theory. In numerical experiments, we show the results of the proposed method and compare them with the results of D. Dobson and C. Vogel, and then we confirm the superiority of our method.
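Stripped of the ROF context, the Richardson-style idea of iterating with a residual-based direction vector instead of computing inverse matrices can be sketched on a generic linear system. This is only a minimal illustration under that assumption, not the authors' ROF solver.

```python
import numpy as np

def richardson(A, b, omega, tol=1e-10, max_iter=10_000):
    """Solve A x = b without forming A^{-1}: x_{k+1} = x_k + omega * (b - A x_k).

    Converges for symmetric positive definite A when 0 < omega < 2 / lambda_max(A).
    """
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x          # residual, used as the direction vector
        if np.linalg.norm(r) < tol:
            break
        x = x + omega * r      # step along the residual; no matrix inverse needed
    return x

# Small SPD test system (eigenvalues of A are about 2.38 and 4.62).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = richardson(A, b, omega=0.3)
```

Each iteration costs only one matrix-vector product, which is the point of avoiding explicit inversion.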
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A class of direct inverse decomposition algorithms for solving systems of linear equations is presented. Their behavior in the presence of round-off errors is analyzed. It is shown that under some mild restrictions on their implementation, the direct inverse decomposition algorithms presented are equivalent in terms of the error complexity measures.
NASA Astrophysics Data System (ADS)
Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu
2015-09-01
We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms, and we summarize the rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
Modelling and genetic algorithm based optimisation of inverse supply chain
NASA Astrophysics Data System (ADS)
Bányai, T.
2009-04-01
(Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding the unnecessary environmental risk and land use caused by needlessly large collection supply chains in recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, by means of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products not recycled (treated or reused) in time, the number of supply chain objects and the length of transportation routes. The objective function is the minimization of the total cost taking into consideration the constraints. Although much research work has discussed the design of supply chains [8], most of it concentrates on linear cost functions; in the case of this model, non-linear cost functions were used.
The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a
Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.
ERIC Educational Resources Information Center
Jacquot, Raymond G.; And Others
1985-01-01
Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
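The Gaver-Stehfest approximation f(t) ≈ (ln 2 / t) Σ V_k F(k ln 2 / t) is short enough to sketch directly; the following is a standard textbook implementation, where the even term count N plays the role of the word-length-limited parameter discussed above.

```python
from math import factorial, log

def stehfest_coefficients(N):
    """Stehfest weights V_k for even N (N = 12..16 is typical in double precision)."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)
                  / (factorial(half - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (half + k) * s)
    return V

def gaver_stehfest(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s), sampled on the real axis."""
    V = stehfest_coefficients(N)
    ln2_t = log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Example: F(s) = 1/(s + 1) is the transform of f(t) = exp(-t).
approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Because the weights V_k alternate in sign and grow with N, increasing N beyond the available word length amplifies round-off, which is exactly the limitation discussed in the abstract.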
New Type Continuities via Abel Convergence
Albayrak, Mehmet
2014-01-01
We investigate the concept of Abel continuity. A function f defined on a subset of ℝ, the set of real numbers, is Abel continuous if it preserves Abel convergent sequences. Some other types of continuity are also studied and an interesting result is obtained. It turns out that the uniform limit of a sequence of Abel continuous functions is Abel continuous, and that the set of Abel continuous functions is a closed subset of the set of continuous functions. PMID:24883393
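Abel convergence of a sequence (p_k) means that the Abel means (1 - x) Σ p_k x^k have a limit as x → 1⁻. A small numerical illustration (an assumption-free textbook example, not taken from the paper): the divergent sequence ((-1)^k) is Abel convergent to 0, since its Abel means equal (1 - x)/(1 + x) up to truncation.

```python
def abel_mean(seq, x):
    """(1 - x) * sum_k seq[k] * x**k, the (truncated) Abel mean at 0 < x < 1."""
    return (1 - x) * sum(p * x ** k for k, p in enumerate(seq))

# The sequence ((-1)^k) diverges, but its Abel means tend to 0 as x -> 1^-:
# (1 - x) * sum_k (-x)^k = (1 - x) / (1 + x) -> 0.
seq = [(-1) ** k for k in range(200_000)]
values = [abel_mean(seq, x) for x in (0.9, 0.99, 0.999)]
print(values)  # roughly [0.0526, 0.0050, 0.0005]
```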
NASA Astrophysics Data System (ADS)
Wang, Qian; Li, Xingwen; Song, Haoyong; Rong, Mingzhe
2010-04-01
Non-contact magnetic measurement is an effective way to study air arc behavior experimentally. One of the crucial techniques is to solve an inverse problem for the electromagnetic field. This study presents a preliminary investigation of different algorithms for this kind of inverse problem, including the preconditioned conjugate gradient method, the penalty function method and the genetic algorithm. The feasibility of each algorithm is analyzed. It is shown that the preconditioned conjugate gradient method is valid only for few arc segments, that the estimation accuracy of the penalty function method is dependent on the initial conditions, and that the convergence of the genetic algorithm should be studied further for more segments in an arc current.
A repeatable inverse kinematics algorithm with linear invariant subspaces for mobile manipulators.
Tchoń, Krzysztof; Jakubiak, Janusz
2005-10-01
On the basis of a geometric characterization of repeatability we present a repeatable extended Jacobian inverse kinematics algorithm for mobile manipulators. The algorithm's dynamics have linear invariant subspaces in the configuration space. A standard Ritz approximation of platform controls results in a band-limited version of this algorithm. Computer simulations involving an RTR manipulator mounted on a kinematic car-type mobile platform are used in order to illustrate repeatability and performance of the algorithm.
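The paper's repeatable extended Jacobian algorithm for mobile manipulators is not reproduced here, but the underlying Jacobian-inverse iteration can be sketched for a fixed-base planar 2R arm using damped least squares. The link lengths, damping factor, and target below are illustrative assumptions.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths of a planar 2R arm (illustrative values)

def forward(q):
    """End-effector position for joint angles q = (q1, q2)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic manipulator Jacobian d(position)/d(q)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_damped(target, q0, damping=0.1, steps=200):
    """Damped least-squares (Levenberg-Marquardt style) inverse kinematics."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        e = target - forward(q)              # task-space error
        J = jacobian(q)
        # dq = J^T (J J^T + damping^2 I)^{-1} e, without explicit matrix inversion
        dq = J.T @ np.linalg.solve(J @ J.T + damping ** 2 * np.eye(2), e)
        q += dq
    return q

q = ik_damped(target=np.array([1.2, 0.9]), q0=[0.3, 0.3])
```

The damping term keeps the update bounded near kinematic singularities, at the price of slightly slower convergence away from them.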
Advancing x-ray scattering metrology using inverse genetic algorithms
NASA Astrophysics Data System (ADS)
Hannon, Adam F.; Sunday, Daniel F.; Windover, Donald; Joseph Kline, R.
2016-07-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real-space structure in periodic gratings measured using critical dimension small-angle x-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real-space structure of our nanogratings. The study shows that for x-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
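As an illustration of the evolutionary side of this comparison, a minimal DE/rand/1/bin optimizer can be sketched; this is not the authors' implementation, and a shifted sphere function stands in for the scattering misfit objective.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=30, F=0.7, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([obj(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three random members (rand/1 strategy).
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one mutant coordinate.
            cross = rng.uniform(size=dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it improves.
            f = obj(trial)
            if f < cost[i]:
                pop[i], cost[i] = trial, f
    best = np.argmin(cost)
    return pop[best], cost[best]

# Toy stand-in for a scattering misfit: a shifted sphere function.
x_best, f_best = differential_evolution(
    lambda x: np.sum((x - 0.5) ** 2), bounds=[(-2, 2)] * 3)
```

In the paper's setting, the objective would instead compare simulated and measured diffraction intensities under a chosen goodness-of-fit criterion.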
Goldman, S.P.; Chen, J.Z.; Battista, J.J.
2005-09-15
A fast optimization algorithm is very important for inverse planning of intensity modulated radiation therapy (IMRT), and for adaptive radiotherapy of the future. Conventional numerical search algorithms such as the conjugate gradient search, with positive beam weight constraints, generally require numerous iterations and may produce suboptimal dose results due to trapping in local minima. A direct solution of the inverse problem using conventional quadratic objective functions without positive beam constraints is more efficient but will result in unrealistic negative beam weights. We present here a direct solution of the inverse problem that does not yield unphysical negative beam weights. The objective function for the optimization of a large number of beamlets is reformulated such that the optimization problem is reduced to a linear set of equations. The optimal set of intensities is found through a matrix inversion, and negative beamlet intensities are avoided without the need for externally imposed ad-hoc constraints. The method has been demonstrated with a test phantom and a few clinical radiotherapy cases, using primary dose calculations. We achieve highly conformal primary dose distributions with very rapid optimization times. Typical optimization times for a single anatomical slice (two dimensional) (head and neck) using a LAPACK matrix inversion routine in a single processor desktop computer, are: 0.03 s for 500 beamlets; 0.28 s for 1000 beamlets; 3.1 s for 2000 beamlets; and 12 s for 3000 beamlets. Clinical implementation will require the additional time of a one-time precomputation of scattered radiation for all beamlets, but will not impact the optimization speed. In conclusion, the new method provides a fast and robust technique to find a global minimum that yields excellent results for the inverse planning of IMRT.
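The core reduction of a quadratic objective to one linear solve can be sketched as follows. Note this minimal version uses plain Tikhonov-regularized least squares on a synthetic dose matrix; it does not reproduce the paper's specific reformulation that avoids negative beam weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dose matrix A: A[i, j] = dose to voxel i per unit intensity of beamlet j
# (synthetic values; a real matrix would come from a dose calculation engine).
n_voxels, n_beamlets = 200, 50
A = rng.uniform(0.0, 1.0, (n_voxels, n_beamlets))

# Prescribed dose d, generated here from a known non-negative intensity vector.
w_true = rng.uniform(0.0, 2.0, n_beamlets)
d = A @ w_true

# Quadratic objective ||A w - d||^2 + lam ||w||^2 reduces to the linear system
# (A^T A + lam I) w = A^T d, solved by a single factorization, with no
# iterative search and hence no risk of stopping in a local minimum.
lam = 1e-8
w = np.linalg.solve(A.T @ A + lam * np.eye(n_beamlets), A.T @ d)
residual = np.linalg.norm(A @ w - d)
```

The cubic cost of the factorization in the number of beamlets is consistent with the rapid growth of the timings quoted above (0.03 s for 500 beamlets to 12 s for 3000).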
Lü, Li-hui; Liu, Wen-qing; Zhang, Tian-shu; Lu, Yi-huai; Dong, Yun-sheng; Chen, Zhen-yi; Fan, Guang-qiang; Qi, Shao-shuai
2015-07-01
Atmospheric aerosols have important impacts on human health, the environment and the climate system. Micro Pulse Lidar (MPL) is a new, effective tool for detecting the horizontal distribution of atmospheric aerosol, and extinction coefficient inversion and error analysis are important aspects of its data processing. In order to detect the horizontal distribution of atmospheric aerosol near the ground, the slope and Fernald algorithms were both used to invert horizontal MPL data and the results were compared. The error analysis showed that the errors of the slope and Fernald algorithms stem mainly from the theoretical model and from certain assumptions, respectively. Although some problems still exist in these two horizontal extinction coefficient inversions, both can present the spatial and temporal distribution of aerosol particles accurately, and both correlate highly (95%) with a forward-scattering visibility sensor. Relatively speaking, the Fernald algorithm is more suitable for the inversion of the horizontal extinction coefficient.
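For a homogeneous horizontal path, the slope method is simple enough to sketch (the Fernald algorithm, which handles inhomogeneous atmospheres, is not reproduced here). All constants below are illustrative assumptions.

```python
import numpy as np

# Synthetic horizontal lidar return from a homogeneous atmosphere:
# P(r) = C * beta * exp(-2 * alpha * r) / r^2  (single-scattering lidar equation)
alpha_true = 0.15                  # extinction coefficient, 1/km (illustrative)
r = np.linspace(0.2, 5.0, 100)     # range gates, km
P = 1e6 * 0.05 * np.exp(-2 * alpha_true * r) / r ** 2

# Slope method: ln(P r^2) = const - 2 * alpha * r, so a linear fit of the
# range-corrected log signal yields the extinction coefficient from its slope.
S = np.log(P * r ** 2)
slope, intercept = np.polyfit(r, S, 1)
alpha_est = -slope / 2.0
```

With real, noisy data the fit would be restricted to range intervals where the homogeneity assumption holds, which is the main source of the model error mentioned above.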
Parametric inversion of viscoelastic media from VSP data using a genetic algorithm
NASA Astrophysics Data System (ADS)
Bin, Hu; Gang, Tang; Jianwei, Ma; Huizhu, Yang
2007-09-01
Viscoelastic parameters are becoming more important and their inversion algorithms are studied by many researchers. Genetic algorithms are random, self-adaptive, robust, and heuristic, with global search and convergence abilities. Based on the direct VSP wave equation, a genetic algorithm (GA) is introduced to determine the viscoelastic parameters. First, the direct wave equation in the frequency domain is expressed as a function of complex velocity, and the complex velocities are then estimated by GA inversion. Since the phase velocity and Q-factor are both functions of complex velocity, their values can be computed easily. However, there are so many complex velocities that it is difficult to invert them directly. They can be rewritten as functions of c_0 and c_∞ to reduce the number of parameters during the inversion process. Finally, a theoretical model experiment proves that our algorithm is exact and effective.
Application of Large-Scale Inversion Algorithms to Hydraulic Tomography in an Alluvial Aquifer.
Fischer, P; Jardani, A; Soueid Ahmed, A; Abbas, M; Wang, X; Jourde, H; Lecoq, N
2017-03-01
Large-scale inversion methods have recently been developed and now permit considerable reductions in the computation time and memory needed for inversions of models with a large number of parameters and data. In this work, we have applied a deterministic geostatistical inversion algorithm to a hydraulic tomography investigation conducted at an experimental field site within an alluvial aquifer in Southern France. This application aims to achieve large-scale 2-D modeling of the spatial transmissivity distribution of the site. The inversion algorithm uses a quasi-Newton iterative process based on a Bayesian approach. We compared the results obtained using three different methodologies for sensitivity analysis: an adjoint-state method, a finite-difference method, and the principal component geostatistical approach (PCGA). The PCGA is a method adapted to large-scale problems, developed for inversions with a large number of parameters by using an approximation of the covariance matrix and by avoiding the calculation of the full Jacobian sensitivity matrix. We reconstructed high-resolution transmissivity fields (composed of up to 25,600 cells) which generated good correlations between the measured and computed hydraulic heads. In particular, we show that, by combining the PCGA inversion method and the hydraulic tomography method, we are able to substantially reduce the computation time of the inversions while still producing inversion results as good as those obtained from the other sensitivity analysis methodologies.
Chen, Shanshan; Wang, Hongzhi; Yang, Peiqiang; Zhang, Xuelong
2014-06-01
The properties of samples are difficult to infer directly from the signal collected by a low field nuclear magnetic resonance (NMR) analyzer. One must obtain the relationship between the relaxation time and the original signal amplitude of every relaxation component by an inversion algorithm. Consequently, the technique of T2 spectrum inversion is crucial to the application of NMR data. This study optimized the regularization factor selection method and presents a regularization algorithm for the inversion of the low field NMR relaxation distribution, based on the regularization theory of ill-posed inverse problems. The results of numerical simulation experiments in Matlab 7.0 showed that this method can effectively analyze and process NMR relaxation data.
Data inversion algorithm development for the Halogen Occultation Experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurement requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time-constants and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be extremely complex and fast. A scheme for meeting these requirements was defined and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving inverse modeling problems can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
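A basic Levenberg-Marquardt iteration, without the paper's Krylov-subspace projection and recycling, can be sketched on a small curve-fitting problem; the model and all values are illustrative.

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, lam=1e-3, iters=50):
    """Basic LM: solve (J^T J + lam I) dp = J^T r at each step, adapting lam."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jac(p)
        # This dense solve is the step the paper accelerates via Krylov projection.
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        if np.sum(residual(p + dp) ** 2) < np.sum(r ** 2):
            p, lam = p + dp, lam * 0.5    # accept step, trust the model more
        else:
            lam *= 2.0                    # reject step, increase damping
    return p

# Fit y = a * exp(-b * t) to noise-free synthetic data (true a = 2, b = 1.3).
t = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-1.3 * t)
model = lambda p: p[0] * np.exp(-p[1] * t)
residual = lambda p: y - model(p)
jac = lambda p: np.column_stack([np.exp(-p[1] * t),               # d(model)/da
                                 -p[0] * t * np.exp(-p[1] * t)])  # d(model)/db
p = levenberg_marquardt(residual, jac, p0=[1.0, 1.0])
```

Note that the linear solve is repeated for every damping parameter lam; caching and reusing the underlying subspace across those solves is the paper's key saving.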
An inverse source location algorithm for radiation portal monitor applications
Miller, Karen A; Charlton, William S
2010-01-01
Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
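The search loop described above (least-squares objective, steepest descent, gradient from an adjoint solve) can be sketched with a toy forward model. Here a hypothetical 1/(4*pi*r^2) point-source response and its analytic gradient stand in for the 3-D transport and adjoint solves; the detector layout and source strength are invented for illustration.

```python
import numpy as np

# Toy stand-ins for the paper's transport model: a 1/(4*pi*r^2) point-source
# response with a hypothetical 2-D detector layout and known source strength.
detectors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
S = 1000.0                                    # assumed source strength

def predicted(pos):
    r2 = np.sum((detectors - pos) ** 2, axis=1)
    return S / (4 * np.pi * r2)

true_pos = np.array([6.0, 4.0])
measured = predicted(true_pos)                # noiseless synthetic measurements

def objective(pos):                           # least-squares misfit
    d = predicted(pos) - measured
    return 0.5 * d @ d

def gradient(pos):                            # analytic gradient of the misfit
    diff = pos - detectors
    r2 = np.sum(diff ** 2, axis=1)
    resid = S / (4 * np.pi * r2) - measured
    return -np.sum((resid * S / (2 * np.pi * r2 ** 2))[:, None] * diff, axis=0)

pos = np.array([5.0, 5.0])                    # initial source-position estimate
for _ in range(2000):                         # steepest descent with backtracking
    g = gradient(pos)
    step = 10.0
    while objective(pos - step * g) > objective(pos) and step > 1e-12:
        step *= 0.5
    pos = pos - step * g
    if objective(pos) < 1e-18:                # convergence criterion
        break
```

With noiseless data the iterate converges to the true source position; the paper replaces the toy response and analytic gradient with deterministic transport and adjoint transport calculations.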
Mixed-radix Algorithm for the Computation of Forward and Inverse MDCT
Wu, Jiasong; Shu, Huazhong; Senhadji, Lotfi; Luo, Limin
2008-01-01
The modified discrete cosine transform (MDCT) and inverse MDCT (IMDCT) are two of the most computationally intensive operations in the MPEG audio coding standards. A new mixed-radix algorithm for efficiently computing the MDCT/IMDCT is presented. The proposed mixed-radix MDCT algorithm is composed of two recursive algorithms. The first, the radix-2 decimation-in-frequency (DIF) algorithm, is obtained by decomposing an N-point MDCT into two MDCTs of length N/2. The second, the radix-3 decimation-in-time (DIT) algorithm, is obtained by decomposing an N-point MDCT into three MDCTs of length N/3. Since the proposed MDCT algorithm is also expressed in the form of a simple sparse matrix factorization, the corresponding IMDCT algorithm can be derived by simply transposing the factorization. Comparison with existing algorithms shows that the proposed algorithm is more suitable for parallel implementation, and especially so for layer III of MPEG-1 and MPEG-2 audio encoding and decoding. Moreover, the proposed algorithm can be easily extended to the multidimensional case by using the vector-radix method. PMID:21258639
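A direct O(N^2) matrix form of the MDCT makes two points from the abstract concrete: the IMDCT follows by transposing (and scaling) the transform matrix, and 50%-overlapped frames reconstruct exactly through time-domain aliasing cancellation. This sketch omits the fast radix-2/radix-3 decompositions; the block length and test signal are arbitrary.

```python
import numpy as np

N = 8                                         # number of MDCT coefficients

def mdct_matrix(N):
    # X[k] = sum_n x[n] cos((pi/N)(n + 1/2 + N/2)(k + 1/2)), 2N inputs -> N outputs
    n = np.arange(2 * N)[None, :]
    k = np.arange(N)[:, None]
    return np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))   # shape (N, 2N)

M = mdct_matrix(N)
# Row orthogonality: M @ M.T == N * I, so the IMDCT is the scaled transpose
# (2/N) * M.T, mirroring the transposed-factorization argument in the abstract.
w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))   # Princen-Bradley sine window

rng = np.random.default_rng(0)
x = rng.standard_normal(4 * N)
y = np.zeros(4 * N)
for t in range(3):                            # 50%-overlapped frames
    frame = x[t * N : t * N + 2 * N]
    X = M @ (w * frame)                       # forward MDCT of the windowed frame
    y[t * N : t * N + 2 * N] += w * ((2 / N) * (M.T @ X))   # windowed IMDCT + overlap-add
# Aliasing cancels wherever two frames overlap: y equals x on the middle 2N samples.
```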
NASA Astrophysics Data System (ADS)
Sun, Jiajia; Li, Yaoguo
2017-02-01
Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data because there often exist more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic and one field data examples and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and the parameter domain of physical properties.
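A minimal fuzzy c-means sketch shows the clustering component whose distance measure the authors generalize. The Euclidean point-to-center distance below is the baseline (point-cluster) case; line- or curve-shaped petrophysical relationships would swap in a point-to-line or point-to-curve distance. The two-cluster crossplot data are synthetic and invented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "crossplot": two point clusters of physical-property pairs
X = np.vstack([rng.normal([1.0, 2.0], 0.1, (100, 2)),
               rng.normal([3.0, 0.5], 0.1, (100, 2))])

def fcm(X, c=2, m=2.0, iters=100, dist=None):
    """Fuzzy c-means with a pluggable distance measure `dist`."""
    if dist is None:                          # Euclidean point-to-center distance
        dist = lambda X, V: np.linalg.norm(X[:, None, :] - V[None], axis=2)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)         # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]          # membership-weighted centers
        D = np.maximum(dist(X, V), 1e-12)
        U = D ** (-2 / (m - 1))                          # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, V

U, V = fcm(X)
centers = V[np.argsort(V[:, 0])]              # sort for a deterministic order
```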
SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2014-01-01
Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. The effect of benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al. [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).
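The bottom-reflectance effect SWIM corrects for can be illustrated with a heavily simplified, nadir-viewing form of the Lee et al. semi-analytical shallow-water model (angular path-length factors omitted; all parameter values invented, not the operational L2GEN configuration):

```python
import numpy as np

def rrs_shallow(rrs_deep, kappa, H, rho_bottom):
    """Subsurface remote-sensing reflectance over a reflective seafloor.
    rrs_deep: optically deep reflectance; kappa: attenuation coefficient (1/m);
    H: bottom depth (m); rho_bottom: benthic albedo."""
    path = np.exp(-2.0 * kappa * H)           # round-trip attenuation to the bottom
    return rrs_deep * (1.0 - path) + (rho_bottom / np.pi) * path

# A bright bottom in clear, shallow water raises the signal above the optically
# deep value; a deep-water algorithm attributes the excess to the water column
# and overestimates the IOPs, which is the bias SWIM removes.
deep = rrs_shallow(0.005, 0.1, 1e6, 0.3)      # effectively bottomless
shallow = rrs_shallow(0.005, 0.1, 3.0, 0.3)   # 3 m of water over bright sand
```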
ERIC Educational Resources Information Center
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Subsurface sensing with acoustic and electromagnetic waves using a nonlinear inversion algorithm
NASA Astrophysics Data System (ADS)
Abubakar, Aria; van den Berg, Peter M.; Budko, Neil V.; Fokkema, Jacob T.
2001-11-01
In this paper the nonlinear iterative algorithm, the so-called Extended Contrast Source Inversion, is applied to the subsurface sensing problem, where the number of measured data is very limited and the unknown objects/layers are illuminated from one side only. Numerical results obtained from synthetic and real data are presented to illustrate the strengths and weaknesses of the method.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
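The recycling idea can be sketched as follows: one Golub-Kahan bidiagonalization of the Jacobian, started from the residual, is reused for every damping parameter, so each additional lambda costs only a small k-by-k solve. The Jacobian and residual below are random stand-ins, and k is taken equal to the full parameter dimension so the recycled steps can be checked against direct solves; the authors' method is matrix-free with k much smaller than the problem size.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((200, 50))            # stand-in Jacobian
r = rng.standard_normal(200)                  # stand-in residual

def golub_kahan(J, r, k):
    """k steps of Golub-Kahan bidiagonalization started from r: J V = U B."""
    mm, n = J.shape
    U = np.zeros((mm, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(r); U[:, 0] = r / beta
    for j in range(k):
        w = J.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        w -= V[:, :j] @ (V[:, :j].T @ w)      # full reorthogonalization
        B[j, j] = np.linalg.norm(w); V[:, j] = w / B[j, j]
        u = J @ V[:, j] - B[j, j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
    return B, V, beta

k = 50                                        # full subspace: steps are exact here
B, V, beta = golub_kahan(J, r, k)
steps = {}
for lam in (1e-2, 1e-1, 1.0):                 # recycle B, V for every damping parameter
    # Projected damped normal equations: (B^T B + lam I) y = -beta B^T e1
    y = np.linalg.solve(B.T @ B + lam * np.eye(k), beta * B[0, :])
    steps[lam] = -V @ y                       # LM step solving (J^T J + lam I) d = -J^T r
```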
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it requires neither computing gradients of models nor "good" initial models. The multi-point search of a genetic algorithm makes it easier to find a globally optimal solution while avoiding entrapment in a local extremum. As with other optimization approaches, search efficiency is vital for a genetic algorithm to find desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect the search process during evolution: improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform well under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding; conversely, the mutation scheme in a decimal encoding system creates new genes of larger scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with higher probability through its mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant
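A toy sketch of the hybrid-encoding idea: crossover recombines genes in a binary code (fine-grained mixing), while mutation perturbs the decimal (real) value (larger-scope moves). The objective function and GA settings below are invented for illustration, not the paper's geophysical misfit or its dynamic-population scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
BITS, LO, HI = 16, -5.0, 5.0                  # 16-bit binary gene over [-5, 5]

def encode(x):                                # real value -> binary gene
    g = int((x - LO) / (HI - LO) * (2 ** BITS - 1))
    return [(g >> i) & 1 for i in range(BITS)]

def decode(bits):                             # binary gene -> real value
    g = sum(b << i for i, b in enumerate(bits))
    return LO + g / (2 ** BITS - 1) * (HI - LO)

def fitness(x):                               # multimodal test function, minimum at 0
    return x ** 2 + 2.0 * (1 - np.cos(3 * x))

pop = rng.uniform(LO, HI, 40)
for _ in range(100):
    i = rng.integers(0, len(pop), (40, 2))    # binary tournament selection
    parents = np.where(fitness(pop[i[:, 0]]) < fitness(pop[i[:, 1]]),
                       pop[i[:, 0]], pop[i[:, 1]])
    children = []
    for a, b in zip(parents[::2], parents[1::2]):
        ga, gb = encode(a), encode(b)
        cut = int(rng.integers(1, BITS))      # one-point crossover in the binary code
        for bits in (ga[:cut] + gb[cut:], gb[:cut] + ga[cut:]):
            x = decode(bits)
            if rng.random() < 0.2:            # mutation in the decimal (real) code
                x = float(np.clip(x + rng.normal(0.0, 0.5), LO, HI))
            children.append(x)
    pop = np.array(children)
best = pop[np.argmin(fitness(pop))]
```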
Advanced model of eddy-current NDE inverse problem with sparse grid algorithm
NASA Astrophysics Data System (ADS)
Zhou, Liming; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Bernacchi, William
2017-02-01
In model-based inverse problems, some unknown parameters must be estimated. These parameters characterize not only the physical properties of cracks but also the position of the probes (such as lift-off and angles) in the calibration. After the effect of probe position is considered in the inverse problem, the accuracy of the inverse result is improved. As the number of parameters in the inverse problem increases, the computational burden of the traditional full-grid method grows exponentially. We therefore use the sparse grid algorithm introduced by Sergey A. Smolyak. With this algorithm, we obtain a powerful interpolation method that requires significantly fewer support nodes than conventional interpolation on a full grid. In this work, we combined the sparse grid toolbox TASMANIAN, produced by Oak Ridge National Laboratory, with the professional eddy-current NDE software VIC-3D® to solve a specific inverse problem. An advanced model based on our previous one is used to estimate the length and depth of the crack, the lift-off, and two angles describing the position of the probes. To account for the calibration process, pseudorandom noise is included in the model and its statistical behavior is discussed.
NASA Astrophysics Data System (ADS)
Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; L, X.; Zhang, Q. S.
2012-12-01
Traditional inversion methods are the most commonly used procedures for three-dimensional (3D) resistivity inversion; they usually linearize the problem and solve it by iteration. However, their accuracy often depends on the initial model, which can trap the inversion in local optima and even produce poor results. Non-linear methods are a feasible way to eliminate this dependence on the initial model. However, for large problems such as 3D resistivity inversion, with more than a thousand inversion parameters, the main challenges for non-linear methods are premature convergence and low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, smoothness and inequality constraints are both applied to the objective function, which decreases the degree of non-uniqueness and ill-conditioning. Several measures from the literature are adopted to maintain the diversity and stability of the GA, e.g., a real-coded representation and adaptive adjustment of the crossover and mutation probabilities. A method for generating an approximately uniform initial population is then proposed, with which a uniformly distributed initial generation can be produced and the dependence on the initial model eliminated. Further, a mutation-direction control method is presented based on a joint algorithm in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment, maintaining a better search direction than the traditional GA with uncontrolled mutation. By this method, the mutation direction is optimized and the search efficiency is greatly improved. The performance of the improved GA is evaluated by comparison with traditional inversion results in a synthetic example and with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate
NASA Astrophysics Data System (ADS)
Egbert, Gary D.
2012-07-01
We describe novel hybrid algorithms for inversion of electromagnetic geophysical data, combining the computational and storage efficiency of a conjugate gradient approach with an Occam scheme for regularization and step-length control. The basic algorithm is based on the observation that iterative solution of the symmetric (Gauss-Newton) normal equations with conjugate gradients effectively generates a sequence of sensitivities for different linear combinations of the data, allowing construction of the Jacobian for a projection of the original full data space. The Occam scheme can then be applied to this projected problem, with the tradeoff parameter chosen by assessing fit to the full data set. For EM geophysical problems with multiple transmitters (either multiple frequencies or source geometries) an extension of the basic hybrid algorithm is possible. In this case multiple forward and adjoint solutions (one each for each transmitter) are required for each step in the iterative normal equation solver, and each corresponds to the sensitivity for a separate linear combination of data. From the perspective of the hybrid approach, with conjugate gradients generating an approximation to the full Jacobian, it is advantageous to save all of the component sensitivities, and use these to solve the projected problem in a larger subspace. We illustrate the algorithms on a simple problem, 2-D magnetotelluric inversion, using synthetic data. Both the basic and modified hybrid schemes produce essentially the same result as an Occam inversion based on a full calculation of the Jacobian, and the modified scheme requires significantly fewer steps (relative to the basic hybrid scheme) to converge to an adequate solution to the normal equations. The algorithms are expected to be useful primarily for 3-D inverse problems for which the computational burden is heavily dominated by solution to the forward and adjoint problems.
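The building block of the hybrid scheme, conjugate gradients applied to the damped (Gauss-Newton) normal equations using only forward (J v) and adjoint (J^T u) products, can be sketched as follows. The explicit random Jacobian exists only to fabricate a checkable problem, the fixed tradeoff parameter stands in for the Occam step-length control, and the stored search directions stand in loosely for the accumulated sensitivities the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.standard_normal((120, 40))            # stand-in Jacobian (matrix-free in practice)
d = rng.standard_normal(120)                  # stand-in data
lam = 0.1                                     # Occam-style tradeoff parameter (fixed here)

def normal_op(v):
    # One forward (J @ v) and one adjoint (J.T @ u) product per application,
    # the per-iteration cost of the iterative normal-equation solver.
    return J.T @ (J @ v) + lam * v

m = np.zeros(40)
r = J.T @ d - normal_op(m)                    # initial residual of the normal equations
p = r.copy()
basis = []                                    # stored directions spanning the explored subspace
for _ in range(40):
    Ap = normal_op(p)
    alpha = (r @ r) / (p @ Ap)
    m = m + alpha * p
    basis.append(p / np.linalg.norm(p))
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        r = r_new
        break
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
```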
NASA Astrophysics Data System (ADS)
Gharsalli, Leila; Mohammad-Djafari, Ali; Fraysse, Aurélia; Rodet, Thomas
2013-08-01
Our aim is to solve a linear inverse problem using various methods based on the Variational Bayesian Approximation (VBA). We choose to take sparsity into account via a scale mixture prior, more precisely a student-t model. The joint posterior of the unknown and hidden variable of the mixtures is approximated via the VBA. To do this approximation, classically the alternate algorithm is used. But this method is not the most efficient. Recently other optimization algorithms have been proposed; indeed classical iterative algorithms of optimization such as the steepest descent method and the conjugate gradient have been studied in the space of the probability densities involved in the Bayesian methodology to treat this problem. The main object of this work is to present these three algorithms and a numerical comparison of their performances.
3D Motion Planning Algorithms for Steerable Needles Using Inverse Kinematics
Duindam, Vincent; Xu, Jijie; Alterovitz, Ron; Sastry, Shankar; Goldberg, Ken
2010-01-01
Steerable needles can be used in medical applications to reach targets behind sensitive or impenetrable areas. The kinematics of a steerable needle are nonholonomic and, in 2D, equivalent to a Dubins car with constant radius of curvature. In 3D, the needle can be interpreted as an airplane with constant speed and pitch rate, zero yaw, and controllable roll angle. We present a constant-time motion planning algorithm for steerable needles based on explicit geometric inverse kinematics similar to the classic Paden-Kahan subproblems. Reachability and path competitivity are analyzed using analytic comparisons with shortest path solutions for the Dubins car (for 2D) and numerical simulations (for 3D). We also present an algorithm for local path adaptation using null-space results from redundant manipulator theory. Finally, we discuss several ways to use and extend the inverse kinematics solution to generate needle paths that avoid obstacles. PMID:21359051
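The 2D version of the kinematic model above (a Dubins car with constant speed and constant radius of curvature) can be sketched by direct Euler integration; all values are illustrative. Driving for one full period traces a circle back to the start.

```python
import numpy as np

# 2D Dubins-car model of the needle: fixed speed v, constant turning radius.
def integrate(x, y, theta, v, radius, dt, steps, direction=+1):
    for _ in range(steps):
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += direction * (v / radius) * dt   # constant curvature 1/radius
    return x, y, theta

v, radius, dt = 1.0, 2.0, 1e-4
steps = int(round(2 * np.pi * radius / v / dt))  # one full circumference
x, y, th = integrate(0.0, 0.0, 0.0, v, radius, dt, steps)
```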
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors, accounting for processor and communication costs, to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time, and a minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and performs module assignment with a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
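A stripped-down sketch of the flavor of the first heuristic: modules are ordered by level (longest-path cost to a sink) and greedily assigned to the processor that can finish them earliest. Communication costs and the weighted bipartite matching step are omitted, and the task graph is invented.

```python
# Tiny made-up task graph: module -> (cost, list of predecessors)
tasks = {
    "A": (2, []), "B": (3, []), "C": (2, ["A"]),
    "D": (4, ["A", "B"]), "E": (1, ["C", "D"]),
}

def level(t, memo={}):
    """Length of the longest cost path from t to a sink (the module's 'level')."""
    if t not in memo:
        succ = [s for s, (_, pred) in tasks.items() if t in pred]
        memo[t] = tasks[t][0] + max((level(s, memo) for s in succ), default=0)
    return memo[t]

order = sorted(tasks, key=level, reverse=True)   # priority list: highest level first
p = 2                                            # two identical processors
proc_free = [0.0] * p
finish = {}
for t in order:
    ready = max((finish[q] for q in tasks[t][1]), default=0.0)
    i = min(range(p), key=lambda j: max(proc_free[j], ready))  # earliest-finish processor
    start = max(proc_free[i], ready)
    finish[t] = start + tasks[t][0]
    proc_free[i] = finish[t]
makespan = max(finish.values())
```

For this graph the critical path B-D-E has cost 8, and the list schedule on two processors attains it.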
Bayesian inversion of marine CSEM data with a trans-dimensional self parametrizing algorithm
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Key, Kerry
2012-12-01
The posterior distribution of earth models that fit observed geophysical data conveys information on the uncertainty with which they are resolved. From another perspective, the non-uniqueness inherent in most geophysical inverse problems of interest can be quantified by examining the posterior model distribution converged upon by a Bayesian inversion. In this work we apply a reversible jump Markov chain Monte Carlo method to sample the posterior model distribution for the anisotropic 1-D seafloor conductivity constrained by marine controlled source electromagnetic data. Unlike conventional gradient-based inversion approaches, our algorithm does not require any subjective choice of regularization parameter, and it is self-parametrizing and trans-dimensional in that the number of interfaces with a resistivity contrast at depth is variable, as are their positions. A synthetic example demonstrates how the algorithm can be used to appraise the resolution capabilities of various electromagnetic field components for mapping a thin resistive reservoir buried beneath anisotropic conductive sediments. A second example applies the method to survey data collected over the Pluto gas field on the Northwest Australian shelf. A benefit of our Bayesian approach is that subsets of the posterior model probabilities can be selected to test various hypotheses about the model structure, without requiring further inversions. As examples, the subset of model probabilities can be viewed for models only containing a certain number of layers, or for models where resistive layers are present between a certain interval as suggested by other geological constraints such as seismic stratigraphy or nearby well logs.
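A minimal fixed-dimension Metropolis sampler illustrates how a posterior ensemble quantifies uncertainty for a single model parameter; the paper's reversible-jump sampler additionally proposes birth and death moves that change the number of layers. The data, noise level, and prior below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
true_log_rho = 1.0
data = true_log_rho + rng.normal(0, 0.1, 20)     # noisy log-resistivity "observations"

def log_post(m):                                  # Gaussian likelihood, flat prior on (-2, 4)
    if not -2 < m < 4:
        return -np.inf
    return -0.5 * np.sum((data - m) ** 2) / 0.1 ** 2

chain = []
m, lp = 0.0, log_post(0.0)
for _ in range(20000):                            # random-walk Metropolis
    prop = m + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # accept/reject
        m, lp = prop, lp_prop
    chain.append(m)
post = np.array(chain[5000:])                     # discard burn-in
```

The posterior mean tracks the data mean and the posterior spread quantifies the resolution of the parameter, the kind of appraisal the abstract performs layer by layer.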
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Yong; Tan, Han-Dong; Wang, Kun-Peng; Lin, Chang-Hong; Zhang, Bin; Xie, Mao-Bi
2016-03-01
Traditional two-dimensional (2D) complex resistivity forward modeling is based on Poisson's equation, but spectral induced polarization (SIP) data are the joint product of the induced polarization (IP) and electromagnetic induction (EMI) effects. This is especially true at high frequencies, where the EMI effect can exceed the IP effect. 2D inversion that considers only the IP effect therefore reduces the reliability of the inversion results. In this paper, we derive the governing differential equations from Maxwell's equations. With the introduction of the Cole-Cole model, we use the finite-element method to conduct 2D SIP forward modeling that considers the EMI and IP effects simultaneously. The data-space Occam method, in which different constraints on model smoothness and parameter boundaries are introduced, is then used to obtain the four parameters of the Cole-Cole model simultaneously from multi-array electric field data. This approach not only improves the stability of the inversion but also significantly reduces the solution ambiguity. To improve computational efficiency, Message Passing Interface (MPI) programming was used to accelerate the 2D SIP forward modeling and inversion. Synthetic datasets were tested using both serial and parallel algorithms, and the tests suggest that the proposed parallel algorithm is robust and efficient.
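The Cole-Cole model introduced into the forward problem has a compact closed form, with the four parameters the inversion recovers: zero-frequency resistivity rho0, chargeability m, time constant tau, and frequency exponent c. A sketch (parameter values invented):

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Cole-Cole complex resistivity:
    rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (1j*omega*tau)**c)))."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

# Low-frequency limit -> rho0; high-frequency limit -> rho0 * (1 - m).
rho0, m, tau, c = 100.0, 0.3, 0.01, 0.5
w = np.logspace(-2, 5, 8) * 2 * np.pi
rho = cole_cole(w, rho0, m, tau, c)            # complex-valued spectrum
```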
Estimates of the trace of the inverse of a symmetric matrix using the modified Chebyshev algorithm
NASA Astrophysics Data System (ADS)
Meurant, Gérard
2009-07-01
In this paper we study how to compute an estimate of the trace of the inverse of a symmetric matrix by using Gauss quadrature and the modified Chebyshev algorithm. As auxiliary polynomials we use the shifted Chebyshev polynomials. Since this can be too costly in computer storage for large matrices we also propose to compute the modified moments with a stochastic approach due to Hutchinson (Commun Stat Simul 18:1059-1076, 1989).
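The stochastic approach of Hutchinson can be sketched directly: for a symmetric matrix A, E[z^T A^{-1} z] = tr(A^{-1}) when the entries of z are i.i.d. +/-1 (Rademacher). The explicit linear solve below stands in for the Gauss-quadrature / modified-Chebyshev evaluation the paper develops; the test matrix is a random SPD stand-in.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T   # symmetric positive definite

true_trace_inv = np.trace(np.linalg.inv(A))        # reference value
samples = []
for _ in range(2000):
    z = rng.choice([-1.0, 1.0], size=n)            # Rademacher probe vector
    samples.append(z @ np.linalg.solve(A, z))      # z^T A^{-1} z, one quadratic form
estimate = np.mean(samples)                        # Hutchinson estimate of tr(A^{-1})
```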
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2015-01-01
A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(488) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.
NASA Astrophysics Data System (ADS)
McKinna, Lachlan I. W.; Fearns, Peter R. C.; Weeks, Scarla J.; Werdell, P. Jeremy; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2015-03-01
A hybrid algorithm for solving the EEG inverse problem from spatio-temporal EEG data.
Crevecoeur, Guillaume; Hallez, Hans; Van Hese, Peter; D'Asseler, Yves; Dupré, Luc; Van de Walle, Rik
2008-08-01
Epilepsy is a neurological disorder caused by intense electrical activity in the brain. This electrical activity, which can be modelled as a superposition of several electrical dipoles, can be determined non-invasively by analysing the electro-encephalogram. This source localization requires the solution of an inverse problem. Locally convergent optimization algorithms may become trapped in local solutions, while global optimization techniques can be computationally expensive, which makes fast recovery of the electrical sources difficult. There is therefore a need to solve the inverse problem in an accurate and fast way. This paper performs the localization of multiple dipoles using a global-local hybrid algorithm. Global convergence is guaranteed by using space mapping techniques and independent component analysis in a computationally efficient way. Local accuracy is obtained by using the Recursively Applied and Projected MUltiple SIgnal Classification (RAP-MUSIC) algorithm. With this hybrid algorithm, a solution is obtained four times faster.
NASA Astrophysics Data System (ADS)
Partheepan, G.; Sehgal, D. K.; Pandey, R. K.
2006-12-01
An inverse finite element algorithm is established to extract tensile constitutive properties such as Young's modulus, yield strength and the true stress-true strain diagram of a material in a virtually non-destructive manner. Standard test methods for measuring mechanical properties require the removal of large material samples from the in-service component, which is often impractical. To circumvent this, a new dumb-bell-shaped miniature specimen has been designed and fabricated for evaluating the properties of a material or component. Test fixtures were also developed to perform a tension test on the proposed miniature specimen in a testing machine. Studies were conducted on low carbon steel, die steel and medium carbon steel. The output of the miniature test, namely the load-elongation diagram, is fed to the proposed inverse finite element algorithm to determine the material properties. Inverse finite element modelling is carried out using a 2D plane-stress analysis. The predicted results are in good agreement with the experimental results.
LOTOS code for local earthquake tomographic inversion: benchmarks for testing tomographic algorithms
NASA Astrophysics Data System (ADS)
Koulakov, I. Yu.
2009-04-01
We present the LOTOS-07 code for local earthquake tomographic (LET) inversion, which is freely available at www.ivan-art.com/science/LOTOS_07. The input data for the code are arrival times from local seismicity and station coordinates; no prior information about the sources is required. The calculations start with absolute location of the sources and estimation of an optimal 1D velocity model. The sources are then relocated simultaneously with the 3D velocity distribution during iterative coupled tomographic inversions. The code allows results based on node and cell parameterizations to be compared, and both Vp-Vs and Vp - Vp/Vs inversion schemes can be performed. The capability of the LOTOS code is illustrated with different real and synthetic datasets. Some of the tests are used to challenge existing stereotypes of LET schemes, such as the use of trade-off curves to select damping parameters and of the GAP criterion to select events. We also present a series of synthetic datasets with undisclosed sources and velocity models (www.ivan-art.com/science/benchmark) that can be used as blind benchmarks for testing different tomographic algorithms. We encourage other users of tomography algorithms to join the program of creating benchmarks for checking existing codes. The program codes and testing datasets will be freely distributed during the poster presentation.
Orthogonal Coordinates and Hyperquantization Algorithm. The NH3 and H3O+ Umbrella Inversion Levels
NASA Astrophysics Data System (ADS)
Ragni, M.; Lombardi, A.; Pereira Barreto, P. R.; Peixoto Bitencourt, A. C.
2009-09-01
In order to describe the umbrella inversion mode, which is characteristic of AB3-type molecules, we have introduced an alternative hyperspherical coordinate set based on a parametrization of Radau-Smith orthogonal vectors and have considered constraints which allow us to enforce the C3v symmetry. Structural properties and electronic energies at equilibrium and barrier configurations have been obtained at MP2 and CCSD(T) levels of theory. Energy profiles have been calculated using the CCSD(T) method with an aug-cc-pVQZ basis set. The NH3 and H3O+ umbrella inversion levels are obtained by the hyperquantization algorithm for a one-dimensional calculation, using a specially defined hyperangle as the inversion coordinate. The results are compared with experimental and theoretical energy levels, in particular, with those obtained by calculations based on two-dimensional models. The emerging picture of the umbrella inversion based on this hyperangular coordinate compares favorably with respect to the usual valence-type description.
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inversion technique based on the artificial bee colony (ABC) algorithm for retrieving the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, the main constituents of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used distribution functions, the log-normal (L-N), Junge (J-J), and normal (N-N) distributions, which provide the most useful representations of aerosol size distributions, are then inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can successfully recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
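A minimal sketch of an ABC optimizer of the kind described; the colony size, abandonment limit, and the simple quadratic test objective are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=20, n_iter=200, rng=None):
    """Minimal artificial bee colony (ABC) minimizer: employed bees search
    near each food source, onlookers bias search toward good sources, and
    scouts re-seed sources that stopped improving."""
    rng = np.random.default_rng(rng)
    low, high = map(np.asarray, bounds)
    dim = low.size
    foods = rng.uniform(low, high, size=(n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    best_x, best_f = foods[fit.argmin()].copy(), fit.min()

    def neighbor(i):
        k = rng.integers(n_food)
        while k == i:
            k = rng.integers(n_food)
        j = rng.integers(dim)
        x = foods[i].copy()
        x[j] = np.clip(x[j] + rng.uniform(-1, 1) * (x[j] - foods[k][j]),
                       low[j], high[j])
        return x

    def try_move(i):
        x = neighbor(i)
        fx = f(x)
        if fx < fit[i]:
            foods[i], fit[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):            # employed bees
            try_move(i)
        prob = fit.max() - fit + 1e-12     # onlookers favor low misfit
        prob /= prob.sum()
        for i in rng.choice(n_food, size=n_food, p=prob):
            try_move(i)
        i_best = fit.argmin()              # remember global best
        if fit[i_best] < best_f:
            best_x, best_f = foods[i_best].copy(), fit[i_best]
        worn = trials > limit              # scouts abandon stale sources
        foods[worn] = rng.uniform(low, high, size=(worn.sum(), dim))
        fit[worn] = [f(x) for x in foods[worn]]
        trials[worn] = 0
    return best_x, best_f

# Toy objective standing in for the size-distribution misfit.
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = abc_minimize(sphere, (np.full(2, -5.0), np.full(2, 5.0)), rng=0)
```

In the inversion described above, `f` would be the misfit between measured and Mie-modeled multispectral extinction for candidate distribution parameters.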
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-08-01
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and of obtaining a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.
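The importance-sampling ingredient can be sketched in one dimension with a fixed (non-adaptive) Gaussian-mixture proposal and self-normalized weights; the bimodal target below is a toy stand-in, and no PC surrogate is used:

```python
import numpy as np

def gm_importance_mean(log_target, means, stds, weights, n=20000, rng=None):
    """Self-normalized importance-sampling estimate of the target mean,
    using a 1-D Gaussian-mixture proposal."""
    rng = np.random.default_rng(rng)
    comp = rng.choice(len(weights), size=n, p=weights)
    x = rng.normal(np.asarray(means)[comp], np.asarray(stds)[comp])
    # Log-density of the mixture proposal at the samples.
    d = (x[:, None] - np.asarray(means)) / np.asarray(stds)
    comp_pdf = np.exp(-0.5 * d**2) / (np.sqrt(2 * np.pi) * np.asarray(stds))
    log_q = np.log(comp_pdf @ np.asarray(weights))
    log_w = log_target(x) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return np.sum(w * x)

# Toy bimodal "posterior": equal mixture of N(-3, 0.5) and N(3, 0.5); mean 0.
def log_target(x):
    return np.logaddexp(-0.5 * ((x + 3) / 0.5) ** 2,
                        -0.5 * ((x - 3) / 0.5) ** 2)

m = gm_importance_mean(log_target, means=[-3.0, 3.0], stds=[1.0, 1.0],
                       weights=[0.5, 0.5], n=50000, rng=1)
```

The adaptive algorithm would additionally refit the mixture components from the weighted samples and replace `log_target` with a PC surrogate of the forward model.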
Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.
Chami, Malik; Robilliard, Denis
2002-10-20
A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordres Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic data set also takes into account the directional effects of particles through a variation of their phase function, which makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with the neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than that of traditional techniques such as band-ratio algorithms. As a validation, GP was also applied to real satellite data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) for case I waters. Good agreement was obtained when GP results were compared with the SeaWiFS empirical algorithm. For case II waters the retrieval error of GP is less than 33%, which remains satisfactory, at the present time, for remote-sensing purposes.
NASA Astrophysics Data System (ADS)
Kitaura, F. S.; Enßlin, T. A.
2008-09-01
We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper aims to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest-performing algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift-distortion correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Taehoon; Park, Won-Kwang
2015-09-01
From various numerical simulation results, it is well known that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application is somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for imaging a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.
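A generic MUSIC imaging functional of the kind analyzed here can be sketched as follows; the 1-D full-view setup, sensor layout, and steering vector are hypothetical illustrations, not the paper's limited-view crack configuration:

```python
import numpy as np

def music_image(K, steering, grid, n_targets):
    """MUSIC imaging functional 1/||P_noise g(z)||: the projection of the
    normalized steering vector onto the noise subspace of the multistatic
    response matrix K (nearly) vanishes at true scatterer locations."""
    U, _, _ = np.linalg.svd(K)
    Un = U[:, n_targets:]                      # noise subspace
    img = np.empty(len(grid))
    for i, z in enumerate(grid):
        g = steering(z)
        g = g / np.linalg.norm(g)
        img[i] = 1.0 / (np.linalg.norm(Un.conj().T @ g) + 1e-12)
    return img

# Hypothetical setup: a line of sensors and two point scatterers.
k_wave = 2 * np.pi                             # wavenumber (wavelength 1)
sensors = np.linspace(-5.0, 5.0, 21)
steering = lambda z: np.exp(1j * k_wave * np.abs(sensors - z))
targets = [1.3, -2.1]
G = np.stack([steering(z) for z in targets], axis=1)
K = G @ np.diag([1.0, 0.7]) @ G.T              # rank-2 multistatic matrix
grid = np.linspace(-4.0, 4.0, 161)
img = music_image(K, steering, grid, n_targets=2)
```

The paper's necessary condition concerns when this functional remains discriminating as the sensor aperture (and hence the span of available steering vectors) is restricted.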
The application of inverse Broyden's algorithm for modeling of crack growth in iron crystals.
Telichev, Igor; Vinogradov, Oleg
2011-07-01
In the present paper we demonstrate the use of inverse Broyden's algorithm (IBA) in the simulation of fracture in single iron crystals. The iron crystal structure is treated as a truss system, while the forces between the atoms situated at the nodes are defined by modified Morse inter-atomic potentials. The evolution of lattice structure is interpreted as a sequence of equilibrium states corresponding to the history of applied load/deformation, where each equilibrium state is found using an iterative procedure based on IBA. The results presented demonstrate the success of applying the IBA technique for modeling the mechanisms of elastic, plastic and fracture behavior of single iron crystals.
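The core of the method, a rank-one Broyden update applied directly to the inverse-Jacobian approximation so that no matrix is ever inverted or factorized, can be sketched on a toy nonlinear system; the cubic "truss" residual below is a hypothetical stand-in for the paper's atomistic model:

```python
import numpy as np

def broyden_inverse_solve(fun, x0, tol=1e-10, max_iter=200):
    """Solve fun(x) = 0 with Broyden's second ("inverse") method: maintain
    an approximation H to the inverse Jacobian and update it with a
    rank-one correction enforcing the secant condition H y = s."""
    x = np.asarray(x0, dtype=float)
    fx = fun(x)
    H = np.eye(x.size)                 # initial inverse-Jacobian guess
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = -H @ fx                    # quasi-Newton step
        x_new = x + s
        fx_new = fun(x_new)
        y = fx_new - fx
        H += np.outer(s - H @ y, y) / (y @ y)
        x, fx = x_new, fx_new
    return x

# Toy two-node "truss" with a cubic nonlinearity (hypothetical stand-in
# for the modified-Morse atomistic forces in the paper).
def residual(v):
    return np.array([v[0] + 0.1 * v[0]**3 + 0.5 * v[1] - 1.0,
                     0.5 * v[0] + v[1] + 0.1 * v[1]**3 - 1.0])

root = broyden_inverse_solve(residual, [0.0, 0.0])
```

In the simulation described above, each equilibrium state along the load history would be found by such an iteration, warm-started from the previous state.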
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
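The control-variate idea behind the second algorithm can be sketched with a hand-picked surrogate; both the QoI and its linear surrogate below are illustrative assumptions, not the paper's inverse-regression construction:

```python
import numpy as np

def control_variate_mc(f, g, g_mean, sampler, n=10000, rng=None):
    """Monte Carlo estimate of E[f(X)] using a cheap surrogate g as a
    control variate: E[f] ~ mean(f - c*(g - E[g])), with the coefficient
    c chosen to minimize the estimator variance."""
    rng = np.random.default_rng(rng)
    x = sampler(rng, n)
    fx, gx = f(x), g(x)
    c = np.cov(fx, gx)[0, 1] / np.var(gx)
    return np.mean(fx - c * (gx - g_mean))

# Hypothetical QoI and a crude "reduced model" surrogate.
f = lambda x: np.exp(0.1 * x) + 0.01 * np.sin(5 * x)
g = lambda x: 1.0 + 0.1 * x            # linearization; E[g(X)] = 1 for X~N(0,1)
cv_est = control_variate_mc(f, g, g_mean=1.0,
                            sampler=lambda rng, n: rng.normal(size=n),
                            n=20000, rng=0)
```

The better the reduced model tracks the QoI, the larger the variance reduction over plain MC, which mirrors the accuracy-gain statement in the abstract.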
Wang, X
1999-05-01
In this paper, the computational problem of inverse kinematics of arm prehension movements was investigated. We first examined how the motions of each joint involved in arm movements can be used to control the position and orientation of the end-effector (hand). It is shown that the inverse kinematics problem arising from kinematic redundancy in joint space is ill-posed only for the control of hand orientation, not for the control of hand position. Based upon this analysis, a previously proposed inverse kinematics algorithm (Wang and Verriest, 1998a) for predicting arm reach postures was extended to a seven-DOF arm model to predict arm prehension postures using separate control of hand position and orientation. The algorithm can operate either in rule-based form or through optimization, via an appropriate choice of weight coefficients. Compared to algebraic inverse kinematics algorithms, the proposed algorithm handles the non-linearity of joint limits in a straightforward way. In addition, no matrix inversion is needed, thus avoiding the stability and convergence problems that often occur near a singularity of the Jacobian. Since an end-effector motion-oriented method is used to describe joint movements, observed behaviors of arm movements can be easily implemented in the algorithm. The proposed algorithm provides a general frame for arm postural control and can be used as an efficient postural manipulation tool for computer-aided ergonomic evaluation.
Joint inversions of two VTEM surveys using quasi-3D TDEM and 3D magnetic inversion algorithms
NASA Astrophysics Data System (ADS)
Kaminski, Vlad; Di Massa, Domenico; Viezzoli, Andrea
2016-05-01
In this paper, we present the results of a joint quasi-three-dimensional (quasi-3D) inversion of two versatile time domain electromagnetic (VTEM) datasets, as well as a joint 3D inversion of the associated aeromagnetic datasets, from two surveys flown six years apart (2007 and 2013) over a volcanogenic massive sulphide gold (VMS-Au) prospect in northern Ontario, Canada. The time domain electromagnetic (TDEM) data were inverted jointly using the spatially constrained inversion (SCI) approach. To increase coherency in the model space, a calibration parameter was added. This was followed by a joint inversion of the total magnetic intensity (TMI) data extracted from the two surveys. The results of the inversions have been studied and matched with the known geology, adding valuable new information to the ongoing mineral exploration initiative.
An improved pulse sequence and inversion algorithm of T2 spectrum
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu
2017-03-01
The nuclear magnetic resonance transverse relaxation time is widely applied in geological prospecting, in both laboratory and downhole environments. However, current methods for data acquisition and inversion need to be adapted to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence, based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence, to collect transverse relaxation signals. The echo spacing is not constant but varies across windows, depending on prior knowledge or user requirements. We use an entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard the small singular values that cause inversion instability. A hybrid algorithm combining iterative TSVD with a simultaneous iterative reconstruction technique is implemented to achieve global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence yields the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and related fields.
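The TSVD step can be sketched with a plain (non-entropy-based) truncation on a toy multi-exponential kernel; the echo times, T2 grid, and truncation level below are illustrative, and the hybrid iterative refinement is omitted:

```python
import numpy as np

def tsvd_solve(K, y, k):
    """Truncated-SVD solution of K f ~ y: keep only the k largest singular
    values, which stabilizes an ill-posed multi-exponential inversion."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

# Toy CPMG-like forward model: echo decays are sums of exponentials.
t = np.linspace(0.001, 1.0, 200)            # echo times (s)
T2 = np.logspace(-3, 0, 50)                 # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])       # kernel matrix
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.0) / 0.2) ** 2)  # single T2 peak
y = K @ f_true                              # noiseless synthetic echoes
f_rec = tsvd_solve(K, y, k=15)
```

With noisy data the truncation level trades resolution against stability, which is what the entropy-based criterion in the paper is designed to choose.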
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Genetic algorithms-based inversion of multimode guided waves for cortical bone characterization
NASA Astrophysics Data System (ADS)
Bochud, N.; Vallet, Q.; Bala, Y.; Follet, H.; Minonzio, J.-G.; Laugier, P.
2016-10-01
Recent progress in quantitative ultrasound has exploited the multimode waveguide response of long bones. Measurements of the guided modes, along with suitable waveguide modeling, have the potential to infer strength-related factors such as stiffness (mainly determined by cortical porosity) and cortical thickness. However, the development of such model-based approaches is challenging, in particular because of the multiparametric nature of the inverse problem. Current estimation methods in the bone field rely on a number of assumptions for pairing the incomplete experimental data with the theoretical guided modes (e.g. semi-automatic selection and classification of the data), so an alternative, user-independent inversion scheme is highly desirable. This paper therefore introduces an efficient inversion method based on genetic algorithms using multimode guided waves, in which the mode order is kept blind. Prior to its evaluation on bone, our proposal is validated using laboratory-controlled measurements on isotropic plates and bone-mimicking phantoms. The results show that the model parameters (i.e. cortical thickness and porosity) estimated from measurements on a few ex vivo human radii are in good agreement with reference values derived from x-ray micro-computed tomography. Further, the cortical thickness estimated from in vivo measurements at the distal one-third site of the radius is in good agreement with values delivered by site-matched high-resolution peripheral x-ray computed tomography.
NASA Astrophysics Data System (ADS)
Müller, Detlef; Böckmann, Christine; Kolgotin, Alexei; Schneidenbach, Lars; Chemyakin, Eduard; Rosemann, Julia; Znak, Pavel; Romanov, Anton
2016-10-01
We present a summary on the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005-0.1 or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested
Inverse problems with Poisson data: statistical regularization theory, applications and algorithms
NASA Astrophysics Data System (ADS)
Hohage, Thorsten; Werner, Frank
2016-09-01
Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results, we establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As the most prominent applications we briefly introduce positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms that have been proposed for such problems over the last five years.
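For the Poisson likelihood, the classical EM iteration (Richardson-Lucy) is the simplest concrete instance of a maximum-likelihood estimator of the kind reviewed; the sketch below omits any penalty term, and the blur matrix and source are toy examples:

```python
import numpy as np

def richardson_lucy(A, y, n_iter=500):
    """EM / Richardson-Lucy iteration for the Poisson maximum-likelihood
    problem y ~ Poisson(A x) with x >= 0 (no regularization penalty here).
    The multiplicative update preserves non-negativity automatically."""
    x = np.ones(A.shape[1])
    a_colsum = A.sum(axis=0)
    for _ in range(n_iter):
        x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / a_colsum
    return x

# Toy 1-D Gaussian blur operator and a two-spike source.
n = 30
A_blur = np.array([[np.exp(-0.5 * (i - j) ** 2) for j in range(n)]
                   for i in range(n)])
x_true = np.zeros(n)
x_true[8], x_true[20] = 5.0, 3.0
y = A_blur @ x_true                 # noiseless "mean counts"
x_hat = richardson_lucy(A_blur, y)
```

A penalized estimator as discussed in the review would add a regularization term to the Poisson log-likelihood, typically solved with the proximal or primal-dual algorithms the authors survey.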
NASA Astrophysics Data System (ADS)
Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji
2015-12-01
Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditures for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified for several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. As shown by Pfeifer and Carraway (2000), the final formula for CLV obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main use of the obtained transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
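The forward CLV computation that makes this inverse problem nonlinear can be sketched as follows. This is a minimal illustration of the Pfeifer-and-Carraway-style formula (discounted expected margins summed over a Markov chain); the two-state transition matrix, margin vector and discount rate are invented for the example, not taken from the paper.

```python
import numpy as np

def clv(P, R, d, horizon, start=0):
    """Customer lifetime value from a Markov chain model (Pfeifer & Carraway
    style): discounted expected margins summed over periods 0..horizon,
    CLV = sum_t (1+d)^(-t) * [P^t R]_start."""
    P = np.asarray(P, dtype=float)
    R = np.asarray(R, dtype=float)
    value = 0.0
    Pt = np.eye(P.shape[0])                      # P^0
    for t in range(horizon + 1):
        value += (1.0 + d) ** (-t) * (Pt @ R)[start]
        Pt = Pt @ P
    return value

# Two-state example: state 0 = active customer, state 1 = lost (absorbing).
P = [[0.7, 0.3],     # retention probability 0.7, churn probability 0.3
     [0.0, 1.0]]
R = [100.0, 0.0]     # expected per-period margin in each state
print(round(clv(P, R, d=0.1, horizon=50), 2))    # -> 275.0
```

Because CLV depends on powers of P, recovering the transition probabilities from observed CLV values is a nonlinear inverse problem, which is what motivates a metaheuristic search here.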
An implementation of differential evolution algorithm for inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan
2013-11-01
Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, mutation, crossover and selection, similar to a genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to both noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective strategy for parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic SP cases were quite consistent with those of particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing (SA) without cooling, to clarify uncertainties in the solutions. Comparison to the M-H algorithm shows that DE performs a fast approximate posterior sampling for the case of low-dimensional inverse geophysical problems.
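As a minimal sketch of the DE machinery described above, the following implements the DE/best/1 mutation (strategy 1) with binomial crossover and greedy selection, without boundary constraints. The population size, control parameters F and CR, and the sphere objective (a stand-in for the geophysical misfit) are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_best_1(obj, dim=5, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Minimal DE with DE/best/1 mutation and binomial crossover;
    no boundary constraints are enforced during evolution."""
    pop = rng.uniform(-5, 5, (pop_size, dim))
    cost = np.array([obj(x) for x in pop])
    for _ in range(iters):
        best = pop[np.argmin(cost)]
        for i in range(pop_size):
            r1, r2 = rng.choice(pop_size, size=2, replace=False)
            mutant = best + F * (pop[r1] - pop[r2])    # DE/best/1
            cross = rng.random(dim) < CR               # binomial crossover
            cross[rng.integers(dim)] = True            # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            c = obj(trial)
            if c <= cost[i]:                           # greedy selection
                pop[i], cost[i] = trial, c
    i_best = np.argmin(cost)
    return pop[i_best], float(cost[i_best])

sphere = lambda x: float(np.sum(x ** 2))   # toy misfit function
x_best, f_best = de_best_1(sphere)
print(f_best)
```

DE/best/1 is the most exploitative of the three strategies, which matches the abstract's finding that it converges with the least computational cost on these low-dimensional problems.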
NASA Astrophysics Data System (ADS)
Belkebir, Kamal; Saillard, Marc
2005-12-01
This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarization measurements have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions A Abubakar, P M van den Berg and T M
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms, performing explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempting to match the observations to the predicted brightness temperatures.
An algorithmic framework for Mumford-Shah regularization of inverse problems in imaging
NASA Astrophysics Data System (ADS)
Hohm, Kilian; Storath, Martin; Weinmann, Andreas
2015-11-01
The Mumford-Shah model is a very powerful variational approach for edge-preserving regularization of image reconstruction processes. However, it is algorithmically challenging because one has to deal with a non-smooth and non-convex functional. In this paper, we propose a new efficient algorithmic framework for Mumford-Shah regularization of inverse problems in imaging. It is based on a splitting into specific subproblems that can be solved exactly. We derive fast solvers for the subproblems which are key for an efficient overall algorithm. Our method neither requires a priori knowledge of the gray or color levels nor of the shape of the discontinuity set. We demonstrate the wide applicability of the method for different modalities. In particular, we consider the reconstruction from Radon data, inpainting, and deconvolution. Our method can be easily adapted to many further imaging setups. The relevant condition is that the proximal mapping of the data fidelity can be evaluated within a reasonable time. In other words, it can be used whenever classical Tikhonov regularization is possible.
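The "relevant condition" above is that the proximal mapping of the data fidelity is cheap to evaluate. As a hedged illustration (not the paper's solver), the prox of a simple quadratic fidelity has a closed form, which is the kind of exactly solvable subproblem the splitting relies on:

```python
import numpy as np

def prox_quadratic(v, b, tau):
    """Proximal mapping of the quadratic data fidelity f(u) = 0.5*||u - b||^2:
    argmin_u 0.5*||u - b||^2 + (1/(2*tau))*||u - v||^2, solved in closed form
    by setting the gradient to zero: u = (v + tau*b) / (1 + tau)."""
    return (v + tau * np.asarray(b)) / (1.0 + tau)

b = np.array([1.0, 2.0, 3.0])   # observed data
v = np.zeros(3)                 # current iterate from the splitting scheme
u = prox_quadratic(v, b, tau=1.0)
print(u)                        # -> [0.5 1.  1.5], halfway between v and b
```

For more general fidelities (e.g. Radon data), the prox requires solving a linear system, but as long as that is tractable the same splitting framework applies.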
Mallick, S.
1999-03-01
In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo-type inversion that uses a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information on the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.
Combinatorial Algorithms for Computing Column Space Bases That Have Sparse Inverses
Pinar, Ali; Chow, Edmond; Pothen, Alex
2005-03-18
This paper presents a combinatorial study on the problem of constructing a sparse basis for the null-space of a sparse, underdetermined, full-rank matrix, A. Such a null-space basis is suitable for solving many saddle-point problems. Our approach is to form a column space basis of A that has a sparse inverse, by selecting suitable columns of A. This basis is then used to form a sparse null-space basis in fundamental form. We investigate three different algorithms for computing the column space basis: two greedy approaches that rely on matching, and a third employing a divide-and-conquer strategy implemented with hypergraph partitioning followed by the greedy approach. We also discuss the complexity of selecting a column basis when it is known that a block diagonal basis exists with a small given block size.
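A minimal sketch of the fundamental-form construction, assuming a toy partition A = [B | N] in which the first m columns happen to form the basis B (the paper's algorithms are precisely about choosing those columns so that B has a sparse inverse):

```python
import numpy as np

# Toy underdetermined, full-row-rank matrix A = [B | N].
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 2.0]])
m, n = A.shape
B, N = A[:, :m], A[:, m:]       # assume the first m columns form the basis

# Fundamental null-space basis Z = [[-B^{-1} N], [I]]; Z inherits sparsity
# from B^{-1}, which is why a column basis with a sparse inverse is sought.
Z = np.vstack([-np.linalg.solve(B, N), np.eye(n - m)])
print(np.allclose(A @ Z, 0))    # columns of Z lie in the null-space of A
```

In the saddle-point setting, a sparser Z directly reduces the cost of null-space projections, which motivates the combinatorial column-selection problem.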
Self-potential data inversion through a Genetic-Price algorithm
NASA Astrophysics Data System (ADS)
Di Maio, R.; Rani, P.; Piegari, E.; Milano, L.
2016-09-01
A global optimization method based on a Genetic-Price hybrid Algorithm (GPA) is proposed for identifying the source parameters of self-potential (SP) anomalies. The effectiveness of the proposed approach is tested on synthetic SP data generated by simple polarized structures, such as a sphere, a vertical cylinder, a horizontal cylinder and an inclined sheet. An extensive numerical analysis of signals affected by different percentages of white Gaussian random noise shows that the GPA is able to provide fast and accurate estimates of the true parameters in all tested examples. In particular, the calculation of the root-mean-square error between the true and inverted SP parameter sets is found to be crucial for the identification of the source anomaly shape. Finally, applications of the GPA to self-potential field data are presented and discussed in light of the results provided by other sophisticated inversion methods.
Inversion Algorithms for Water Vapor Radiometers Operating at 20.7 and 31.4 GHz
NASA Technical Reports Server (NTRS)
Resch, G. M.
1984-01-01
Eight water vapor radiometers (WVRs) were constructed as research and development tools to support the Advanced System Programs in the Deep Space Network and the Crustal Dynamics Project. These instruments are intended to operate at the stations of the Deep Space Network (DSN), various radio observatories, and mobile facilities that participate in very long baseline interferometric (VLBI) experiments. It is expected that the WVRs will operate in a wide range of meteorological conditions. Several algorithms are discussed that are used to estimate the line-of-sight path delay due to water vapor and columnar liquid water from the observed microwave brightness temperatures provided by the WVRs. In particular, systematic effects due to site and seasonal variations are examined. The accuracy of the estimation, as indicated by a simulation calculation, is approximately 0.3 cm for a noiseless WVR in clear and moderately cloudy weather. With a realistic noise model of WVR behavior, the inversion accuracy is approximately 0.6 cm.
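A common form for such WVR inversion algorithms is a (site- and season-dependent) linear retrieval of path delay from the two brightness temperatures. The sketch below fits the retrieval coefficients by least squares on synthetic data; the coefficients, temperature ranges and noise level are invented for illustration and are not the report's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "training" set: assume the wet path delay (cm) is roughly linear
# in the two brightness temperatures (K); coefficients are illustrative only.
tb_207 = rng.uniform(20.0, 80.0, 200)      # 20.7 GHz brightness temperature
tb_314 = rng.uniform(15.0, 60.0, 200)      # 31.4 GHz brightness temperature
delay = 1.5 + 0.30 * tb_207 - 0.15 * tb_314 + rng.normal(0, 0.3, 200)

# Retrieval algorithm: least-squares fit of delay = c0 + c1*Tb207 + c2*Tb314.
X = np.column_stack([np.ones_like(tb_207), tb_207, tb_314])
coef, *_ = np.linalg.lstsq(X, delay, rcond=None)
rms = np.sqrt(np.mean((X @ coef - delay) ** 2))
print(coef.round(2), round(rms, 2))        # rms near the simulated noise level
```

In practice the coefficients would be re-derived per site and season, which is exactly the systematic variation the report examines.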
NASA Astrophysics Data System (ADS)
Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk; Shin, Jong-Jin
2016-09-01
Infrared signals are widely used to discriminate objects against the background. Prediction of the infrared signal from an object's surface is essential in evaluating the detectability of the object. An appropriate and straightforward method of obtaining the radiative properties, such as the surface emissivity and bidirectional reflectivity, is important in estimating infrared signals. Direct measurement can be a good choice, but it is a costly and time-consuming way of obtaining the radiative properties for surfaces coated with many different newly developed paints. In particular, measurement of the bidirectional reflectivity, usually expressed by the bidirectional reflectance distribution function (BRDF), is the most costly task. In this paper we present an inverse estimation method for the radiative properties that uses the directional radiances from the surface of concern. The inverse estimation method used in this study is the statistical repulsive particle swarm optimization (RPSO) algorithm, which uses randomly picked directional radiance data emitted and reflected from the surface. We test the proposed inverse method by considering the radiation from a steel plate surface coated with different paints under clear-sky daytime conditions. For convenience, the directional radiance data from the steel plate within a spectral band of concern are obtained from a simulation using the commercial software RadthermIR, instead of from field measurement. A widely used BRDF model known as the Sandford-Robertson (S-R) model is considered, and the RPSO process is then used to find the best-fit model parameters for the S-R model. The results obtained from this study show an excellent agreement with the reference property data used for the simulation of directional radiances. The proposed process can be a useful way of obtaining the radiative properties from field-measured directional radiance data for surfaces coated with or without various kinds of paints of unknown radiative
Liu, Tian; Xu, Weiyu; Spincemaille, Pascal; Avestimehr, A. Salman
2013-01-01
Determining the susceptibility distribution from the magnetic field measured in a magnetic resonance (MR) scanner is an ill-posed inverse problem, because of the presence of zeroes in the convolution kernel of the forward problem. An algorithm called morphology enabled dipole inversion (MEDI), which incorporates spatial prior information, has been proposed to generate a quantitative susceptibility map (QSM). The accuracy of QSM can be validated experimentally. However, there is not yet a rigorous mathematical demonstration of accuracy for a general regularized approach or for MEDI specifically. The error in the susceptibility map reconstructed by MEDI is expressed in terms of the acquisition noise and the error in the spatial prior information. A detailed analysis demonstrates that the error in the susceptibility map reconstructed by MEDI is bounded by a linear function of these two error sources. Numerical analysis confirms that the error of the susceptibility map reconstructed by MEDI is on the same order as the noise in the original MRI data, and that comprehensive edge detection will lead to reduced model error in MEDI. Additional phantom validation and human brain imaging demonstrated the practicality of the MEDI method. PMID:22231170
NASA Astrophysics Data System (ADS)
Ying, Sibin; Ai, Jianliang; Luo, Changhang; Wang, Peng
2006-11-01
Non-linear Dynamic Inversion (NDI) is a technique for control law design based on feedback linearization that achieves desired dynamic response characteristics. NDI requires an ideal, precise model; in practice, however, errors arise from modeling inaccuracies or actuator faults, so a control law designed by NDI alone lacks robustness. Combined with the structured singular value (μ) synthesis method, the system's robustness can be improved notably. A controller designed with the structured singular value μ synthesis method is of high order, and its order must be reduced for practical computation. This paper presents a new method for robust flight control design that uses structured singular value μ synthesis based on a genetic algorithm. A controller designed with this method has clearly reduced order compared with the normal method of structured singular value synthesis, and so is easier to apply. The presented method is applied to robust controller design for a supermaneuverable fighter. The simulation results show that the dynamic inversion control law achieves a high level of performance in post-stall maneuver conditions, and that the whole control system has excellent robustness and disturbance rejection.
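The core NDI idea, feedback linearization that imposes desired dynamics, can be shown on a scalar toy plant. The dynamics f, control effectiveness g and gain k below are invented for illustration and have nothing to do with the fighter model in the paper.

```python
# Scalar nonlinear plant xdot = f(x) + g(x)*u. NDI cancels the known dynamics
# and imposes the desired linear response xdot = v = k*(x_ref - x).
f = lambda x: -x + x ** 3       # illustrative nonlinear dynamics (made up)
g = lambda x: 2.0               # control effectiveness, assumed nonzero

def ndi_control(x, x_ref, k=5.0):
    v = k * (x_ref - x)         # desired dynamics
    return (v - f(x)) / g(x)    # dynamic inversion control law

# Forward-Euler simulation: with perfect model knowledge the closed loop
# behaves exactly like the linear system xdot = k*(x_ref - x).
x, x_ref, dt = 0.0, 1.0, 0.01
for _ in range(500):
    u = ndi_control(x, x_ref)
    x += dt * (f(x) + g(x) * u)
print(round(x, 3))              # -> 1.0 (tracks the reference)
```

The fragility the abstract mentions is visible here: if f or g in the controller differ from the true plant, the cancellation is imperfect, which is what the μ-synthesis outer loop is meant to tolerate.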
Three-dimensional inverse modelling of magnetic anomaly sources based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Montesinos, Fuensanta G.; Blanco-Montenegro, Isabel; Arnoso, José
2016-04-01
We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to be different in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm which finds the geometry of the sources by seeking the optimum solution from an initial population of models in successive iterations through an evolutionary process. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data and a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints for the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data of the volcanic island of Gran Canaria (Canary Islands).
An inverse kinematics algorithm for a highly redundant variable-geometry-truss manipulator
NASA Technical Reports Server (NTRS)
Naccarato, Frank; Hughes, Peter
1989-01-01
A new class of robotic arm consists of a periodic sequence of truss substructures, each of which has several variable-length members. Such variable-geometry-truss manipulators (VGTMs) are inherently highly redundant and promise a significant increase in dexterity over conventional anthropomorphic manipulators. This dexterity may be exploited for both obstacle avoidance and controlled deployment in complex workspaces. The inverse kinematics problem for such unorthodox manipulators, however, becomes complex because of the large number of degrees of freedom, and conventional solutions to the inverse kinematics problem become inefficient because of the high degree of redundancy. A solution to this problem is presented, based on a spline-like reference curve for the manipulator's shape. Such an approach has a number of advantages: (1) direct, intuitive manipulation of shape; (2) reduced calculation time; and (3) direct control over the effective degree of redundancy of the manipulator. Furthermore, although the algorithm was developed primarily for variable-geometry-truss manipulators, it is general enough for application to a number of manipulator designs.
NASA Astrophysics Data System (ADS)
Xiang, Shiming; Zhang, Haijiang
2016-11-01
It is known that full-waveform inversion (FWI) is generally ill-conditioned, and various strategies, including pre-conditioning and regularizing the inversion system, have been proposed to obtain a reliable estimate of the velocity model. Here, we propose a new edge-guided strategy for frequency-domain FWI to efficiently and reliably estimate velocity models with structures of size similar to the seismic wavelength. The edges of the velocity model at the current iteration are first detected by the Canny edge detection algorithm that is widely used in image processing. Then, the detected edges are used to guide the calculation of the FWI gradient as well as to enforce edge-preserving total variation (TV) regularization for the next FWI iteration. Bilateral filtering is further applied to remove noise from the FWI gradient while keeping its edges. The proposed edge-guided frequency-domain FWI, with edge-guided TV regularization and bilateral filtering, is designed to preserve model edges recovered from previous iterations as well as from lower-frequency waveforms when FWI is conducted from lower to higher frequencies. The new FWI method is validated using the complex Marmousi model, which contains several steeply dipping fault zones and hundreds of horizons. Compared to FWI without edge guidance, our proposed edge-guided FWI recovers velocity model anomalies and edges much better. Unlike previous image-guided FWI or edge-guided TV regularization strategies, our method does not require migrating seismic data, and is thus more efficient for real applications.
NASA Astrophysics Data System (ADS)
Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.
2016-11-01
The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows, data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to breakthroughs in modelling shallow wet-dry front dynamics. The present study aims at deriving an accurate, positive and stable finite volume scheme in the presence of dynamic wet-dry fronts, together with corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood plain test case (the Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are computed and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the algorithms' efficiency.
Identify Structural Flaw Location and Type with an Inverse Algorithm of Resonance Inspection
Xu, Wei; Lai, Canhai; Sun, Xin
2015-10-20
To evaluate the fitness-for-service of a structural component and to quantify its remaining useful life, aging and service-induced structural flaws must be quantitatively determined in service or during scheduled maintenance shutdowns. Resonance inspection (RI), a non-destructive evaluation (NDE) technique, distinguishes the anomalous parts from the good parts based on changes in the natural frequency spectra. Known for its numerous advantages, e.g., low inspection cost, high testing speed, and broad applicability to complex structures, RI has been widely used in the automobile industry for quality inspection. However, compared to other contemporary direct visualization-based NDE methods, a more widespread application of RI faces a fundamental challenge because such technology is unable to quantify the flaw details, e.g., location, dimensions, and types. In this study, the applicability of a maximum correlation-based inverse RI algorithm developed by the authors is further studied for various flaw cases. It is demonstrated that a variety of common structural flaws, e.g., stiffness degradation, voids, and cracks, can be accurately retrieved by this algorithm even when multiple different types of flaws coexist. The quantitative relations between the damage identification results and the flaw characteristics are also developed to assist the evaluation of the actual state of health of the engineering structures.
NASA Astrophysics Data System (ADS)
Zhao, Fengjun; Qu, Xiaochao; Zhang, Xing; Poon, Ting-Chung; Kim, Taegeun; Kim, You Seok; Liang, Jimin
2012-03-01
Optical imaging takes advantage of coherent optics and has promoted the development of visualization in biological applications. Based on temporal coherence, optical coherence tomography can deliver three-dimensional optical images with superior resolution, but the axial and lateral scanning is a time-consuming process. Optical scanning holography (OSH) is a spatial coherence technique which integrates a three-dimensional object into a two-dimensional hologram through a two-dimensional optical scanning raster. The advantages of high lateral resolution and fast image acquisition give it great potential for three-dimensional optical imaging, but the prerequisite is an accurate and practical reconstruction algorithm. A conventional method was first adopted to reconstruct sectional images and obtained fine results, but some drawbacks restricted its practicality. An optimization method based on the l2 norm obtained more accurate results than the conventional method, but the intrinsic smoothing of the l2 norm blurs the reconstruction results. In this paper, a hard-threshold based sparse inverse imaging algorithm is proposed to improve sectional image reconstruction. The proposed method is characterized by hard-threshold based iteration with a shrinkage threshold strategy, which involves only lightweight vector operations and matrix-vector multiplications. The performance of the proposed method has been validated by a real experiment, which demonstrated a great improvement in reconstruction accuracy at an appropriate computational cost.
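A hard-threshold based sparse inversion of the kind described can be sketched as iterative hard thresholding on a generic linear model; the measurement matrix, sparsity level and unit step size below are toy assumptions, not the paper's OSH forward model.

```python
import numpy as np

rng = np.random.default_rng(2)

def iht(A, y, k, iters=500):
    """Hard-threshold based sparse inversion (IHT-style sketch): a gradient
    step on 0.5*||A x - y||^2, then keep only the k largest-magnitude
    entries. Uses only matrix-vector products and vector operations."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x)            # gradient step
        x[np.argsort(np.abs(x))[:-k]] = 0.0  # hard threshold to sparsity k
    return x

# Toy stand-in for sectional reconstruction: recover a k-sparse object from
# m < n linear measurements; A has orthonormal rows so a unit step is stable.
n, m, k = 40, 30, 3
A = np.linalg.qr(rng.normal(size=(n, m)))[0].T
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [2.0, -2.5, 1.5]
y = A @ x_true                               # noiseless data
x_hat = iht(A, y, k)
print(np.linalg.norm(x_hat - x_true))
```

The hard threshold is what avoids the uniform shrinkage of l2-based optimization, which is the blurring effect the abstract criticizes.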
MASS SUBSTRUCTURE IN ABELL 3128
McCleary, J.; Dell’Antonio, I.; Huwe, P.
2015-05-20
We perform a detailed two-dimensional weak gravitational lensing analysis of the nearby (z = 0.058) galaxy cluster Abell 3128 using deep ugrz imaging from the Dark Energy Camera (DECam). We have designed a pipeline to remove instrumental artifacts from DECam images and stack multiple dithered observations without inducing a spurious ellipticity signal. We develop a new technique to characterize the spatial variation of the point-spread function that enables us to circularize the field to better than 0.5% and thereby extract the intrinsic galaxy ellipticities. By fitting photometric redshifts to sources in the observation, we are able to select a sample of background galaxies for weak-lensing analysis free from low-redshift contaminants. Photometric redshifts are also used to select a high-redshift galaxy subsample with which we successfully isolate the signal from an interloping z = 0.44 cluster. We estimate the total mass of Abell 3128 by fitting the tangential ellipticity of background galaxies with the weak-lensing shear profile of a Navarro–Frenk–White (NFW) halo and also perform NFW fits to substructures detected in the 2D mass maps of the cluster. This study yields one of the highest resolution mass maps of a low-z cluster to date and is the first step in a larger effort to characterize the redshift evolution of mass substructures in clusters.
NASA Astrophysics Data System (ADS)
Harker, Brian J.
The measurement of vector magnetic fields on the sun is one of the most important diagnostic tools for characterizing solar activity. The ubiquitous solar wind is guided into interplanetary space by open magnetic field lines in the upper solar atmosphere. Highly-energetic solar flares and Coronal Mass Ejections (CMEs) are triggered in lower layers of the solar atmosphere by the driving forces at the visible "surface" of the sun, the photosphere. The driving forces there tangle and interweave the vector magnetic fields, ultimately leading to an unstable field topology with large excess magnetic energy, and this excess energy is suddenly and violently released by magnetic reconnection, emitting intense broadband radiation that spans the electromagnetic spectrum, accelerating billions of metric tons of plasma away from the sun, and finally relaxing the magnetic field to lower-energy states. These eruptive flaring events can have severe impacts on the near-Earth environment and the human technology that inhabits it. This dissertation presents a novel inversion method for inferring the properties of the vector magnetic field from telescopic measurements of the polarization states (Stokes vector) of the light received from the sun, in an effort to develop a method that is fast, accurate, and reliable. One of the long-term goals of this work is to develop such a method that is capable of rapidly producing characterizations of the magnetic field from time-sequential data, such that near real-time projections of the complexity and flare productivity of solar active regions can be made. This will be a boon to the field of solar flare forecasting, and should help mitigate the harmful effects of space weather on mankind's space-based endeavors. To this end, I have developed an inversion method based on genetic algorithms (GA) that have the potential for achieving such high-speed analysis.
Wang, Hong; Wang, Xi-cheng
2014-02-21
Metabolism is a very important cellular process and its malfunction contributes to human disease. Therefore, building dynamic models of metabolic networks from experimental data in order to analyze biological processes rationally has attracted a lot of attention. Owing to technical limitations, some unknown parameters contained in the models need to be estimated effectively by computational methods. Generally, parameter estimation problems for nonlinear biological networks are known to be ill-conditioned and multimodal. In particular, as the number and range of parameters increase, many optimization algorithms often fail to find a global solution. In this paper, a two-stage variable-factor Bregman regularization homotopy method is proposed. Discrete homotopy is used to identify the possible extreme region, and continuous homotopy is executed for stability of path tracing within that region. Meanwhile, Latin hypercube sampling is introduced to obtain a good initial guess, and a perturbation strategy is developed to jump out of local optima. Three metabolic network inverse problems are investigated to demonstrate the effectiveness of the proposed method.
Improving excitation and inversion accuracy by optimized RF pulse using genetic algorithm.
Pang, Yong; Shen, Gary X
2007-05-01
In this study, a genetic algorithm (GA) is introduced to optimize the multidimensional spatially selective RF pulse to reduce the passband and stopband errors of the excitation profile while limiting the transition width. This method is also used to diminish the nonlinearity effect of the Bloch equation in large-tip-angle excitation pulse design. The RF pulse is first designed by the k-space method and then coded into float strings to form an initial population. GA operators are then applied to this population to perform evolution, which is an optimization process. In this process, an evaluation function, defined as the sum of the reciprocals of the passband and stopband errors, is used to assess the fitness value of each individual, so as to find the best individual in the current generation. The RF pulse can thus be optimized after a number of iterations. Simulation results of the Bloch equation show that in a 90 degrees excitation pulse design, compared with the k-space method, a GA-optimized RF pulse can reduce the passband and stopband errors by 12% and 3%, respectively, while maintaining the transition width within 2 cm (about 12% of the whole 32 cm FOV). In a 180 degrees inversion pulse design, the passband error can be reduced by 43%, while the transition width is also kept at 2 cm in a whole 32 cm FOV.
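The evaluation function described (sum of the reciprocals of the passband and stopband errors, with a 2 cm transition band left unconstrained) might be sketched as below; the profile discretization and the candidate profiles are invented for illustration, and a real implementation would obtain the profile from a Bloch simulation of the pulse.

```python
import numpy as np

def fitness(profile, passband, stopband, eps=1e-12):
    """Evaluation function from the abstract: sum of the reciprocals of the
    passband and stopband errors of an excitation profile (1 = full flip).
    eps guards against division by zero for an error-free profile."""
    pass_err = np.mean((profile[passband] - 1.0) ** 2)
    stop_err = np.mean(profile[stopband] ** 2)
    return 1.0 / (pass_err + eps) + 1.0 / (stop_err + eps)

# 32 cm FOV sampled at 0.5 cm; excite |z| <= 4 cm, suppress |z| >= 6 cm, and
# leave the 2 cm transition bands unconstrained, as in the abstract.
z = np.arange(-16.0, 16.0, 0.5)
passband = np.abs(z) <= 4.0
stopband = np.abs(z) >= 6.0
ideal = passband.astype(float)
ripply = ideal + 0.05 * np.sin(z)    # a candidate profile with ripple
print(fitness(ideal, passband, stopband) > fitness(ripply, passband, stopband))
```

Each GA individual (a float-string encoding of the pulse) would be scored this way, and selection then favors pulses whose simulated profiles have small in-band and out-of-band errors.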
NASA Astrophysics Data System (ADS)
Bao, Xingxian; Cao, Aixia; Zhang, Jing
2016-07-01
Modal parameter estimation plays an important role in structural health monitoring. Accurately estimating the modal parameters of structures becomes more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm that combines the solution of the partially described inverse singular value problem (PDISVP) with the complex exponential (CE) method to estimate the modal parameters. The PDISVP solving method reconstructs an L2-norm-optimized (filtered) data matrix from the measured (noisy) data matrix, when the prescribed data constraints are one or several sets of singular triplets of the matrix. The measured data matrix is Hankel-structured, constructed from the measured impulse response function (IRF). The reconstructed matrix must maintain the Hankel structure and be lowered in rank as well. Once the filtered IRF is obtained, the CE method can be applied to extract the modal parameters. Two physical experiments, a steel cantilever beam with 10 accelerometers and a steel plate with 30 accelerometers, each excited by an impulsive load, are investigated to test the applicability of the proposed scheme. In addition, a consistency diagram is proposed to examine the agreement among the modal parameters estimated from the different accelerometers. Results indicate that the PDISVP-CE method can significantly remove noise from measured signals and accurately estimate the modal frequencies and damping ratios.
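The structured low-rank filtering step can be illustrated with a Cadzow-style sketch, a generic surrogate for the paper's PDISVP reconstruction rather than its exact algorithm: build a Hankel matrix from the IRF, truncate its SVD, and average anti-diagonals to restore the Hankel structure:

```python
import numpy as np

def hankel_lowrank_filter(irf, rows, rank):
    """Denoise an impulse response by one Hankel / low-rank projection pass
    (illustrative stand-in for the structured PDISVP reconstruction)."""
    n = len(irf)
    cols = n - rows + 1
    # Hankel matrix: row i holds irf[i], irf[i+1], ..., irf[i+cols-1]
    H = np.array([irf[i:i + cols] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-`rank` approximation
    # anti-diagonal averaging maps the result back to a Hankel (signal) form
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(rows):
        out[i:i + cols] += Hr[i]
        cnt[i:i + cols] += 1
    return out / cnt
```

A noise-free signal made of k damped sinusoids yields a Hankel matrix of rank 2k, which motivates the rank choice.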
NASA Astrophysics Data System (ADS)
Brossier, R.
2011-04-01
Full waveform inversion (FWI) is an appealing seismic data-fitting procedure for deriving high-resolution quantitative models of the subsurface at various scales. Full modelling and inversion of visco-elastic waves from multiple seismic sources allow the recovery of different physical parameters, although they remain computationally challenging tasks. An efficient massively parallel, frequency-domain FWI algorithm is implemented here on large-scale distributed-memory platforms for imaging two-dimensional visco-elastic media. The resolution of the elastodynamic equations, as the forward problem of the inversion, is performed in the frequency domain on unstructured triangular meshes, using a low-order discontinuous Galerkin finite element method. The linear system resulting from discretization of the forward problem is solved with a parallel direct solver. The inverse problem, formulated as a non-linear local optimization problem, is solved in parallel with a quasi-Newton method, which allows reliable estimation of multiple classes of visco-elastic parameters. Two levels of parallelism are implemented in the algorithm, based on message-passing interfaces and multi-threading, for optimal use of the computational time and core-memory resources available on modern distributed-memory multi-core platforms. The algorithm allows imaging of realistic targets at various scales, ranging from near-surface geotechnical applications to crustal-scale exploration.
Inversion of Airborne Passive Microwave Data for Snow Properties using the Metropolis Algorithm
NASA Astrophysics Data System (ADS)
Vander Jagt, B.; Durand, M. T.; Margulis, S. A.; Molotch, N. P.; Kim, E. J.
2012-12-01
Passive microwave (PM) remote sensing of snow is based on the fact that microwave brightness temperatures contain information about different snow properties, including depth, grain size, and density. These snow properties are highly spatially heterogeneous, and often prove difficult to invert using traditional algorithms. This is mainly due to the dynamic, many-to-one nature of the relationship between the PM signal and the different snow properties, the coarse resolution of the observations as compared to the fine spatial scale at which snow properties vary, and the masking of the PM signal by varying amounts and types of vegetation. While multi-frequency PM observations can help reduce the many-to-one nature associated with the snow states by constraining the number of potential solutions, the vertical heterogeneity and layering of snow properties often lead to errors in the inversion process when little a priori information exists on the vertical structure of the snowpack. Using a new algorithm, specifically a Bayesian Markov Chain Monte Carlo scheme solved using the Metropolis algorithm, we attempt to invert the airborne passive microwave data collected during the Cold Land Processes Experiment (CLPX) to estimate the spatial snow properties within the different study areas, with virtually no a priori information. We allowed the number of snowpack layers itself to be unknown by generating different chains for each possible number of layers (up to a maximum of four), then selecting the optimal chain using a model selection criterion. We then evaluate our accuracy using real datasets, specifically the measured in-situ snow properties that were collected from snow pits during CLPX, and compare our results across a large range of different snow and climatic environments. Synthetic results show that an accurate solution for the number of layers, layer thickness, density, grain size, snow temperature and ground temperature can be obtained from microwave measurements
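The Metropolis sampler at the heart of such a scheme is simple to state; below is a generic random-walk sketch with a placeholder log-posterior callable standing in for the snow-property forward model and priors:

```python
import numpy as np

def metropolis(log_post, x0, step, n_iter=5000, rng=None):
    """Random-walk Metropolis sampler: propose a Gaussian perturbation and
    accept it with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x  # rejected proposals repeat the current state
    return chain
```

In the layered-snowpack setting, one such chain would be run per candidate number of layers, and the chains compared with a model selection criterion.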
NASA Astrophysics Data System (ADS)
Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao
2016-08-01
We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with Tikhonov regularization. We propose a new data-weighting method that minimizes the initial-model dependence and enhances convergence stability. On a laptop with an i7-5700HQ CPU at 3.5 GHz, an inversion iteration for a 200-layer input model with a single receiver takes only 0.34 s, increasing to only 0.53 s for data from four receivers at the same depth. For four receivers at different depths, the iteration runtime increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of loop geometry is significant at early times and gradually vanishes as the TEM field diffuses. For a stratified earth, inverting data from more than one receiver helps reduce noise and yields a more credible layered-earth model. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not significantly improve recovery of the resistive layer. Even with a down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above it. However, our modeling demonstrates remarkable improvement in detecting the resistive layer with receivers in or under this layer.
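The Gauss-Newton iteration with Tikhonov damping can be sketched generically; here the TEM forward model and its sensitivities are abstracted into `fwd` and `jac` callables, so this is an illustration of the iteration, not the authors' implementation:

```python
import numpy as np

def gauss_newton_tikhonov(fwd, jac, d_obs, m0, lam, n_iter=30):
    """Damped Gauss-Newton loop: at each step solve the regularized normal
    equations (J^T J + lam*I) dm = J^T r for the model update dm."""
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = d_obs - fwd(m)          # data residual
        J = jac(m)                  # sensitivity (Jacobian) matrix
        A = J.T @ J + lam * np.eye(m.size)
        m = m + np.linalg.solve(A, J.T @ r)
    return m
```

In practice the data residual would first be scaled by the proposed data weights, and `lam` chosen by a regularization-parameter strategy.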
NASA Astrophysics Data System (ADS)
Hetmaniok, Edyta
2016-07-01
In this paper a procedure for solving the inverse problem of binary alloy solidification in a casting mould is presented. The proposed approach is based on a mathematical model describing the investigated solidification process, the lever-arm model describing the macrosegregation process, the finite element method for solving the direct problem, and the artificial bee colony algorithm for minimizing a functional expressing the error of the approximate solution. The goal of the discussed inverse problem is the reconstruction of the heat transfer coefficient and the temperature distribution in the investigated region on the basis of known temperature measurements.
Honarvar, Mohammad; Sahebjavaher, Ramin; Rohling, Robert; Salcudean, Septimiu
2017-03-22
In quantitative elastography, maps of the mechanical properties of soft tissue, or elastograms, are calculated from measured displacement data by solving an inverse problem. The model assumptions have a significant effect on the elastograms. Motivated by the high sensitivity of imaging results to the model assumptions in in-vivo Magnetic Resonance Elastography (MRE) of the prostate, we compared elastograms obtained with four different methods. Two FEM-based methods developed by our group were compared with two other commonly used methods, the Local Frequency Estimator (LFE) and curl-based Direct Inversion (c-DI). All the methods assume a linear isotropic elastic model, but they vary in their further assumptions, such as local homogeneity or incompressibility, and in the specific approach used. We report results using simulations, phantom, ex-vivo and in-vivo data. The simulation and phantom studies show that, for regions with an inclusion, the contrast-to-noise ratio (CNR) for the FEM methods is about 3-5 times higher than the CNR for LFE and c-DI, and the RMS error is about half. The LFE method produces very smooth results (i.e. low CNR) and is fast. c-DI is faster than the FEM methods but is only accurate in areas where elasticity variations are small; the artifacts resulting from the homogeneity assumption in c-DI are detrimental in regions with large variations. The ex-vivo and in-vivo results show trends similar to the simulation and phantom studies. The c-FEM method is more sensitive to noise than the mixed-FEM method because of its higher-order derivatives. This is especially evident at lower frequencies, where the wave curvature is smaller and more prone to such error, causing a discrepancy in the absolute values between the mixed-FEM and c-FEM in our in-vivo results. In general, the proposed finite element methods use fewer simplifying assumptions and outperform the other methods, but they are computationally more expensive.
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R), normal (N-N), and logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
A new damping factor algorithm based on line search of the local minimum point for inverse approach
NASA Astrophysics Data System (ADS)
Zhang, Yaqi; Liu, Weijie; Lu, Fang; Zhang, Xiangkui; Hu, Ping
2013-05-01
The influence of the damping factor on the convergence and computational efficiency of the inverse approach was studied through a series of practical examples. A new selection algorithm for the damping (relaxation) factor, which takes into account both robustness and computational efficiency, is proposed; a computer program was implemented and tested on Siemens PLM NX | One-Step. The results are compared with the traditional Armijo rule through six examples, such as a U-beam, a square box and a cylindrical cup, confirming the effectiveness of the proposed algorithm.
Kinugawa, Tohru
2014-02-15
This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first is to reveal the linear relation between the transit-time T and the travel-length X hidden behind the isochronicity problem, which is usually discussed in terms of the nonlinear equation of motion d²X/dt² + dU/dX = 0 with U(X) an unknown potential. Second, the isochronicity condition is extended for a possible Abel-transform approach to designing isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27-29]. The same formula is used to treat non-periodic motion driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π) ∫_0^E dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit-time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we have introduced the extended isochronicity condition: typically, τ = T_A(E) + T_N(E), where τ is a constant period, T_A(E) is the transit-time in the Abel-type [A-type] region spanning X > 0 and T_N(E) is that in the non-Abel-type [N-type] region covering X < 0. For the A-type region in X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region in X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation. In discussing
Ultra-Scalable Algorithms for Large-Scale Uncertainty Quantification in Inverse Wave Propagation
2016-03-04
inferring, with associated uncertainty, the heterogeneity of a medium or the shape of a scatterer from reflected/transmitted waves (acoustic, elastic, electromagnetic) at very large scale. Keywords: acoustic, elastic, and electromagnetic wave propagation; discontinuous Petrov-Galerkin method; volume integral equations; fast multipole method; FFT; inverse problems. The resulting Bayesian wave inverse propagation problem has been
Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.
2011-01-01
Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to laterally homogeneous (or very smoothly laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and, hence, does not have such a limitation. Waveforms of Rayleigh waves are strongly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that S-wave velocities can be recovered successfully, with errors no more than 10%, for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, for which conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
NASA Technical Reports Server (NTRS)
Bayo, Eduardo; Ledesma, Ragnar
1993-01-01
A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end-effector trajectory.
Rapid Inversion of Angular Deflection Data for Certain Axisymmetric Refractive Index Distributions
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Greenberg, P. S.
1994-01-01
Certain functions useful for representing axisymmetric refractive-index distributions are shown to have exact solutions for the Abel transformation of the resulting angular deflection data. An advantage of this procedure over direct numerical Abel inversion is that least-squares curve fitting is a smoothing process that reduces the noise sensitivity of the computation.
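The idea of fitting with basis functions whose Abel transforms are known in closed form can be illustrated with a Gaussian basis, using the identity that the Abel transform of exp(-r²/s²) is s√π exp(-y²/s²); this is an illustrative sketch, not the paper's particular function family:

```python
import numpy as np

def abel_invert_gaussian_fit(y, proj, sigmas):
    """Least-squares Abel inversion: fit the measured projection with
    Gaussians whose Abel transforms are analytic, so the radial profile
    follows in closed form from the fitted coefficients."""
    # design matrix: Abel transform of each basis Gaussian at heights y
    A = np.stack([s * np.sqrt(np.pi) * np.exp(-(y / s) ** 2) for s in sigmas], axis=1)
    c, *_ = np.linalg.lstsq(A, proj, rcond=None)

    def radial(r):
        # reconstructed radial profile: same coefficients, un-transformed basis
        r = np.asarray(r, dtype=float)
        return sum(ck * np.exp(-(r / s) ** 2) for ck, s in zip(c, sigmas))

    return radial
```

The least-squares fit is the smoothing step: noise that cannot be represented by the basis is rejected before the (analytic) inversion.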
NASA Astrophysics Data System (ADS)
Adhikari, Loknath; Xie, Feiqin; Haase, Jennifer S.
2016-10-01
With a GPS receiver on board an airplane, the airborne radio occultation (ARO) technique provides dense lower-tropospheric soundings over target regions. Large variations in water vapor in the troposphere cause strong signal multipath, which can lead to systematic errors in RO retrievals with the geometric optics (GO) method. The spaceborne GPS RO community has successfully developed the full-spectrum inversion (FSI) technique to solve the multipath problem. This paper is the first to adapt the FSI technique to retrieve atmospheric properties (bending angle and refractivity) from ARO signals, where it is necessary to compensate for the receiver traveling on a non-circular trajectory inside the atmosphere; its use is demonstrated with an end-to-end simulation system. The forward-simulated GPS L1 (1575.42 MHz) signal amplitude and phase are used to test the modified FSI algorithm. The ARO FSI method is capable of reconstructing the fine vertical structure of the moist lower troposphere in the presence of severe multipath, which otherwise leads to large errors in the GO retrieval. The sensitivity of the modified FSI-retrieved bending angle and refractivity to errors in signal amplitude and in the measured refractivity at the receiver is presented. Accurate bending angle retrievals can be obtained from the surface up to ~250 m below the receiver at typical flight altitudes above the tropopause, above which the retrieved bending angle becomes highly sensitive to phase measurement noise. Abrupt changes in signal amplitude, which are a challenge for receiver tracking and for geometric optics bending angle retrieval techniques, do not produce any systematic bias in the FSI retrievals when the SNR is high. For very low SNR, the FSI performs as expected from theoretical considerations. A 1 % in situ refractivity measurement error at the receiver height can introduce a maximum refractivity retrieval error of 0.5 % (1 K) near the receiver, but
NASA Astrophysics Data System (ADS)
Balogh, Michael L.; Morris, Simon L.
2000-11-01
We present the results of a search for strong Hα emission line galaxies (rest-frame equivalent widths greater than 50 Å) in the z ~ 0.23 cluster Abell 2390. The survey contains 1189 galaxies over 270 arcmin², and is 50 per cent complete at M_r ~ -17.5 + 5 log h. The fraction of galaxies in which Hα is detected at the 2σ level rises from 0.0 in the central regions (excluding the cD galaxy) to 12.5 ± 8 per cent at R200. For 165 of the galaxies in our catalogue, we compare the Hα equivalent widths with their [O II] λ3727 equivalent widths, from the Canadian Network for Observational Cosmology (CNOC1) spectra. The fraction of strong Hα emission line galaxies is consistent with the fraction of strong [O II] emission galaxies in the CNOC1 sample: only 2 ± 1 per cent have no detectable [O II] emission and yet significant (>2σ) Hα equivalent widths. Dust obscuration, non-thermal ionization, and aperture effects are all likely to contribute to this non-correspondence of emission lines. We identify six spectroscopically `secure' k+a galaxies [W0(O II) < 5 Å and W0(Hδ) ≳ 5 Å]; at least two of these show strong signs in Hα of star formation in regions that are covered by the slit from which the spectra were obtained. Thus, some fraction of galaxies classified as k+a based on spectra shortward of 6000 Å are likely to be undergoing significant star formation. These results are consistent with a `strangulation' model for cluster galaxy evolution, in which star formation in cluster galaxies is gradually decreased, and is neither enhanced nor abruptly terminated by the cluster environment.
Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru
2016-05-10
The TaBoo SeArch (TBSA) algorithm [Harada et al., J. Comput. Chem. 2015, 36, 763-772 and Harada et al., Chem. Phys. Lett. 2015, 630, 68-75] was recently proposed as an enhanced conformational sampling method for reproducing biologically relevant rare events of a given protein. In TBSA, an inverse histogram of the original distribution, mapped onto a set of reaction coordinates, is constructed from trajectories obtained by multiple short-time molecular dynamics (MD) simulations. Rarely occurring states of a given protein are statistically selected as new initial states based on the inverse histogram, and resampling is performed by restarting the MD simulations from the new initial states to promote the conformational transition. In this process, the definition of the inverse histogram, which characterizes the rarely occurring states, is crucial for the efficiency of TBSA. In this study, we propose a simple modification of the inverse histogram to further accelerate the convergence of TBSA. As demonstrations of the modified TBSA, we applied it to (a) hydrogen bonding rearrangements of Met-enkephalin, (b) large-amplitude domain motions of Glutamine-Binding Protein, and (c) folding processes of the B domain of Staphylococcus aureus Protein A. All demonstrations numerically proved that the modified TBSA reproduced these biologically relevant rare events with nanosecond-order simulation times, although a set of microsecond-order, canonical MD simulations failed to reproduce the rare events, indicating the high efficiency of the modified TBSA.
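The inverse-histogram selection step can be sketched as follows; the weighting used here (reciprocal of the visit count per reaction-coordinate bin) is a simple stand-in for the paper's exact inverse-histogram definition:

```python
import numpy as np

def select_rare_states(values, n_bins=50, n_select=10, rng=None):
    """Pick snapshot indices preferentially from rarely visited bins of a
    reaction coordinate, by sampling with inverse-histogram weights."""
    rng = np.random.default_rng(rng)
    hist, edges = np.histogram(values, bins=n_bins)
    # bin index of each snapshot (interior edges only, so indices span 0..n_bins-1)
    bins = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
    w = 1.0 / np.maximum(hist[bins], 1)  # weight = inverse of the visit count
    w /= w.sum()
    return rng.choice(len(values), size=n_select, replace=False, p=w)
```

The returned indices would seed the next round of short MD simulations.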
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic parameter adaptation operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the dynamic parameter adaptation process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and achieves superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm provides an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.
NASA Astrophysics Data System (ADS)
Boukabara, S. A.; Garrett, K.
2014-12-01
A one-dimensional variational retrieval system has been developed, capable of producing temperature and water vapor profiles in clear, cloudy and precipitating conditions. The algorithm, known as the Microwave Integrated Retrieval System (MiRS), is currently running operationally at the National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite, Data, and Information Service (NESDIS), and is applied to a variety of data from the AMSU-A/MHS sensors on board the NOAA-18, NOAA-19, and MetOp-A/B polar satellite platforms, as well as SSMI/S on board both DMSP F-16 and F-18, and the NPP ATMS sensor. MiRS inverts microwave brightness temperatures into atmospheric temperature and water vapor profiles, along with hydrometeors and surface parameters, simultaneously. This coupled atmosphere/surface inversion allows more accurate retrievals in the lower tropospheric layers by accounting for the impact of surface emissivity on the measurements. It also allows inversion of the soundings in all-weather conditions thanks to the incorporation of hydrometeor parameters in the inverted state vector, as well as the inclusion of the emissivity in the same state vector, which is accounted for dynamically for the highly variable surface conditions found under precipitating atmospheres. The inversion is constrained in precipitating conditions by the inclusion of covariances for hydrometeors, to take advantage of the natural correlations of temperature and water vapor with liquid cloud, ice cloud, and rain water. In this study, we present a full assessment of temperature and water vapor retrieval performance in all-weather conditions and over all surface types (ocean, sea-ice, land, and snow), using matchups with radiosondes as well as Numerical Weather Prediction and other satellite retrieval algorithms as references. An emphasis is placed on retrievals in cloudy and precipitating atmospheres, including extreme weather events
Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.
1991-01-01
The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
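A minimal single-FFT variant of trapezoidal Bromwich inversion conveys the basic idea; this Dubner-Abate-style sketch folds a refinement factor m into an extended time window rather than reproducing the paper's (m/2)+1-FFT or FHT formulation:

```python
import numpy as np

def nilt_fft(F, T, N=2048, sigma=2.0, m=4):
    """Numerical inverse Laplace transform: trapezoidal-rule approximation
    of the Bromwich integral along s = sigma + i*omega, evaluated with one
    complex FFT. The window m*T suppresses aliasing on the kept range [0, T)."""
    Tp = m * T
    dt = Tp / N
    dw = 2.0 * np.pi / Tp  # frequency step of the trapezoidal rule
    Fk = F(sigma + 1j * dw * np.arange(N)).astype(complex)
    Fk[0] *= 0.5           # trapezoidal end correction at k = 0
    t = dt * np.arange(N)
    # f(t_j) ≈ (2/Tp) e^{sigma t_j} Re[ sum_k F_k e^{2*pi*i jk/N} ]
    f = (2.0 / Tp) * np.exp(sigma * t) * np.real(N * np.fft.ifft(Fk))
    keep = t < T
    return t[keep], f[keep]
```

`sigma` must lie to the right of all singularities of F; accuracy degrades near t = 0 for transforms with a jump there (Gibbs effect).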
Use of inverse theory algorithms in the analysis of biomembrane NMR data.
Sternin, Edward
2007-01-01
Treating the analysis of experimental spectroscopic data as an inverse problem and using regularization techniques to obtain stable pseudoinverse solutions allows access to a previously unavailable level of spectroscopic detail. The data are mapped into an appropriate, physically relevant parameter space, leading to better qualitative and quantitative understanding of the underlying physics, and in turn, to better and more detailed models. A brief survey of relevant inverse methods is illustrated by several successful applications to the analysis of nuclear magnetic resonance data, yielding new insight into the structure and dynamics of biomembrane lipids.
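A regularized pseudoinverse of the kind surveyed can be written directly from the SVD; the ridge-style filter below is a generic example, with the spectroscopic kernel abstracted into the matrix K:

```python
import numpy as np

def tikhonov_pinv(K, data, lam):
    """Stable pseudoinverse solution of K m = data: SVD with Tikhonov-
    filtered inverse singular values s/(s^2 + lam), which damps the
    noise-amplifying small-singular-value directions."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    f = s / (s ** 2 + lam)  # -> 1/s for s >> sqrt(lam), -> 0 for s << sqrt(lam)
    return Vt.T @ (f * (U.T @ data))
```

As lam grows the solution trades fidelity for stability, which is the essential dial in mapping noisy spectra into a physical parameter space.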
Abel's Theorem Simplifies Reduction of Order
ERIC Educational Resources Information Center
Green, William R.
2011-01-01
We give an alternative to the standard method of reduction of order, in which one uses one solution of a homogeneous, linear, second-order differential equation to find a second, linearly independent solution. Our method, based on Abel's Theorem, is shorter, less complex, and extends to higher-order equations.
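In symbols, the method rests on Abel's formula for the Wronskian: for y'' + p(x)y' + q(x)y = 0 with one known solution y_1, a second solution follows from a single quadrature:

```latex
% Abel's theorem: the Wronskian of any two solutions satisfies
W(x) = y_1 y_2' - y_1' y_2 = C\,e^{-\int p(x)\,dx}.
% Dividing by y_1^2 gives (y_2/y_1)' = W/y_1^2, so a second,
% linearly independent solution follows from one quadrature:
y_2(x) = y_1(x)\int \frac{C\,e^{-\int p(x)\,dx}}{y_1(x)^2}\,dx.
```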
NASA Astrophysics Data System (ADS)
Beretta, Elena; Manzoni, Andrea; Ratti, Luca
2017-03-01
In this paper we develop a reconstruction algorithm for the solution of an inverse boundary value problem dealing with a semilinear elliptic partial differential equation of interest in cardiac electrophysiology. The goal is the detection of small inhomogeneities located inside a domain Ω, where the coefficients of the equation are altered, starting from observations of the solution of the equation on the boundary ∂Ω. Exploiting theoretical results recently achieved in [13], we implement a reconstruction procedure based on the computation of the topological gradient of a suitable cost functional. Numerical results obtained for several test cases finally assess the feasibility and the accuracy of the proposed technique.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.
2017-02-01
This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines a system with multiple inputs and a single output, together with a vector of initial-point coordinates. The described problem is complex and multimodal, and for this reason the proposed evolutionary optimization technique, oriented towards dynamical system identification, was applied. To improve its performance, an algorithm restart operator was implemented.
NASA Astrophysics Data System (ADS)
Dubovik, O.; Herman, M.; Holdak, A.; Lapyonok, T.; Tanré, D.; Deuzé, J. L.; Ducos, F.; Sinyuk, A.; Lopatin, A.
2010-11-01
The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in inversion of advanced satellite observations. This optimization concept improves retrieval accuracy relying on the knowledge of measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (excess of the measurements number over number of unknowns) that is not common in satellite observations. The POLDER imager on board the PARASOL micro-satellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as statistically optimized multi-variable fitting of all available angular observations of total and polarized radiances obtained by the POLDER sensor in the window spectral channels where absorption by gases is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting measured radiation. The algorithm is designed to retrieve complete aerosol properties globally. Over land, the algorithm retrieves the parameters of underlying surface simultaneously with aerosol. In all situations, the approach is anticipated to achieve a robust retrieval of complete aerosol properties including information about aerosol particle sizes, shape, absorption and composition (refractive index). In order to achieve reliable retrieval from PARASOL
NASA Astrophysics Data System (ADS)
Dubovik, O.; Herman, M.; Holdak, A.; Lapyonok, T.; Tanré, D.; Deuzé, J. L.; Ducos, F.; Sinyuk, A.; Lopatin, A.
2011-05-01
The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns) that is not common in satellite observations. The POLDER imager on board the PARASOL micro-satellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation. The algorithm is designed to retrieve complete aerosol properties globally. Over land, the algorithm retrieves the parameters of the underlying surface simultaneously with aerosol. In all situations, the approach is anticipated to achieve a robust retrieval of complete aerosol properties including information about aerosol particle size, shape, absorption and composition (refractive index). In order to achieve reliable retrieval from PARASOL observations even over very reflective
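In the linear limit, the statistically optimized fitting described above reduces to weighted least squares, with each observation weighted by the inverse of its error variance. A minimal numpy sketch on a synthetic linear problem (the matrix K, the noise model, and the dimensions are illustrative, not the POLDER forward model):

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.standard_normal((120, 8))        # >100 angular observations, 8 unknowns
x_true = rng.standard_normal(8)
sigma = rng.uniform(0.01, 0.1, 120)      # heteroscedastic per-observation errors
y = K @ x_true + sigma * rng.standard_normal(120)

# Statistically optimized (maximum-likelihood) fit: weight each observation
# by the inverse of its error variance
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(K.T @ W @ K, K.T @ W @ y)
```

Because the redundancy is large (120 observations for 8 unknowns) and the weights reflect the true error distribution, the estimate is close to the generating parameters.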
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
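As a minimal illustration of the stabilization theme (not a reimplementation of the mollification or Regularized-Adjoint-Conjugate Gradient methods compared in the paper), the sketch below discretizes the forward Abel transform for a piecewise-constant radial profile and inverts noisy projections with zeroth-order Tikhonov regularization:

```python
import numpy as np

def abel_matrix(r):
    """Forward Abel operator for a piecewise-constant profile: cell j spans
    [r[j], r[j+1]] and F(y_i), y_i = r[i], integrates 2*f*r/sqrt(r^2-y^2)."""
    n = len(r) - 1
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                             - np.sqrt(r[j]**2 - r[i]**2))
    return A

def tikhonov_inverse(A, F, lam):
    """Stabilized least-squares solution of A f = F (zeroth-order Tikhonov)."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ F)

r = np.linspace(0.0, 3.0, 201)             # cell boundaries
mid = 0.5 * (r[:-1] + r[1:])
f_true = np.exp(-mid**2)                   # synthetic radial profile
A = abel_matrix(r)
F = A @ f_true                             # noise-free projected data
rng = np.random.default_rng(0)
f_rec = tikhonov_inverse(A, F + 0.005 * rng.standard_normal(F.size), lam=1e-2)
```

With noise-free data the upper-triangular system can be solved exactly; the regularization term is what keeps the reconstruction stable once noise is added, at the cost of a small smoothing bias.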
Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.
2015-10-15
We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.
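The key property exploited by such shifted-system solvers is the shift invariance of Krylov subspaces, K_m(A, b) = K_m(A + sI, b): one Arnoldi factorization can serve every shifted system (A + sI)x = b. A minimal sketch with plain (not flexible) Arnoldi, using a 1-D Laplacian as an illustrative stand-in for the discretized parabolic operator:

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi: orthonormal basis V of K_m(A, b) and the (m+1) x m
    Hessenberg matrix H satisfying A V[:, :m] = V @ H."""
    n = len(b)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def shifted_solve(A, b, shifts, m=40):
    """GMRES-style solves of (A + s I) x = b for all shifts s, reusing
    a single Arnoldi factorization (shift invariance of Krylov spaces)."""
    V, H = arnoldi(A, b, m)
    rhs = np.zeros(m + 1); rhs[0] = np.linalg.norm(b)
    xs = []
    for s in shifts:
        Hs = H.copy(); Hs[:m, :] += s * np.eye(m)   # shifted Hessenberg
        y, *_ = np.linalg.lstsq(Hs, rhs, rcond=None)
        xs.append(V[:, :m] @ y)
    return xs

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
b = np.ones(n)
shifts = [0.5, 1.0, 2.0]
xs = shifted_solve(A, b, shifts, m=40)
```

Each additional shift costs only a small (m+1) x m least-squares solve, which is the source of the speedup for exponential integrators that need many shifted solves per time step.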
NASA Astrophysics Data System (ADS)
Jesús Moral García, Francisco; Rebollo Castillo, Francisco Javier; Monteiro Santos, Fernando
2016-04-01
Maps of apparent electrical conductivity of the soil are commonly used in precision agriculture to indirectly characterize some important properties like salinity, water content, and clay content. Traditionally, these studies are made through an empirical relationship between apparent electrical conductivity and properties measured in soil samples collected at a few locations in the experimental area and at a few selected depths. Recently, some authors have used not the apparent conductivity values but the soil bulk conductivity (in 2D or 3D) calculated from measured apparent electrical conductivity through the application of an inversion method. All the published works used data collected with electromagnetic (EM) instruments. We present new software to invert the apparent electrical conductivity data collected with VERIS 3100 and 3150 (or the more recent version with three pairs of electrodes) using the 1D spatially constrained inversion method (1D SCI). The software allows the calculation of the distribution of the bulk electrical conductivity in the survey area to a depth of 1 m. The algorithm is applied to experimental data, and correlations with clay and water content have been established using soil samples collected at some boreholes. Keywords: Digital soil mapping; inversion modelling; VERIS; soil apparent electrical conductivity.
NASA Technical Reports Server (NTRS)
Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri
1992-01-01
The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results is estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan; Ekinci, Yunus Levent; Göktürkler, Gökhan; Turan, Seçil
2017-01-01
3D non-linear inversion of total field magnetic anomalies caused by vertical-sided prismatic bodies has been achieved by differential evolution (DE), which is one of the population-based evolutionary algorithms. We have demonstrated the efficiency of the algorithm on both synthetic and field magnetic anomalies by estimating horizontal distances from the origin in both north and east directions, depths to the top and bottom of the bodies, inclination and declination angles of the magnetization, and intensity of magnetization of the causative bodies. In the synthetic anomaly case, we have considered both noise-free and noisy data sets due to two vertical-sided prismatic bodies in a non-magnetic medium. For the field case, airborne magnetic anomalies originating from intrusive granitoids at the eastern part of the Biga Peninsula (NW Turkey), which is composed of various kinds of sedimentary, metamorphic and igneous rocks, have been inverted and interpreted. Since the granitoids are the outcropped rocks in the field, the estimations for the top depths of two prisms representing the magnetic bodies were excluded during inversion studies. Estimated bottom depths are in good agreement with the ones obtained by a different approach based on 3D modelling of pseudogravity anomalies. Accuracy of the estimated parameters from both cases has also been investigated via probability density functions. Based on the tests in the present study, it can be concluded that DE is a useful tool for the parameter estimation of source bodies using magnetic anomalies.
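A hedged sketch of the population-based inversion idea, using SciPy's differential evolution on a toy buried-source profile rather than the vertical-sided prism forward model of the paper (the anomaly function and parameter bounds here are purely illustrative):

```python
import numpy as np
from scipy.optimize import differential_evolution

def anomaly(x, amp, x0, depth):
    """Toy profile of a buried compact source (a stand-in for the
    prism forward model used in the paper)."""
    return amp * depth / ((x - x0)**2 + depth**2)**1.5

x = np.linspace(-10.0, 10.0, 101)
data = anomaly(x, 50.0, 1.5, 3.0)            # synthetic "observed" profile

def misfit(p):
    return np.sum((anomaly(x, *p) - data)**2)

# DE searches globally within bounds: (amplitude, horizontal position, depth)
result = differential_evolution(misfit,
                                bounds=[(1.0, 100.0), (-5.0, 5.0), (0.5, 10.0)],
                                seed=1, tol=1e-10)
```

Because DE only needs misfit evaluations within parameter bounds, it avoids the local minima that gradient-based inversion of magnetic data is prone to.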
NASA Technical Reports Server (NTRS)
Dubovik, O.; Herman, M.; Holdak, A.; Lapyonok, T.; Tanré, D.; Deuzé, J. L.; Ducos, F.; Sinyuk, A.
2011-01-01
The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns) that is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation.
Qiu, Xiao-han; Zhang, Yu-jun; Yin, Gao-fang; Shi, Chao-yi; Yu, Xiao-ya; Zhao, Nan-jing; Liu, Wen-qing
2015-08-01
The fast chlorophyll fluorescence induction curve contains rich information about photosynthesis. It can reflect various properties of vegetation, such as survival status, pathological condition and physiological trends under stress. Through the acquisition of algal fluorescence and the induced optical signal, the fast phase of the chlorophyll fluorescence kinetics curve was fitted. Based on the least squares fitting method, we introduced an adaptive minimum error approaching method for fast multivariate nonlinear regression fitting of the chlorophyll fluorescence kinetics curve. We achieved inversion of the detailed parameters Fo (fixed fluorescence), Fm (maximum fluorescence yield) and σPSII (PSII functional absorption cross section), as well as inversion of the photosynthetic parameters of Chlorella pyrenoidosa, and we also studied the physiological variation of Chlorella pyrenoidosa under Cu(2+) stress.
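A minimal sketch of the curve-fitting step, using a simplified single-exponential induction model (the paper fits a multivariate nonlinear model with an adaptive minimum-error method; this stand-in only illustrates recovering Fo, Fm and σ by nonlinear least squares on synthetic data):

```python
import numpy as np
from scipy.optimize import curve_fit

def induction(t, Fo, Fm, sigma):
    """Simplified induction model: fluorescence rises from Fo toward Fm
    at a rate set by an effective absorption cross section sigma.
    (An illustrative stand-in for the full kinetics model.)"""
    return Fo + (Fm - Fo) * (1.0 - np.exp(-sigma * t))

t = np.linspace(0.0, 500.0, 200)                 # time, arbitrary units
rng = np.random.default_rng(3)
y = induction(t, 0.4, 2.0, 0.01) + 0.02 * rng.standard_normal(t.size)

popt, pcov = curve_fit(induction, t, y, p0=[0.3, 1.5, 0.005])
Fo_fit, Fm_fit, sigma_fit = popt
```

The diagonal of `pcov` gives the parameter variances, which is useful when judging whether a stress-induced change in σPSII is significant relative to the fit uncertainty.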
NASA Astrophysics Data System (ADS)
Eladj, Said; Bansir, Fateh; Ouadfeul, Sid Ali
2016-04-01
The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness relative to the total population, so a good chromosome has a greater opportunity to produce offspring than other chromosomes in the population. The advantage of the HGA/SAA combination is the use of a global search approach over a large population of local maxima, which significantly improves the performance of the method. To define the parameters of the Hybrid Genetic Algorithm / Steepest Ascent Auto Statics (HGA/SAA) job, we first evaluated, by testing the "Steepest Ascent" stage, the optimal parameters related to the data used: (1) the number of hill-climbing iterations, equal to 40, which defines the participation of the "SA" algorithm in this hybrid approach; and (2) the minimum eigenvalue for SA, equal to 0.8, which is linked to the quality of the data and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations of HGA/SAA. Using the values of residual static corrections already calculated by the "SAA" and "CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimum number of generations of the hybrid genetic algorithm "HGA" required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons. This will allow
NASA Astrophysics Data System (ADS)
Hunziker, J.; Thorbecke, J.; Slob, E. C.
2014-12-01
Commonly, electromagnetic measurements for exploring and monitoring hydrocarbon reservoirs are inverted for the subsurface conductivity distribution by minimizing the difference between the actual data and a forward modeled dataset. The convergence of the inversion process to the correct solution strongly depends on the shape of the solution space. Since this is a non-linear problem, there exist a multitude of minima of which only the global one provides the correct conductivity values. To easily find the global minimum we desire it to have a broad cone of attraction, while it should also feature a very narrow bottom in order to obtain the subsurface conductivity with high resolution. In this study, we aim to determine which combination of input data corresponds to a favorable shape of the solution space. Since the solution space is N-dimensional, with N being the number of unknown subsurface parameters, plotting it is out of the question. In our approach, we use a genetic algorithm (Goldberg, 1989) to probe the solution space. Such algorithms have the advantage that every run of the same problem will end up at a different solution. Most of these solutions are expected to lie close to the global minimum. A situation where only few runs end up in the global minimum indicates that the solution space consists of a lot of local minima or that the cone of attraction of the global minimum is small. If a lot of runs end up with a similar data-misfit but with a large spread of the subsurface medium parameters in one or more direction, it can be concluded that the chosen data-input is not sensitive with respect to that direction. Compared to the study of Hunziker et al. 2014, we also allow inversion for subsurface boundaries and include more combinations of input datasets. The results so far suggest that it is essential to include the magnetic field in the inversion process in order to find the anisotropic conductivity values. References: Goldberg, D. E., 1989. Genetic
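The probing idea can be sketched with a minimal real-coded genetic algorithm run several times on a multimodal toy misfit (not the CSEM forward problem): the scatter of the per-run best models maps how attractive the global minimum is relative to the local ones.

```python
import numpy as np

def ga_minimize(f, bounds, pop=80, gens=200, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (a toy stand-in for Goldberg-style GAs)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, len(lo)))
    best_x, best_f = X[0].copy(), np.inf
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), X[fit.argmin()].copy()
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((fit[i] < fit[j])[:, None], X[i], X[j])   # tournaments
        a = rng.random(X.shape)
        X = a * parents + (1.0 - a) * parents[rng.permutation(pop)]  # blend crossover
        mutate = rng.random((pop, 1)) < 0.2
        X = np.clip(X + mutate * 0.1 * (hi - lo) * rng.standard_normal(X.shape), lo, hi)
        X[0] = best_x                                                # elitism
    return best_x, best_f

def misfit(x):  # multimodal toy misfit: global minimum 0 at the origin
    return (x[0]**2 + x[1]**2) / 10.0 + np.sin(3.0 * x[0])**2 + np.sin(3.0 * x[1])**2

runs = [ga_minimize(misfit, [(-3.0, 3.0), (-3.0, 3.0)], seed=s) for s in range(5)]
best_vals = [bf for _, bf in runs]
```

Runs that stall at clearly higher misfit values have been trapped by local minima; a wide spread of final models at similar misfit would indicate an insensitive parameter direction, exactly the diagnostic used in the abstract.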
The clusters Abell 222 and Abell 223: a multi-wavelength view
NASA Astrophysics Data System (ADS)
Durret, F.; Laganá, T. F.; Adami, C.; Bertin, E.
2010-07-01
Context. The Abell 222 and 223 clusters are located at an average redshift z ~ 0.21 and are separated by 0.26 deg. Signatures of mergers have been previously found in these clusters, both in X-rays and at optical wavelengths, thus motivating our study. In X-rays, they are relatively bright, and Abell 223 shows a double structure. A filament has also been detected between the clusters both at optical and X-ray wavelengths. Aims: We analyse the optical properties of these two clusters based on deep imaging in two bands, derive their galaxy luminosity functions (GLFs) and correlate these properties with X-ray characteristics derived from XMM-Newton data. Methods: The optical part of our study is based on archive images obtained with the CFHT Megaprime/Megacam camera, covering a total region of about 1 deg2, or 12.3 × 12.3 Mpc2 at a redshift of 0.21. The X-ray analysis is based on archive XMM-Newton images. Results: The GLFs of Abell 222 in the g' and r' bands are well fit by a Schechter function; the GLF is steeper in r' than in g'. For Abell 223, the GLFs in both bands require a second component at bright magnitudes, added to a Schechter function; they are similar in both bands. The Serna & Gerbal method separates the two clusters well. No obvious filamentary structures are detected at very large scales around the clusters, but a third cluster at the same redshift, Abell 209, is located at a projected distance of 19.2 Mpc. X-ray temperature and metallicity maps reveal that the temperature and metallicity of the X-ray gas are quite homogeneous in Abell 222, while they are very perturbed in Abell 223. Conclusions: The Abell 222/Abell 223 system is complex. The two clusters that form this structure present very different dynamical states. Abell 222 is a smaller, less massive and almost isothermal cluster. On the other hand, Abell 223 is more massive and has most probably been crossed by a subcluster on its way to the northeast. As a consequence, the
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
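A compact sketch of the sliced-inverse-regression step that IRUQ builds on: slice the response, average the whitened predictors per slice, and take the leading eigenvector of the slice-mean covariance (a textbook SIR, not the authors' full IRUQ pipeline; the test problem is a synthetic one-dimensional index model):

```python
import numpy as np

def sir_direction(X, y, n_slices=10):
    """Sliced inverse regression: the leading principal direction of the
    sliced means of the standardized predictors estimates the SDR subspace."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    w, V = np.linalg.eigh(Xc.T @ Xc / n)
    inv_sqrt = V @ np.diag(w**-0.5) @ V.T          # whitening transform
    Z = Xc @ inv_sqrt
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):
        m = Z[chunk].mean(axis=0)                  # slice mean of whitened X
        M += (len(chunk) / n) * np.outer(m, m)
    d = inv_sqrt @ np.linalg.eigh(M)[1][:, -1]     # leading direction, original coords
    return d / np.linalg.norm(d)

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5))
beta = np.array([1.0, 2.0, 0.0, 0.0, -1.0]); beta /= np.linalg.norm(beta)
u = X @ beta
y = u + 0.25 * u**3 + 0.1 * rng.standard_normal(2000)   # depends on X only via u
d = sir_direction(X, y)
```

Once the direction is found, the five-dimensional input collapses to the scalar `X @ d`, on which a cheap response surface can be built, which is the dimension-reduction step the abstract describes.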
NASA Astrophysics Data System (ADS)
Li, Dongxing; Zhao, Yan; Dong, Xu
2008-03-01
In general image restoration, the point spread function (PSF) of the imaging system and the observation noise are known a priori. The aero-optic effect arises when objects (e.g., missiles or aircraft) fly at high or supersonic speed. In this situation, the PSF and the observation noise are not known a priori, and the identification and restoration of turbulence-degraded images is a challenging problem. An algorithm based on nonnegativity and support constraints recursive inverse filtering (NAS-RIF) is proposed to identify and restore turbulence-degraded images. The NAS-RIF technique applies to situations in which the scene consists of a finite-support object against a uniformly black, grey, or white background. The restoration procedure of NAS-RIF involves recursive filtering of the blurred image to minimize a convex cost function. In the algorithm proposed in this paper, the turbulence-degraded image is filtered before it passes through the recursive filter. A conjugate gradient minimization routine was used to minimize the NAS-RIF cost function. The algorithm based on NAS-RIF is used to identify and restore wind-tunnel test images. The experimental results show that the restoration effect is clearly improved.
NASA Technical Reports Server (NTRS)
Kurtz, M. J.; Huchra, J. P.; Beers, T. C.; Geller, M. J.; Gioia, I. M.
1985-01-01
X-ray and optical observations of the cluster of galaxies Abell 744 are presented. The X-ray flux (assuming H0 = 100 km/s per Mpc) is about 9 × 10^42 erg/s. The X-ray source is extended, but shows no other structure. Photographic photometry (in Kron-Cousins R), calibrated by deep CCD frames, is presented for all galaxies brighter than 19th magnitude within 0.75 Mpc of the cluster center. The luminosity function is normal, and the isopleths show little evidence of substructure near the cluster center. The cluster has a dominant central galaxy, which is classified as a normal brightest-cluster elliptical on the basis of its luminosity profile. New redshifts were obtained for 26 galaxies in the vicinity of the cluster center; 20 appear to be cluster members. The spatial distribution of redshifts is peculiar; the dispersion within the 150 kpc core radius is much greater than outside. Abell 744 is similar to the nearby cluster Abell 1060.
A Strong Merger Shock in Abell 665
NASA Technical Reports Server (NTRS)
Dasadia, S.; Sun, M.; Sarazin, C.; Morandi, A.; Markevitch, M.; Wik, D.; Feretti, L.; Giovannini, G.; Govoni, F.
2016-01-01
Deep (103 ks) Chandra observations of Abell 665 have revealed rich structures in this merging galaxy cluster, including a strong shock and two cold fronts. The newly discovered shock has a Mach number of M = 3.0 ± 0.6, propagating in front of a cold disrupted cloud. This makes Abell 665 the second cluster, after the Bullet cluster, where a strong merger shock of M ≈ 3 has been detected. The shock velocity from jump conditions is consistent with (2.7 ± 0.7) × 10^3 km/s. The new data also reveal a prominent southern cold front with potentially heated gas ahead of it. Abell 665 also hosts a giant radio halo. There is a hint of diffuse radio emission extending to the shock at the north, which needs to be examined with better radio data. This new strong shock provides a great opportunity to study the reacceleration model with the X-ray and radio data combined.
NASA Astrophysics Data System (ADS)
Zhang, Zhenfei; Hagfors, Tor; Nielsen, Erling; Picardi, Giovanni; Mesdea, Arturo; Plaut, Jeffrey J.
2008-05-01
Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) has offered abundant data, which have been used to estimate dielectric properties of the south polar layered deposits (SPLD) of Mars. This paper presents a new way to invert the data to estimate the dielectric properties of the SPLD. A total of 4364 measurements were analyzed. The received radar signals are controlled by the physical properties of the SPLD and its basal layer and in addition by a number of factors including the transmitted wave properties, the satellite height, and the atmosphere/ionosphere environments. The received signals may also be influenced by surface clutter. Most of these factors are variable. This complexity causes the inversion to be difficult. To carry out the inversion, it is therefore essential to define a reasonably simple model for the physics of the surface/subsurface layers where the radar signal is reflected. The top and bottom interfaces of the SPLD are observed by MARSIS as two reflection peaks of the radar signals. The intensity ratio between the two reflection peaks is observed to be a function of the time difference separating the two peaks. By modeling this dependency, the influences of the satellite position and the atmosphere/ionosphere environments are canceled. This is a major step toward carrying out the inversion. Nevertheless, the inverse problem remains ill-posed and highly nonlinear. Bayesian inference is employed to deal with the ill-posed aspect of the inversion, and a genetic algorithm is introduced to deal with the nonlinearity. It is concluded that the most probable value of the relative dielectric constant of the SPLD lies in 3.0-5.0, conductivity 1.0-2.0 × 10^-6 S/m, and the relative dielectric constant of the basal layer is 7.5-8.5 (the basal layer conductivity is assumed to be 1.0 × 10^-7 S/m). These results support a suggestion that the SPLD are water ice/dust mixtures with dust content varying from 0 to more than 75%.
NASA Astrophysics Data System (ADS)
Yoon, Kyung-Beom; Park, Won-Hee
2015-04-01
The convective heat transfer coefficient and surface emissivity before and after flame occurrence on a wood specimen surface and the flame heat flux were estimated using the repulsive particle swarm optimization algorithm and cone heater test results. The cone heater specified in the ISO 5660 standards was used, and six cone heater heat fluxes were tested. Preservative-treated Douglas fir 21 mm in thickness was used as the wood specimen in the tests. This study confirmed that the surface temperature of the specimen, which was calculated using the convective heat transfer coefficient, surface emissivity and flame heat flux on the wood specimen by a repulsive particle swarm optimization algorithm, was consistent with the measured temperature. Considering the measurement errors in the surface temperature of the specimen, the applicability of the optimization method considered in this study was evaluated.
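A minimal sketch of the estimation idea: a plain global-best particle swarm (the paper uses a *repulsive* PSO variant; the repulsion term is omitted here) recovers a convective coefficient and a surface emissivity from a hypothetical surface energy balance. All model details below, including the heat-balance form and parameter values, are illustrative assumptions, not the paper's cone-heater model:

```python
import numpy as np

SIGMA_SB = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def pso(loss, bounds, n_particles=40, iters=300, seed=0):
    """Plain global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([loss(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([loss(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g

# Hypothetical surface energy balance: convective plus radiative losses
T = np.linspace(300.0, 800.0, 20)                         # surface temps, K
q_obs = 15.0 * (T - 293.0) + 0.85 * SIGMA_SB * T**4       # h = 15, eps = 0.85

def loss(p):
    h, eps = p
    return np.sum((h * (T - 293.0) + eps * SIGMA_SB * T**4 - q_obs)**2)

h_est, eps_est = pso(loss, bounds=[(1.0, 50.0), (0.1, 1.0)])
```

The swarm needs only forward evaluations of the heat-flux model, which is why this family of optimizers suits inverse heat-transfer problems where gradients are awkward to compute.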
ROSAT HRI images of Abell 85 and Abell 496: Evidence for inhomogeneities in cooling flows
NASA Technical Reports Server (NTRS)
Prestwich, Andrea H.; Guimond, Stephen J.; Luginbuhl, Christian; Joy, Marshall
1994-01-01
We present ROSAT HRI images of two clusters of galaxies with cooling flows, Abell 496 and Abell 85. In these clusters, X-ray emission on small scales above the general cluster emission is significant at the 3 sigma level. There is no evidence for optical counterparts. The enhancements may be associated with lumps of gas at a lower temperature and higher density than the ambient medium, or hotter, denser gas perhaps compressed by magnetic fields. These observations can be used to test models of how thermal instabilities form and evolve in cooling flows.
An improved inversion for FORMOSAT-3/COSMIC ionosphere electron density profiles
NASA Astrophysics Data System (ADS)
Pedatella, N. M.; Yue, X.; Schreiner, W. S.
2015-10-01
An improved method to retrieve electron density profiles from Global Positioning System (GPS) radio occultation (RO) data is presented and applied to Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) observations. The improved inversion uses a monthly grid of COSMIC F region peak densities (NmF2), which are obtained via the standard Abel inversion, to aid the Abel inversion by providing information on the horizontal gradients in the ionosphere. This lessens the impact of ionospheric gradients on the retrieval of GPS RO electron density profiles, reducing the dominant error source in the standard Abel inversion. Results are presented that demonstrate the NmF2 aided retrieval significantly improves the quality of the COSMIC electron density profiles. Improvements are most notable at E region altitudes, where the improved inversion reduces the artificial plasma cave that is generated by the Abel inversion spherical symmetry assumption at low latitudes during the daytime. Occurrence of unphysical negative electron densities at E region altitudes is also reduced. Furthermore, the NmF2 aided inversion has a positive impact at F region altitudes, where it results in a more distinct equatorial ionization anomaly. COSMIC electron density profiles inverted using our new approach are currently available through the University Corporation for Atmospheric Research COSMIC Data Analysis and Archive Center. Owing to the significant improvement in the results, COSMIC data users are encouraged to use electron density profiles based on the improved inversion rather than those inverted by the standard Abel inversion.
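The standard Abel inversion that the improved retrieval builds on can be sketched as onion peeling under spherical symmetry: shells are assumed uniform and densities are recovered top-down from slant TEC at successive tangent radii. The shell geometry and the Chapman-like profile below are illustrative, not the COSMIC processing chain:

```python
import numpy as np

def chord(r_out, r_in, p):
    """Length of a straight ray with tangent radius p inside the spherical
    shell bounded by r_in and r_out (zero where the ray misses the shell)."""
    return 2.0 * (np.sqrt(max(r_out**2 - p**2, 0.0))
                  - np.sqrt(max(r_in**2 - p**2, 0.0)))

def onion_peel(tec, r):
    """Abel inversion by onion peeling: shell j is bounded by r[j] (outer)
    and r[j+1]; ray i is tangent at r[i+1]. Solves top-down for each shell."""
    ne = np.zeros(len(tec))
    for i in range(len(tec)):
        p = r[i + 1]
        upper = sum(ne[j] * chord(r[j], r[j + 1], p) for j in range(i))
        ne[i] = (tec[i] - upper) / chord(r[i], r[i + 1], p)
    return ne

# Synthetic Chapman-like electron density and forward-modelled slant TEC
r = np.linspace(6371.0 + 800.0, 6371.0 + 100.0, 71)      # shell boundaries, km
h = 0.5 * (r[:-1] + r[1:]) - 6371.0                      # shell mid-altitudes
z = (h - 300.0) / 50.0
ne_true = 1e6 * np.exp(0.5 * (1.0 - z - np.exp(-z)))     # el / cm^3
tec = np.array([sum(ne_true[j] * chord(r[j], r[j + 1], r[i + 1])
                    for j in range(i + 1)) for i in range(len(ne_true))])
ne_rec = onion_peel(tec, r)
```

The spherical-symmetry assumption is exactly where horizontal gradients bite: the NmF2-aided inversion of the abstract supplies climatological gradient information so that errors like the daytime E-region plasma cave are reduced.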
NASA Astrophysics Data System (ADS)
Fulin, Su; Hongxin, Yang
2016-07-01
To make better use of inverse synthetic aperture radar (ISAR) images of ship targets, it is desirable to select a proper imaging time to obtain high-quality top-view or side-view images. However, optimum imaging time selection is not robust under the limitations of traditional geometric feature extraction methods. In our study, we propose a method based on geometric features and gradient maximization. First, we select the imaging instant from radar echoes by the centerline and mainmast of the ship. In this part, we propose a geometric feature extraction method to improve the robustness of instant selection in different scenarios. Then, image gradient maximization is employed to estimate the period for ISAR imaging. Finally, experimental results of both simulated and real signals are provided to demonstrate the effectiveness and practicability of the algorithm.
Fussen, D; Arijs, E; Nevejans, D; Van Hellemont, F; Brogniez, C; Lenoble, J
1998-05-20
We present the results of a comparison of the total extinction altitude profiles measured at the same time and at the same location by the ORA (Occultation Radiometer) and Stratospheric Aerosol and Gas Experiment II solar occultation experiments at three different wavelengths. A series of 25 events for which the grazing points of both experiments lie within a 2° window has been analyzed. The mean relative differences observed over the altitude range 15-45 km are -8.4%, 1.6%, and 3% for the three channels (0.385, 0.6, and 1.02 μm). Some systematic degradation occurs below 20 km (as the result of signal saturation and possible cloud interference) and above 40 km (low absorption). The fair general agreement between the extinction profiles obtained by the two different instruments enhances our confidence in the results of the ORA experiment and of the recently developed vertical inversion algorithm applied to real data.
Molokanov, A; Chojnacki, E; Blanchardon, E
2010-01-01
The individual monitoring of internal exposure of workers comprises two steps: measurement and measurement interpretation. The latter consists in reconstructing the intake of a radionuclide from the activity measurement and calculating the dose using a biokinetic model of the radionuclide behavior in the human body. Mathematically, reconstructing the intake is solving an inverse problem described by a measurement-model equation. The aim of this paper is to propose a solution to this inverse problem when the measurement-model parameters are considered as uncertain. For that, an analysis of the uncertainty on the intake calculation is performed taking into account the dispersion of the measured quantity and the uncertainties of the measurement-model parameters. It is shown that both frequentist and Bayesian approaches can be used to solve the problem according to the measurement-model formulation. A common calculation algorithm is proposed to support both approaches and applied to the examples of tritiated water intake and plutonium inhalation by a worker.
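A minimal sketch of the Bayesian branch of the intake calculation: a single bioassay measurement with lognormal error, a flat prior on the intake, and a toy two-compartment retention function standing in for the real biokinetic model (all numbers here are illustrative):

```python
import numpy as np

def retention(t_days, frac_fast=0.8, lam_fast=0.1, lam_slow=0.01):
    """Hypothetical two-compartment retention fraction (a stand-in for the
    ICRP biokinetic models used for real radionuclides)."""
    return (frac_fast * np.exp(-lam_fast * t_days)
            + (1.0 - frac_fast) * np.exp(-lam_slow * t_days))

# One measurement: activity a (Bq) at t days after the suspected intake,
# with lognormal measurement error of geometric standard deviation gsd
t, a, gsd = 30.0, 120.0, 1.3
intakes = np.linspace(1.0, 5000.0, 5000)          # candidate intakes, Bq
predicted = intakes * retention(t)

# Posterior on a grid (flat prior): lognormal likelihood around the prediction
log_post = -0.5 * ((np.log(a) - np.log(predicted)) / np.log(gsd))**2
post = np.exp(log_post - log_post.max())
post /= post.sum()
intake_map = intakes[post.argmax()]               # posterior mode
intake_mean = (intakes * post).sum()              # posterior mean
```

The posterior mode reproduces the classical point estimate a / R(t), while the full distribution carries the measurement uncertainty through to the dose calculation, which is the point of the Bayesian formulation in the abstract.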
NASA Technical Reports Server (NTRS)
Palacios, Sherry L.; Schafer, Chris; Broughton, Jennifer; Guild, Liane S.; Kudela, Raphael M.
2013-01-01
There is a need in the Biological Oceanography community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand energy flow through ecosystems, to track the fate of carbon in the ocean, and to detect and monitor for harmful algal blooms (HABs). The ocean color community has responded to this demand with the development of phytoplankton functional type (PFT) discrimination algorithms. These PFT algorithms fall into one of three categories depending on the science application: size-based, biogeochemical function, and taxonomy. The new PFT algorithm Phytoplankton Detection with Optics (PHYDOTax) is an inversion algorithm that discriminates taxon-specific biomass to differentiate among six taxa found in the California Current System: diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes, and cyanophytes. PHYDOTax was developed and validated in Monterey Bay, CA for the high resolution imaging spectrometer, Spectroscopic Aerial Mapping System with On-board Navigation (SAMSON - 3.5 nm resolution). PHYDOTax exploits the high spectral resolution of an imaging spectrometer and the improved spatial resolution that airborne data provides for coastal areas. The objective of this study was to apply PHYDOTax to a relatively lower resolution imaging spectrometer to test the algorithm's sensitivity to atmospheric correction, to evaluate capability with other sensors, and to determine if down-sampling spectral resolution would degrade its ability to discriminate among phytoplankton taxa. This study is a part of the larger Hyperspectral Infrared Imager (HyspIRI) airborne simulation campaign which is collecting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery aboard NASA's ER-2 aircraft during three seasons in each of two years over terrestrial and marine targets in California. Our aquatic component seeks to develop and test algorithms to retrieve water quality properties (e.g. HABs and river plumes) in both marine and in
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-12-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ≫1 and |m-1|≪1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as the general distribution function to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as a versatile distribution function to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available.
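The forward model (ADA extinction plus the Beer-Lambert law) can be sketched with van de Hulst's classical, unmodified ADA formula for a non-absorbing soft sphere; the authors' modification and the ACO inversion itself are not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

def q_ext_ada(chi, m_rel):
    # van de Hulst's anomalous diffraction approximation for the extinction
    # efficiency of an optically soft, non-absorbing sphere:
    #   rho = 2 * chi * (m - 1), with size parameter chi = pi * D / lambda.
    rho = 2.0 * chi * (m_rel - 1.0)
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho ** 2) * (1.0 - np.cos(rho))

def extinction(diameters, weights, wavelength, m_rel=1.33, number_density=1.0):
    # Beer-Lambert extinction for a discretized size distribution:
    #   tau = N * sum_i f_i * (pi / 4) * D_i^2 * Q_ext(chi_i).
    chi = np.pi * diameters / wavelength
    return number_density * np.sum(
        weights * 0.25 * np.pi * diameters ** 2 * q_ext_ada(chi, m_rel))
```

Evaluating `extinction` at several wavelengths gives the multi-wavelength spectrum that the inversion (here, PDF-ACO) would fit by adjusting the distribution weights.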
A multifrequency MUSIC algorithm for locating small inhomogeneities in inverse scattering
NASA Astrophysics Data System (ADS)
Griesmaier, Roland; Schmiedecke, Christian
2017-03-01
We consider an inverse scattering problem for time-harmonic acoustic or electromagnetic waves with sparse multifrequency far field data sets. The goal is to localize several small penetrable objects embedded inside an otherwise homogeneous background medium from observations of far fields of scattered waves corresponding to incident plane waves with one fixed incident direction but several different frequencies. We assume that the far field is measured at a few observation directions only. Taking advantage of the smallness of the scatterers with respect to wavelength, we utilize an asymptotic representation formula for the far field to design and analyze a MUSIC-type reconstruction method for this setup. We establish lower bounds on the number of frequencies and receiver directions that are required to recover the number and the positions of an ensemble of scatterers from the given measurements. Furthermore, we briefly sketch a possible application of the reconstruction method to the practically relevant case of multifrequency backscattering data. Numerical examples are presented to document the potential and limitations of this approach.
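The core MUSIC idea (projecting far-field test vectors onto a noise space) can be conveyed with a minimal single-frequency, multistatic analogue; the paper's multifrequency, sparse-aperture variant rests on an asymptotic representation formula and is not reproduced here. The wavenumber, positions, and strengths below are made up for illustration:

```python
import numpy as np

k = 10.0                                     # wavenumber (arbitrary units)
angles = 2.0 * np.pi * np.arange(32) / 32.0  # observation directions on the circle
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
scatterers = np.array([[0.5, 0.0], [-0.3, 0.4]])

def steering(z):
    # Far-field pattern of a point source at z, sampled at the 32 directions.
    return np.exp(1j * k * dirs @ z)

# Multistatic response matrix of two point scatterers in the Born regime.
G = np.stack([steering(z) for z in scatterers], axis=1)
F = G @ np.diag([1.0, 0.7]) @ G.T

# MUSIC: the signal space is the range of F; test vectors at the true
# scatterer positions are (numerically) orthogonal to the noise space.
U, s, _ = np.linalg.svd(F)
noise = U[:, 2:]                             # rank 2: two scatterers

def indicator(z):
    g = steering(z)
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(noise.conj().T @ g)

print(indicator(np.array([0.5, 0.0])), indicator(np.array([0.1, -0.6])))
```

Plotting `indicator` over a grid produces sharp peaks at the scatterer locations; the multifrequency method replaces the multistatic rows with measurements at different wavenumbers.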
SelInv - An Algorithm for Selected Inversion of a Sparse Symmetric Matrix
Lin, Lin; Yang, Chao; Meza, Juan C.; Lu, Jianfeng; Ying, Lexing; E, Weinan
2009-10-16
We describe an efficient implementation of an algorithm for computing selected elements of a general sparse symmetric matrix A that can be decomposed as A = LDL^T, where L is lower triangular and D is diagonal. Our implementation, which is called SelInv, is built on top of an efficient supernodal left-looking LDL^T factorization of A. We discuss how computational efficiency can be gained by making use of a relative index array to handle indirect addressing. We report the performance of SelInv on a collection of sparse matrices of various sizes and nonzero structures. We also demonstrate how SelInv can be used in electronic structure calculations.
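The identity that selected inversion exploits can be illustrated with a dense toy version: after an LDL^T factorization, the Takahashi equations yield entries of A^{-1} from the bottom-right corner upward using only previously computed inverse entries. SelInv evaluates the same recurrence restricted to the sparsity pattern of L, with supernodal blocking and the relative-index trick mentioned above; this dense sketch omits all of that machinery:

```python
import numpy as np

def ldl_decompose(A):
    # Unpivoted LDL^T factorization (assumes it exists, e.g. A positive definite).
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

def takahashi_inverse(L, d):
    # Takahashi equations: for i >= j,
    #   Z[i, j] = delta_ij / d[j] - sum_{k > j} Z[i, k] * L[k, j],
    # evaluated from the bottom-right corner upward (Z = A^{-1}).
    n = len(d)
    Z = np.zeros((n, n))
    for j in range(n - 1, -1, -1):
        for i in range(n - 1, j - 1, -1):
            s = (1.0 / d[j] if i == j else 0.0) - Z[i, j + 1:] @ L[j + 1:, j]
            Z[i, j] = Z[j, i] = s
    return Z

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = B @ B.T + 6.0 * np.eye(6)   # symmetric positive definite test matrix
L, d = ldl_decompose(A)
Z = takahashi_inverse(L, d)
print(np.max(np.abs(Z - np.linalg.inv(A))))
```

For a sparse L, the sum over k > j only needs the entries of Z on the nonzero pattern of L, which is why selected elements of A^{-1} can be computed far more cheaply than the full inverse.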
The cluster of galaxies Abell 376
NASA Astrophysics Data System (ADS)
Proust, D.; Capelato, H. V.; Hickel, G.; Sodré, L., Jr.; Lima Neto, G. B.; Cuevas, H.
2003-08-01
We present a dynamical analysis of the galaxy cluster Abell 376 based on a set of 73 velocities, most of them measured at Pic du Midi and Haute-Provence observatories and completed with data from the literature. Data on individual galaxies are presented and the accuracy of the determined velocities is discussed, as well as some properties of the cluster. We obtained an improved mean redshift value z = 0.0478 (+0.005/-0.006) and velocity dispersion sigma = 852 (+120/-76) km s^-1. Our analysis indicates that inside a radius of ~900 h70^-1 kpc (~15 arcmin) the cluster is well relaxed, without any remarkable features, and the X-ray emission traces the galaxy distribution fairly well. A possible substructure is seen at 20 arcmin from the centre towards the southwest, but is not confirmed by the velocity field. This SW clump is, however, kinematically bound to the main structure of Abell 376. A dense condensation of galaxies is detected at 46 arcmin (projected distance 2.6 h70^-1 Mpc) from the centre towards the northwest, and analysis of the apparent luminosity distribution of its galaxies suggests that this clump is part of the large-scale structure of Abell 376. X-ray spectroscopic analysis of ASCA data yielded a temperature kT = 4.3 +/- 0.4 keV and metal abundance Z = 0.32 +/- 0.08 Zsun. The velocity dispersion corresponding to this temperature through the T_X-sigma scaling relation is in agreement with the measured galaxy velocities. Based on observations made at Haute-Provence and Pic du Midi Observatories (France). Table 1 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/407/31
Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric. E-mail: Eric.Vigneault@chuq.qc.ca
2007-02-01
Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason score ≤6, initial prostate-specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up was 60 months (maximum, 136 months). All patients were planned with a fast-simulated-annealing inverse planning algorithm with high-activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low-risk group (initial PSA ≤10, Gleason score ≤6, and stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5%, with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs) and 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free and 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: Inverse planning with fast simulated annealing and high-activity seeds gives a 5-year bFFS that is comparable with the best published series, with a low toxicity profile.
NASA Astrophysics Data System (ADS)
Park, I.; Kim, K. Y.; Jolly, A. D.
2014-12-01
The Ngauruhoe Volcano is a 2,291-m-high andesitic volcano in the North Island, New Zealand. The one-dimensional shear-wave velocity (Vs) structure beneath the OTVZ seismic station near the volcano was inferred by genetic algorithm (GA) inversion of radial receiver functions (RFs). Radial RFs were derived from 337 teleseismic events (Mw ≥ 5.5 and epicentral distances between 30° and 90°) recorded by a broad-band seismometer at the station from November 11, 2011 to September 11, 2013. Among the derived RFs, only the 87 RFs with the highest signal-to-noise ratios were used in the GA inversion. Three hundred velocity models per generation, over 100 generations, were derived using velocity models comprising 32 layers with a maximum depth of 60 km. The inverted models were averaged to obtain the final Vs model, which indicates a clear discontinuity at a depth of 18±1 km where Vs abruptly increases from 3.1 to 4.0 km/s. Above this sharp Vs discontinuity, interpreted as the Moho, the average Vs is 2.8 km/s. Low-velocity layers (LVLs) are identified at depths of 10-16 km in the lower crust (Vs < 3.0 km/s) and 28-40 km in the upper mantle (Vs < 4.4 km/s), with corresponding average Vs of 2.8 and 4.2 km/s, respectively. The thin crust with relatively low velocities and the existence of LVLs in the lower crust and upper mantle allude to the presence of magma associated with the subducting Pacific Plate. The limited number of teleseismic events recorded at the OTVZ station prevents further investigation into the effects of dipping boundaries and anisotropy on the RFs.
Semenov, Alexander; Zaikin, Oleg
2016-01-01
In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used to solve the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of a partitioning is determined by the total time needed to solve all SAT instances in it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space; the estimated effectiveness of that partitioning is the value of a predictive function at the corresponding point of this space. The problem of searching for an effective partitioning can thus be formulated as optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving times were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving times agree well with the estimates obtained by the proposed method.
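The Monte Carlo estimate can be sketched in miniature: sample random assignments of the partitioning variables, time the resulting simplified instances, and scale the sample mean by the number of subproblems. The brute-force solver and the CNF formula below are toy stand-ins for the real SAT solvers and cryptographic encodings used in the paper:

```python
import itertools
import random
import time

def brute_force_sat(clauses, n_vars, fixed):
    # Toy complete solver: enumerate assignments of the free variables.
    # Clauses use DIMACS-style signed integers: 3 means x3, -3 means not-x3.
    free = [v for v in range(1, n_vars + 1) if v not in fixed]
    for bits in itertools.product([False, True], repeat=len(free)):
        assign = dict(fixed)
        assign.update(zip(free, bits))
        if all(any(assign[abs(lit)] == (lit > 0) for lit in cl) for cl in clauses):
            return True
    return False

def predict_partitioning_time(clauses, n_vars, part_vars, n_samples=16, seed=0):
    # Monte Carlo estimate of the total time to process a partitioning: each
    # subproblem fixes the partitioning variables to one of the
    # 2**len(part_vars) assignments; time a random sample and extrapolate.
    rng = random.Random(seed)
    times = []
    for _ in range(n_samples):
        fixed = {v: rng.random() < 0.5 for v in part_vars}
        t0 = time.perf_counter()
        brute_force_sat(clauses, n_vars, fixed)
        times.append(time.perf_counter() - t0)
    return 2 ** len(part_vars) * sum(times) / len(times)

# A small example formula over 12 variables; the choice of part_vars is the
# quantity a metaheuristic (simulated annealing, tabu search) would optimize.
clauses = [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3], [4, 5, 6], [-4, -5],
           [7, 8, 9], [10, 11, 12], [-10, -11, -12]]
estimate = predict_partitioning_time(clauses, 12, part_vars=[1, 2, 3, 4])
print(estimate)
```

The returned estimate plays the role of the predictive function value at the point of the search space corresponding to `part_vars`.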
NASA Astrophysics Data System (ADS)
Hebbar, Ullhas; Paul, Anup; Banerjee, Rupak
2016-11-01
Image-based modeling is finding increasing relevance in assisting diagnosis of Pulmonary Valve-Vasculature Dysfunction (PVD) in congenital heart disease patients. This research presents compliant artery-blood interaction in a patient-specific Pulmonary Artery (PA) model, an improvement over our previous numerical studies, which assumed rigid-walled arteries. The impedance of the arteries and the energy transfer from the Right Ventricle (RV) to the PA are governed by compliance, which in turn is influenced by the level of pre-stress in the arteries. To evaluate the pre-stress, an inverse algorithm was developed using an in-house script written in MATLAB and Python, and implemented using the Finite Element Method (FEM). This analysis used a patient-specific material model developed by our group, in conjunction with measured pressure (invasive) and velocity (non-invasive) values. The analysis was performed on an FEM solver, and preliminary results indicated that the Main PA (MPA) exhibited higher compliance as well as increased hysteresis over the cardiac cycle when compared with the Left PA (LPA). The computed compliance values for the MPA and LPA were 14% and 34% lower than the corresponding measured values. Further, the computed pressure drop and flow waveforms were in close agreement with the measured values. In conclusion, compliant artery-blood interaction models of patient-specific geometries can play an important role in hemodynamics-based diagnosis of PVD.
NASA Astrophysics Data System (ADS)
Gance, J.; Samyn, K.; Grandjean, G.; Malet, J.-P.
2012-04-01
This work presents a traveltime inversion method developed specifically for imaging detailed subsurface features in heterogeneous soils. The algorithm builds on the initial SIRT algorithm proposed by Grandjean and Sage (2004), based on the use of Fresnel wavepaths and a probabilistic reconstruction approach. The method is improved by using a Quasi-Newton scheme, more robust than SIRT. It is demonstrated that the Jacobian matrix can be approximated by the Fresnel weights without introducing excessive uncertainty. In addition to its robustness, this inversion algorithm offers a regularization strategy based on the physics of wave propagation in soil. This avoids numerical regularization operators, which are always difficult to parameterize, and removes the subjectivity of the user from the inversion result. Moreover, as the width of the Fresnel volume is related to frequency, an increase in frequency (and therefore a decrease in Fresnel wavepath width) is introduced at each step in order to cover the entire finite bandwidth of the source signal. The inversion is thus controlled by large velocity variations in the first steps and by increasingly detailed soil heterogeneities in the following steps. The technique is applied to a real dataset acquired at the Super-Sauze landslide (French Alps) and highlights the presence of a deep water supply interpreted as a preferential flow path within the landslide.
The cluster Abell 780: an optical view
NASA Astrophysics Data System (ADS)
Durret, F.; Slezak, E.; Adami, C.
2009-11-01
Context: The Abell 780 cluster, better known as the Hydra A cluster, has been thoroughly analyzed in X-rays. However, little is known about its optical properties. Aims: We propose to derive the galaxy luminosity function (GLF) in this apparently relaxed cluster and to search for possible environmental effects by comparing the GLFs in various regions and by looking at the galaxy distribution at large scale around Abell 780. Methods: Our study is based on optical images obtained with the ESO 2.2m telescope and WFI camera in the B and R bands, covering a total region of 67.22 × 32.94 arcmin^2, or 4.235 × 2.075 Mpc^2 at the cluster redshift of 0.0539. Results: In a region of 500 kpc radius around the cluster center, the GLF in the R band shows a double structure, with a broad and flat bright part and a flat faint end that can be fit by a power law with an index α ~ -0.85 ± 0.12 in the 20.25 ≤ R ≤ 21.75 interval. If we divide this 500 kpc radius region into north+south or east+west halves, we find no clear difference between the GLFs in these smaller regions. No obvious large-scale structure is apparent within 5 Mpc of the cluster, based on galaxy redshifts and magnitudes collected from the NED database in a much larger region than that covered by our data, suggesting that there is no major infall of material in any preferential direction. However, the Serna-Gerbal method reveals a gravitationally bound structure of 27 galaxies, which includes the cD, and a more strongly gravitationally bound structure of 14 galaxies. Conclusions: These optical results agree with the overall relaxed structure of Abell 780 previously derived from X-ray analyses. Based on observations obtained at the European Southern Observatory, program ESO 68.A-0084(A), P. I. E. Slezak. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics
The Abell 85 BCG: A Nucleated, Coreless Galaxy
NASA Astrophysics Data System (ADS)
Madrid, Juan P.; Donzelli, Carlos J.
2016-03-01
New high-resolution r-band imaging of the brightest cluster galaxy (BCG) in Abell 85 (Holm 15A) was obtained using the Gemini Multi Object Spectrograph. These data were taken with the aim of deriving an accurate surface brightness profile of the BCG of Abell 85, in particular, its central region. The new Gemini data show clear evidence of a previously unreported nuclear emission that is evident as a distinct light excess in the central kiloparsec of the surface brightness profile. We find that the light profile is never flat nor does it present a downward trend toward the center of the galaxy. That is, the new Gemini data show a different physical reality from the featureless, “evacuated core” recently claimed for the Abell 85 BCG. After trying different models, we find that the surface brightness profile of the BCG of Abell 85 is best fit by a double Sérsic model.
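For reference, the Sérsic profile and its double-component sum can be written down directly; the b_n ≈ 2n − 1/3 approximation and all parameter values below are generic illustrations, not the fitted values for Holm 15A:

```python
import numpy as np

def sersic(R, I_e, R_e, n):
    # Sersic profile I(R) = I_e * exp(-b_n * ((R / R_e)**(1 / n) - 1)),
    # using the common approximation b_n ~ 2n - 1/3 (reasonable for n >~ 1).
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))

def double_sersic(R, I1, R1, n1, I2, R2, n2):
    # Two-component model: e.g. an inner nuclear excess plus an outer envelope.
    return sersic(R, I1, R1, n1) + sersic(R, I2, R2, n2)

R = np.linspace(0.05, 50.0, 500)     # radii in arbitrary units
profile = double_sersic(R, 10.0, 0.5, 1.5, 1.0, 10.0, 4.0)
```

A distinct central light excess like the one reported here appears as an inner component whose effective radius is far smaller than the envelope's, which is why a single Sérsic model underfits the central kiloparsec.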
NASA Astrophysics Data System (ADS)
Chen, Ying; Lo, Joseph Y.; Baker, Jay A.; Dobbins, James T., III
2006-03-01
Breast cancer is a major problem and the most common cancer among women. The nature of conventional mammography makes it very difficult to distinguish a cancer from overlying breast tissues. Digital Tomosynthesis refers to a three-dimensional imaging technique that allows reconstruction of an arbitrary set of planes in the breast from limited-angle series of projection images as the x-ray source moves. Several tomosynthesis algorithms have been proposed, including Matrix Inversion Tomosynthesis (MITS) and Filtered Back Projection (FBP), both of which have been investigated in our lab. MITS shows better high frequency response in removing out-of-plane blur, while FBP shows better low frequency noise properties. This paper presents an effort to combine MITS and FBP for better breast tomosynthesis reconstruction. A high-pass Gaussian filter was designed and applied to three-slice "slabbing" MITS reconstructions. A low-pass Gaussian filter was designed and applied to the FBP reconstructions. A frequency weighting parameter was studied to blend the high-passed MITS with low-passed FBP frequency components. Four different reconstruction methods were investigated and compared with human subject images: 1) MITS blended with Shift-And-Add (SAA), 2) FBP alone, 3) FBP with applied Hamming and Gaussian Filters, and 4) Gaussian Frequency Blending (GFB) of MITS and FBP. Results showed that, compared with FBP, Gaussian Frequency Blending (GFB) has better performance for high frequency content such as better reconstruction of micro-calcifications and removal of high frequency noise. Compared with MITS, GFB showed more low frequency breast tissue content.
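The blending step can be sketched as complementary Gaussian filtering in the 2-D Fourier domain; the filter shape, cutoff, and weighting below are assumptions, not the parameters tuned in the study:

```python
import numpy as np

def gaussian_frequency_blend(mits_slice, fbp_slice, sigma=0.1):
    # Combine the high-frequency content of a MITS slice with the
    # low-frequency content of the matching FBP slice.  The low-pass filter
    # is Gaussian in radial spatial frequency; its complement is the
    # high-pass applied to MITS.  sigma (cycles/pixel) is an assumed value.
    ny, nx = mits_slice.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    low_pass = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * sigma ** 2))
    spectrum = (1.0 - low_pass) * np.fft.fft2(mits_slice) \
               + low_pass * np.fft.fft2(fbp_slice)
    return np.real(np.fft.ifft2(spectrum))
```

Because the two filters sum to one at every frequency, the blend leaves an image unchanged when both inputs agree, and the DC (mean brightness) always comes from the FBP input.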
The magnitude-redshift relation for 561 Abell clusters
NASA Technical Reports Server (NTRS)
Postman, M.; Huchra, J. P.; Geller, M. J.; Henry, J. P.
1985-01-01
The Hubble diagram for the 561 Abell clusters with measured redshifts has been examined using Abell's (1958) corrected photo-red magnitudes for the tenth-ranked cluster member (m10). After correction for the Scott effect and K dimming, the data are in good agreement with a linear magnitude-redshift relation with a slope of 0.2 out to z = 0.1. New redshift data are also presented for 20 Abell clusters. Abell's m10 is suitable for redshift estimation for clusters with m10 of no more than 16.5. At fainter m10, the number of foreground galaxies expected within an Abell radius is large enough to make identification of the tenth-ranked galaxy difficult. Interlopers bias the estimated redshift toward low values at high redshift. Leir and van den Bergh's (1977) redshift estimates suffer from this same bias but to a smaller degree because of the use of multiple cluster parameters. Constraints on deviations of cluster velocities from the mean cosmological flow require greater photometric accuracy than is provided by Abell's m10 magnitudes.
NASA Astrophysics Data System (ADS)
Fang, Hongjian; Zhang, Haijiang; Yao, Huajian; Allam, Amir; Zigone, Dimitri; Ben-Zion, Yehuda; Thurber, Clifford; van der Hilst, Robert D.
2016-05-01
We introduce a new algorithm for joint inversion of body wave and surface wave data to obtain better 3-D P wave (Vp) and S wave (Vs) velocity models by taking advantage of the complementary strengths of each data set. Our joint inversion algorithm uses a one-step inversion of surface wave traveltime measurements at different periods for 3-D Vs and Vp models without constructing the intermediate phase or group velocity maps. This allows a more straightforward modeling of surface wave traveltime data with the body wave arrival times. We take into consideration the sensitivity of surface wave data with respect to Vp in addition to its large sensitivity to Vs, which means both models are constrained by two different data types. The method is applied to determine 3-D crustal Vp and Vs models using body wave and Rayleigh wave data in the Southern California plate boundary region, which has previously been studied both with the double-difference tomography method using body wave arrival times and with the ambient noise tomography method using Rayleigh and Love wave group velocity dispersion measurements. Our approach creates self-consistent and unique models with no prominent gaps, with Rayleigh wave data resolving shallow and large-scale features and body wave data constraining relatively deeper structures where their ray coverage is good. The velocity model from the joint inversion is consistent with local geological structures and produces better fits to observed seismic waveforms than the current Southern California Earthquake Center (SCEC) model.
Chand, S.; Minshull, T.A.; Priest, J.A.; Best, A.I.; Clayton, C.R.I.; Waite, W.F.
2006-01-01
The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L–38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.
Teaching about Inverse Functions
ERIC Educational Resources Information Center
Esty, Warren
2005-01-01
In their sections on inverses, most precalculus texts emphasize an algorithm for finding f^-1 given f. However, inspection of precalculus and calculus texts shows that students will never again use the algorithm, which suggests the textbook emphasis may be misplaced. Inverses appear primarily when equations need to be solved, which…
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnosis methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion-peeling method, and the two-dimensional Fourier transform method and its derivatives, such as the filtered back projection methods. This paper proposes a new approach for performing the Abel transform, which possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is evinced in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. A detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From the equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low wave number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high wave number noise. The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal
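The projection/distribution relationship can be made concrete with a small onion-peeling discretization of the Abel transform (one of the commonly used methods the text compares, not the filtered transform proposed here); the grid and test function are illustrative:

```python
import numpy as np

def onion_peeling_matrix(n, h):
    # Forward Abel transform of a profile assumed constant on annuli
    # [j*h, (j+1)*h): projection_i = sum_j W[i, j] * f_j, sampled at the
    # annulus centers y_i.  W is upper triangular, so recovering f from a
    # measured projection is a single back-substitution.
    edges = h * np.arange(n + 1)
    y = edges[:-1] + 0.5 * h
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            lo = max(edges[j], y[i])
            W[i, j] = 2.0 * (np.sqrt(edges[j + 1] ** 2 - y[i] ** 2)
                             - np.sqrt(lo ** 2 - y[i] ** 2))
    return y, W

# Check against a case with a known transform: f(r) = exp(-r^2) projects to
# F(y) = sqrt(pi) * exp(-y^2).
n, h = 200, 0.02
y, W = onion_peeling_matrix(n, h)
projection = np.sqrt(np.pi) * np.exp(-y ** 2)
f_rec = np.linalg.solve(W, projection)
print(np.max(np.abs(f_rec - np.exp(-y ** 2))))
```

With noisy projections the same back-substitution amplifies high-wave-number errors toward the axis, which is exactly the sensitivity the filtered variant is designed to suppress.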
NASA Astrophysics Data System (ADS)
Li, Tao; Mallick, Subhashis
2015-02-01
Consideration of azimuthal anisotropy, at least to an orthorhombic symmetry, is important in exploring naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than inversion of single-component (P wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. These multicomponent and multi-azimuthal seismic inversions are non-linear with non-unique solutions; it is therefore appropriate to treat the objectives as a vector and simultaneously optimize each of its components so that the optimal set of solutions can be obtained. The fast non-dominated sorting genetic algorithm (NSGA II) is a robust stochastic global search method capable of handling multiple objectives, but its computational expense increases with the number of objectives and the number of model parameters to be inverted for. In addition, accurate extraction of subsurface azimuthal anisotropy requires multicomponent seismic data acquired at a fine spatial resolution along many source-to-receiver azimuths. Because routine acquisition of such data is prohibitively expensive, they are typically available along two or at most three azimuthal orientations at a spatial resolution where such an inversion could be applied. This paper proposes a novel multi-objective methodology using a parallelized version of NSGA II for waveform inversion of multicomponent seismic displacement data along two azimuths. By scaling the objectives prior to ranking, redefining the crowding distance as functions of the scaled objective and the model spaces, and varying
Cool Core Disruption in Abell 1763
NASA Astrophysics Data System (ADS)
Douglass, Edmund; Blanton, Elizabeth L.; Clarke, Tracy E.; Randall, Scott W.; Edwards, Louise O. V.; Sabry, Ziad
2017-01-01
We present the analysis of a 20 ksec Chandra archival observation of the massive galaxy cluster Abell 1763. A model-subtracted image highlighting excess cluster emission reveals a large spiral structure winding outward from the core to a radius of ~950 kpc. We measure the gas of the inner spiral to have significantly lower entropy than non-spiral regions at the same radius. This is consistent with the structure resulting from merger-induced motion of the cluster’s cool core, a phenomenon seen in many systems. Atypical of spiral-hosting clusters, an intact cool core is not detected. Its absence suggests the system has experienced significant disruption since the initial dynamical encounter that set the sloshing core in motion. Along the major axis of the elongated ICM distribution we detect thermal features consistent with the merger event most likely responsible for cool core disruption. The merger-induced transition towards non-cool core status will be discussed. The interaction between the powerful (P_1.4 ~ 10^26 W Hz^-1) cluster-center WAT radio source and its ICM environment will also be discussed.
NASA Astrophysics Data System (ADS)
Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.
2015-12-01
The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ Application Programming Interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.
The Dark Matter filament between Abell 222/223
NASA Astrophysics Data System (ADS)
Dietrich, Jörg P.; Werner, Norbert; Clowe, Douglas; Finoguenov, Alexis; Kitching, Tom; Miller, Lance; Simionescu, Aurora
2016-10-01
Weak lensing detections and measurements of filaments have been elusive for a long time. The reason is that the low density contrast of filaments generally pushes the weak lensing signal to an unobservably low level. To nevertheless map the dark matter in filaments, exquisite data and unusual systems are necessary. SuprimeCam observations of the supercluster system Abell 222/223 provided the required combination of excellent-seeing images and a fortuitous alignment of the filament with the line of sight. This boosted the lensing signal to a detectable level and led to the first weak lensing mass measurement of a large-scale structure filament. The filament connecting Abell 222 and Abell 223 is now the only one traced by the galaxy distribution, dark matter, and X-ray emission from the hottest phase of the warm-hot intergalactic medium. The combination of these data allows us to put the first constraints on the hot gas fraction in filaments.
The merging cluster Abell 1758 revisited: multi-wavelength observations and numerical simulations
NASA Astrophysics Data System (ADS)
Durret, F.; Laganá, T. F.; Haider, M.
2011-05-01
Context. Cluster properties can be studied more distinctly in pairs of clusters, where we expect the effects of interactions to be strong. Aims: We here discuss the properties of the double cluster Abell 1758 at a redshift z ~ 0.279. These clusters show strong evidence for merging. Methods: We analyse the optical properties of the North and South clusters of Abell 1758 based on deep imaging obtained with the Canada-France-Hawaii Telescope (CFHT) archive Megaprime/Megacam camera in the g' and r' bands, covering a total region of about 1.05 × 1.16 deg^2, or 16.1 × 17.6 Mpc^2. Our X-ray analysis is based on archive XMM-Newton images. Numerical simulations were performed using an N-body algorithm to treat the dark-matter component, a semi-analytical galaxy-formation model for the evolution of the galaxies, and a grid-based hydrodynamic code with a piecewise parabolic method (PPM) scheme for the dynamics of the intra-cluster medium. We computed galaxy luminosity functions (GLFs) and 2D temperature and metallicity maps of the X-ray gas, which we then compared to the results of our numerical simulations. Results: The GLFs of Abell 1758 North are well fit by Schechter functions in the g' and r' bands, but with a small excess of bright galaxies, particularly in the r' band; their faint-end slopes are similar in both bands. In contrast, the GLFs of Abell 1758 South are not well fit by Schechter functions: excesses of bright galaxies are seen in both bands, and the faint end of the GLF is not very well defined in g'. The GLF computed from our numerical simulations assuming a halo mass-luminosity relation agrees with those derived from the observations. From the X-ray analysis, the most striking features are structures in the metal distribution. We found two elongated regions of high metallicity in Abell 1758 North with two peaks towards the centre. In contrast, Abell 1758 South shows a deficit of metals in its central regions. Comparing observational results to those derived from numerical
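As background, the Schechter form used for the GLF fits above can be written directly in magnitudes; the parameter values below are invented for illustration, not the paper's fitted values:

```python
# Schechter (1976) luminosity function in magnitude form:
# phi(M) dM = 0.4 ln(10) phi* x^(alpha+1) exp(-x) dM, with x = 10^(-0.4 (M - M*)).
import numpy as np

def schechter_mag(m, phi_star, m_star, alpha):
    """Number of galaxies per magnitude bin; phi_star sets the normalization,
    m_star the knee, alpha the faint-end slope."""
    x = 10.0 ** (-0.4 * (m - m_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

mags = np.linspace(16.0, 23.0, 8)
counts = schechter_mag(mags, phi_star=100.0, m_star=18.5, alpha=-1.1)
# Brighter than m*: exponential cutoff; fainter than m*: slow power-law rise
# for alpha < -1, which is what "faint-end slope" refers to in the abstract.
print(counts)
```

A bright-end excess over this form, as reported for Abell 1758 North, shows up as observed counts above the exponential cutoff.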
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high spectral resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using first-guess information is developed to retrieve temperature, water vapor, and ozone atmospheric profiles simultaneously. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres including rare events.
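The PCA compression and de-noising step can be illustrated on a toy spectral data set (the dimensions, mode shapes, and noise level here are invented; operational IASI processing involves thousands of channels):

```python
# Minimal sketch of PCA compression / de-noising of spectra: project onto the
# leading principal components, then reconstruct from those components only.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_channels, n_pc = 500, 100, 5

# Synthetic "observations": a few smooth spectral modes plus instrument noise.
t = np.linspace(0.0, 1.0, n_channels)
modes = np.vstack([np.sin((k + 1) * np.pi * t) for k in range(n_pc)])
spectra = rng.normal(size=(n_obs, n_pc)) @ modes \
    + 0.01 * rng.normal(size=(n_obs, n_channels))

# PCA via SVD of the mean-centred data matrix.
mean = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)

# Compression: a handful of scores per observation instead of all channels...
scores = (spectra - mean) @ Vt[:n_pc].T          # shape (n_obs, n_pc)
# ...and de-noising: reconstruct from the retained components only.
denoised = scores @ Vt[:n_pc] + mean

rms_residual = np.sqrt(np.mean((denoised - spectra) ** 2))
print(rms_residual)  # close to the 0.01 noise level: mostly noise was removed
```

The compressed scores are also what makes the fast first-guess pattern search cheap: nearest-neighbour lookup happens in a 5-dimensional space rather than a full-channel space.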
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
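For Gaussian fits, removing the beam from a measured source size amounts to quadrature subtraction along each axis; a one-axis sketch (the survey's actual 2D Gaussian-fit deconvolution also handles position angle, and the numbers here are only illustrative):

```python
# For a Gaussian source observed with a Gaussian beam, the fitted width is the
# quadrature sum of the intrinsic width and the beam width, so the intrinsic
# size follows by quadrature subtraction. Values in arcmin, illustrative only.
import math

def deconvolved_size(fitted_fwhm, beam_fwhm):
    """Intrinsic FWHM after removing the beam in quadrature.
    Returns 0.0 for unresolved sources (fit no broader than the beam)."""
    if fitted_fwhm <= beam_fwhm:
        return 0.0
    return math.sqrt(fitted_fwhm**2 - beam_fwhm**2)

beam = 10.8                          # illustrative beam FWHM, arcmin
print(deconvolved_size(13.5, beam))  # resolved source: intrinsic size recovered
print(deconvolved_size(10.0, beam))  # unresolved -> 0.0
```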
LensPerfect Analysis of Abell 1689
NASA Astrophysics Data System (ADS)
Coe, Dan A.
2007-12-01
I present the first mass map to perfectly reproduce the position of every gravitationally-lensed multiply-imaged galaxy detected to date in ACS images of Abell 1689. This mass map was obtained using a powerful new technique made possible by a recent advance in the field of mathematics. It is the highest-resolution assumption-free dark matter mass map to date, with the resolution limited only by the number of multiple images detected. We detect 8 new multiple-image systems and identify multiple knots in individual galaxies to constrain a grand total of 168 knots within 135 multiple images of 42 galaxies. No assumptions are made about mass tracing light, and yet the brightest visible structures in A1689 are reproduced in our mass map, a few with intriguing positional offsets. Our mass map probes radii smaller than those resolvable in current dark matter simulations of galaxy clusters. At these radii, we observe slight deviations from the NFW and Sersic profiles which describe simulated dark matter halos so well. While we have demonstrated that our method is able to recover a known input mass map (to limited resolution), further tests are necessary to determine the uncertainties of our mass profile and the positions of massive subclumps. I compile the latest weak lensing data from ACS, Subaru, and CFHT, and attempt to fit a single profile, either NFW or Sersic, to both the observed weak and strong lensing. I confirm the finding of most previous authors that no single profile fits both extremely well simultaneously. Slight deviations are revealed, with the best fits slightly over-predicting the mass profile at both large and small radii. Our easy-to-use software, called LensPerfect, will be made available soon. This research was supported by the European Commission Marie Curie International Reintegration Grant 017288-BPZ and the PNAYA grant AYA2005-09413-C02.
Revisiting Abell 2744: a powerful synergy of GLASS spectroscopy and HFF photometry
NASA Astrophysics Data System (ADS)
Wang, Xin
We present new emission line identifications and improve the lensing reconstruction of the mass distribution of the galaxy cluster Abell 2744 using Grism Lens-Amplified Survey from Space (GLASS) spectroscopy and Hubble Frontier Fields (HFF) imaging. We performed blind and targeted searches for faint line emitters on all objects, including the arc sample, within the field of view (FoV) of the GLASS prime pointings. We report 55 high-quality spectroscopic redshifts, 5 of which are for arc images. We also present an extensive analysis based on the HFF photometry, measuring the colors and photometric redshifts of all objects within the FoV and comparing the spectroscopic and photometric redshift estimates. In order to improve the lens model of Abell 2744, we develop a rigorous algorithm to screen arc images based on their colors and morphology, and select the most reliable ones to use. As a result, 25 systems (corresponding to 72 images) pass the screening process and are used to reconstruct the gravitational potential of the cluster, pixellated on an adaptive mesh. The resulting total mass distribution is compared with a stellar mass map obtained from the Spitzer Frontier Fields data in order to study the relative distribution of stars and dark matter in the cluster.
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Sidky, Emil Y.
2015-03-01
Photon-counting detectors with pulse-height analysis have shown promise for improved spectral CT imaging. This study investigated a novel spectral CT reconstruction method that directly estimates basis-material images from the measured energy-bin data (i.e., `one-step' reconstruction). The proposed algorithm can incorporate constraints to stabilize the reconstruction and potentially reduce noise. The algorithm minimizes the error between the measured energy-bin data and the data estimated from the reconstructed basis images. A total variation (TV) constraint was also investigated for additional noise reduction. The proposed one-step algorithm was applied to simulated data of an anthropomorphic phantom with heterogeneous tissue composition. Reconstructed water, bone, and gadolinium basis images were compared for the proposed one-step algorithm and the conventional `two-step' method of decomposition followed by reconstruction. The unconstrained algorithm provided a 30% to 60% reduction in noise standard deviation compared to the two-step algorithm. The f_TV = 0.8 constraint provided a small reduction in noise (~1%) compared to the unconstrained reconstruction. Images reconstructed with the f_TV = 0.5 constraint demonstrated 77% to 94% standard deviation reduction compared to the two-step reconstruction, however with increased blurring. There were no significant differences in the mean values reconstructed by the investigated algorithms. Overall, the proposed one-step spectral CT reconstruction algorithm provided three-material-decomposition basis images with reduced noise compared to the conventional two-step approach. When using a moderate TV constraint factor (f_TV = 0.8), a 30%-60% reduction in noise standard deviation was achieved while preserving the edge profile for this simulated phantom.
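The forward model that the one-step algorithm inverts can be sketched for a single ray; the source spectrum, attenuation values, and bin edges below are made up for illustration, not calibrated detector data:

```python
# Expected counts per energy bin for a ray through water/bone/gadolinium
# thicknesses: Beer-Lambert attenuation per energy, then summation over bins.
# The one-step method adjusts the basis-material images until these predicted
# bin counts match the measured ones.
import numpy as np

energies = np.array([40.0, 60.0, 80.0, 100.0])   # keV grid (coarse toy grid)
source = np.array([1e5, 2e5, 1.5e5, 0.5e5])      # photons per energy, invented
mu = np.array([                                  # linear attenuation, 1/cm, invented
    [0.27, 0.21, 0.18, 0.17],                    # water
    [1.28, 0.57, 0.36, 0.28],                    # bone
    [7.00, 2.50, 4.00, 2.30],                    # gadolinium (K-edge near 50 keV)
])
bins = [(0, 2), (2, 4)]                          # two energy bins over the grid

def expected_bin_counts(thickness_cm):
    """Attenuate the source spectrum, then sum the transmitted photons per bin."""
    transmission = np.exp(-(thickness_cm @ mu))  # per-energy transmission
    spectrum = source * transmission
    return np.array([spectrum[lo:hi].sum() for lo, hi in bins])

ray = np.array([20.0, 2.0, 0.05])                # cm of water, bone, Gd
print(expected_bin_counts(ray))
```

Minimizing the mismatch between these predicted bin counts and the measured ones, summed over all rays and subject to a TV constraint on the basis images, is the optimization the abstract describes.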
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies, proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into three main steps. First, a symbolic analysis step performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization, and estimates the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute in parallel the gradient of the cost function. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor
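The factor-once, solve-many pattern that makes a direct solver attractive for multi-source frequency-domain modeling can be sketched on a serial 1D toy (real-valued and lossless, unlike the 2D visco-acoustic complex-impedance problem; grid and velocity values are invented):

```python
# 1D Helmholtz equation (d^2/dx^2 + k^2) u = -s discretized with second-order
# finite differences and Dirichlet ends, factorized once by sparse LU and then
# solved for several shots -- the same pattern MUMPS applies at full scale.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n, h = 200, 10.0                    # grid points, spacing (m)
freq, c = 5.0, 2000.0               # Hz, homogeneous velocity (m/s)
k2 = (2.0 * np.pi * freq / c) ** 2

main = np.full(n, -2.0 / h**2 + k2)
off = np.full(n - 1, 1.0 / h**2)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

lu = splu(A)                        # factor once (the expensive step)...
wavefields = []
for src in (50, 100, 150):          # ...then cheap triangular solves per shot
    rhs = np.zeros(n)
    rhs[src] = -1.0 / h             # point source
    wavefields.append(lu.solve(rhs))

print(len(wavefields), wavefields[0].shape)
```

With the factors kept in memory, the two solves per shot needed for the gradient (forward field and back-propagated residuals) reuse the same factorization.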
Retrieval Performance and Indexing Differences in ABELL and MLAIB
ERIC Educational Resources Information Center
Graziano, Vince
2012-01-01
Searches for 117 British authors are compared in the Annual Bibliography of English Language and Literature (ABELL) and the Modern Language Association International Bibliography (MLAIB). Authors are organized by period and genre within the early modern era. The number of records for each author was subdivided by format, language of publication,…
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and tsunami and earthquake risk estimation. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the tsunami and/or earthquake source and can solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve this problem and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
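The SVD-based diagnosis of ill-posedness and the quasi-solution mentioned above can be sketched generically; the forward operator here is a plain Gaussian smoothing matrix standing in for a tsunami propagation model, and all sizes are invented:

```python
# Truncated-SVD quasi-solution of an ill-posed linear inverse problem:
# inspect the singular-value decay to gauge ill-posedness, then invert only
# the well-determined part of the spectrum.
import numpy as np

rng = np.random.default_rng(1)
n = 50
x_grid = np.linspace(0.0, 1.0, n)
# Smoothing (hence ill-conditioned) forward operator: Gaussian kernel.
A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / (2.0 * 0.05**2))

x_true = np.exp(-((x_grid - 0.5) ** 2) / (2.0 * 0.1**2))   # unknown "source"
data = A @ x_true + 1e-3 * rng.normal(size=n)              # noisy observations

U, s, Vt = np.linalg.svd(A)
print(s[0] / s[-1])            # enormous condition number -> ill-posed

k = int(np.sum(s > 1e-2 * s[0]))                           # truncation level
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ data) / s[:k])          # quasi-solution

print(np.max(np.abs(x_tsvd - x_true)))                     # small recovery error
```

Directly inverting A would amplify the noise by the reciprocal of the smallest singular values; truncation trades a small bias for that unbounded variance, which is the essence of the quasi-solution approach.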
NASA Astrophysics Data System (ADS)
Belchansky, G.; Alpatsky, I.; Mordvintsev, I.; Douglas, D.
Investigating new methods to estimate sea-ice geophysical parameters using multisensor satellite data is critical for global change studies. The most widely used and consistent data for studying sea ice at global scale are the SMMR and SSM/I passive microwave measurements available since 1978. However, comparisons with LANDSAT, AVHRR and ERS-1 SAR have demonstrated substantial seasonal and regional differences in SSM/I ice parameter estimates (Belchansky and Douglas, 2000, 2002). This report presents methods for improving SSM/I and OKEAN sea-ice parameter inversion using MLP neural networks, and compares the sea-ice classification results from different neural networks and linear mixture models. The efficiency of four sea-ice type inversion (classification) algorithms utilizing SSM/I, OKEAN-01, ERS and RADARSAT satellite data was compared and investigated. The first applied different linear mixture models (NASA Team, Bootstrap, and OKEAN). The second, third and fourth algorithms applied modified MLP neural networks with different learning algorithms based, respectively, on 1) error back-propagation and simulated annealing (Kirkpatrick, 1983); 2) dynamic learning and polynomial basis functions (Chen et al., 1996); and 3) dynamic learning and two-step optimization. The last two algorithms used the Kalman filtering technique. Our studies demonstrated that both modified MLP neural networks with dynamic learning were more efficient (in terms of learning time, accuracy, and ability to generalize the selected learning data) than the modified MLP neural network with a learning algorithm based on error back-propagation and simulated annealing for simple approximation problems. Multiyear (MY) sea-ice and albedo inversion from SSM/I brightness temperatures and the respective OKEAN learning data sets demonstrated that these algorithms caused over-fitting in comparison with the MLP neural network with error back-propagation and simulated annealing. Therefore, for MY sea ice inversion
Wang, Jing; Guan, Huaiqun; Solberg, Timothy
2011-07-15
Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to its corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
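The PWLS objective being minimized can be sketched in one dimension; the weights, penalty, and the plain gradient-descent loop below are illustrative stand-ins for the paper's projection-domain implementation:

```python
# Penalized weighted least-squares smoothing of a noisy 1D "projection":
# data fidelity weighted by (an estimate of) inverse noise variance plus a
# quadratic neighbourhood roughness penalty controlled by beta -- the penalty
# parameter whose selection the paper addresses.
import numpy as np

def pwls_smooth(y, weights, beta, n_iter=200):
    """Minimize sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_i - x_{i-1})^2
    by gradient descent with a conservative fixed step."""
    x = y.copy()
    step = 0.9 / (weights.max() + 4.0 * beta)
    for _ in range(n_iter):
        grad = weights * (x - y)
        grad[:-1] += beta * (x[:-1] - x[1:])   # forward-difference penalty term
        grad[1:] += beta * (x[1:] - x[:-1])    # backward-difference penalty term
        x -= 2.0 * step * grad
    return x

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, np.pi, 100))
noisy = truth + 0.1 * rng.normal(size=100)
weights = np.full(100, 1.0)                    # ~ inverse variance, uniform here
smoothed = pwls_smooth(noisy, weights, beta=5.0)
print(np.std(smoothed - truth) < np.std(noisy - truth))  # noise suppressed
```

A larger beta smooths more aggressively at the cost of bias; the paper's contribution is computing beta from a matched high-mAs scan instead of picking it by trial and error.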
An efficient and fast parallel method for Volterra integral equations of Abel type
NASA Astrophysics Data System (ADS)
Capobianco, Giovanni; Conte, Dajana
2006-05-01
In this paper we present an efficient and fast parallel waveform relaxation method for Volterra integral equations of Abel type, obtained by reformulating a nonstationary waveform relaxation method for systems of equations with a linear constant-coefficient kernel. To this aim we consider the Laplace transform of the equation, to which we apply the recurrence relation given by the Chebyshev polynomial acceleration for algebraic linear systems. Back in the time domain, we obtain a three-term recursion which requires, at each iteration, the evaluation of convolution integrals, where only the Laplace transform of the kernel is known. For this calculation we can use a fast convolution algorithm. Numerical experiments have also been carried out on problems where it is not possible to use the original nonstationary method, obtaining good results in terms of improvement of the rate of convergence with respect to the stationary method.
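As background on the equations targeted here (not the paper's waveform-relaxation iteration), a minimal direct solver for a scalar Abel-type Volterra equation shows why the weakly singular kernel needs product integration rather than ordinary quadrature:

```python
# Product-integration (piecewise-constant collocation) solver for
#     integral_0^t (t - s)^(-1/2) y(s) ds = f(t).
# The kernel is integrable but singular at s = t, so it is integrated
# analytically over each step against a piecewise-constant y.
import numpy as np

def solve_abel(f, t_max, n):
    h = t_max / n
    t = h * np.arange(1, n + 1)          # collocation points t_k
    y = np.zeros(n)
    for k in range(n):
        # w[j] = integral over [j h, (j+1) h] of (t_k - s)^(-1/2) ds, exactly:
        w = 2.0 * (np.sqrt(t[k] - h * np.arange(k + 1))
                   - np.sqrt(t[k] - h * np.arange(1, k + 2)))
        # Lower-triangular system solved by forward substitution:
        y[k] = (f(t[k]) - np.dot(w[:k], y[:k])) / w[k]
    return t, y

# Check: f(t) = 2 sqrt(t) corresponds to the exact solution y = 1.
t, y = solve_abel(lambda t: 2.0 * np.sqrt(t), 1.0, 50)
print(np.max(np.abs(y - 1.0)))   # essentially zero: this f makes y = 1 exact
```

For systems of such equations the triangular solve above becomes expensive, which is what motivates the transform-domain relaxation and fast convolution machinery of the paper.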
A weak-lensing analysis of the Abell 383 cluster
NASA Astrophysics Data System (ADS)
Huang, Z.; Radovich, M.; Grado, A.; Puddu, E.; Romano, A.; Limatola, L.; Fu, L.
2011-05-01
Aims: We use deep CFHT and SUBARU uBVRIz archival images of the Abell 383 cluster (z = 0.187) to estimate its mass by weak lensing. Methods: To this end, we first use simulated images to check the accuracy provided by our Kaiser-Squires-Broadhurst (KSB) pipeline. These simulations include shear testing programme (STEP) 1 and 2 simulations, as well as more realistic simulations of the distortion of galaxy shapes by a cluster with a Navarro-Frenk-White (NFW) profile. From these simulations we estimate the effect of noise on shear measurement and derive the correction terms. The R-band image is used to derive the mass by fitting the observed tangential shear profile with an NFW mass profile. Photometric redshifts are computed from the uBVRIz catalogs. Different methods for the foreground/background galaxy selection are implemented, namely selection by magnitude, color, and photometric redshifts, and the results are compared. In particular, we developed a semi-automatic algorithm to select the foreground galaxies in the color-color diagram, based on the observed colors. Results: Using color selection or photometric redshifts improves the correction of dilution from foreground galaxies: this leads to higher signals in the inner parts of the cluster. We obtain a cluster mass M_vir = 7.5^{+2.7}_{-1.9} × 10^14 M⊙: this value is 20% higher than previous estimates and is more consistent with the mass expected from X-ray data. The R-band luminosity function of the cluster is computed and gives a total luminosity L_tot = (2.14 ± 0.5) × 10^12 L⊙ and a mass-to-luminosity ratio M/L ≈ 300 M⊙/L⊙. Based on data collected with the Subaru Telescope (University of Tokyo) and obtained from SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan, and on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
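The MLMC idea can be sketched with a toy solver; here a midpoint quadrature of a random integrand stands in for the elliptic nonlocal solve, and all level/sample counts are invented:

```python
# Multilevel Monte Carlo telescoping estimator:
#   E[Q_L] = E[Q_l0] + sum_{l > l0} E[Q_l - Q_{l-1}],
# with the SAME random input used at both levels of each difference so the
# correction terms have low variance and need few (expensive) fine samples.
import numpy as np

rng = np.random.default_rng(0)

def q_level(theta, level):
    """Quantity of interest at discretization level `level`: midpoint rule
    with 2**level cells for integral_0^1 exp(theta * x) dx."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(theta * x))

def mlmc(levels, samples_per_level):
    est = 0.0
    for level, n_samp in zip(levels, samples_per_level):
        thetas = rng.normal(size=n_samp)          # random inputs
        if level == levels[0]:
            contrib = [q_level(t, level) for t in thetas]
        else:
            # coupled coarse/fine evaluations at the same theta
            contrib = [q_level(t, level) - q_level(t, level - 1) for t in thetas]
        est += np.mean(contrib)
    return est

# Many cheap coarse samples, few expensive fine samples.
estimate = mlmc(levels=[2, 4, 6], samples_per_level=[20000, 2000, 200])
print(estimate)   # approx E[(e^theta - 1)/theta] for theta ~ N(0,1), about 1.195
```

The cost saving the abstract refers to comes from exactly this allocation: the variance lives at the cheap coarse level, while the fine levels only correct a small bias.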
NASA Astrophysics Data System (ADS)
Stachlewska, Iwona S.; Christoph, Ritter; Neuber, Roland
2005-10-01
Background aerosol conditions and conditions contaminated with aerosol of anthropogenic origin (Arctic haze) were investigated during two Arctic campaigns, the Arctic Study of Tropospheric Aerosols, Clouds and Radiation (ASTAR) in 2004 and the Svalbard Experiment (SVALEX) in 2005, respectively. Results obtained by applying the two-stream inversion algorithm to the elastic lidar signals measured on two days representative of each campaign are presented. The calculations used signals obtained by the nadir-looking Airborne Mobile Aerosol Lidar (AMALi) probing the lower troposphere from the AWI research aircraft Polar 2 while overflying the stationary Koldewey Aerosol Raman Lidar (KARL) based at the AWI Koldewey Research Station in Ny Ålesund, Svalbard. The method allowed independent retrieval of extinction and backscatter coefficient profiles and lidar ratio profiles for each of the two days, representative of both clean and polluted lower-troposphere conditions in the Arctic.
Fox, Andrew; Williams, Mathew; Richardson, Andrew D.; Cameron, David; Gove, Jeffrey H.; Quaife, Tristan; Ricciuto, Daniel M; Reichstein, Markus; Tomelleri, Enrico; Trudinger, Cathy; Van Wijk, Mark T.
2009-10-01
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 g C m^-2 year^-1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence
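The Metropolis sampling used by several REFLEX participants can be sketched on a one-parameter toy; the "model" below (a single decaying-flux rate parameter) and all values are invented, not the REFLEX C model:

```python
# Random-walk Metropolis for parameter estimation: propose a perturbed
# parameter, accept with probability given by the likelihood ratio, and read
# the posterior (mean and confidence interval) off the retained chain.
import numpy as np

rng = np.random.default_rng(42)

true_rate = 2.0
t = np.arange(100, dtype=float)
obs = true_rate * np.exp(-0.01 * t) + rng.normal(0.0, 0.2, size=100)  # noisy "flux"

def log_likelihood(rate, sigma=0.2):
    resid = obs - rate * np.exp(-0.01 * t)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(n_steps=5000, step=0.05, start=1.0):
    chain = np.empty(n_steps)
    current, ll = start, log_likelihood(start)
    for i in range(n_steps):
        prop = current + step * rng.normal()
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
            current, ll = prop, ll_prop
        chain[i] = current
    return chain

chain = metropolis()
posterior = chain[1000:]                  # discard burn-in
print(posterior.mean(), posterior.std())  # near 2.0, with a narrow spread
```

The width of the retained chain is exactly the kind of parameter confidence interval whose variation across algorithms the project examined.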
Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D
2016-05-01
An efficient inverse problem approach for parameter estimation and state and structure identification from dynamic data, by embedding training functions in a genetic algorithm methodology (ETFGA), is proposed for nonlinear dynamical biosystems using S-system canonical models. The use of multiple shooting and a decomposition approach as training functions is shown to handle noisy datasets and to provide computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying three biochemical model systems of interest. By studying a small-scale gene regulatory system described by an S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA in comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells, now assuming limited availability of noisy data. Here, the flexibility of the approach to incorporate partial system information in the identification process is shown, and its effect on the accuracy and predictive ability of the estimated model is studied. The third example studies the phenomenological toy model of the regulation of circadian oscillations in Drosophila, which follows rate laws different from the S-system power law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities.
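The S-system canonical form being estimated writes each state's rate as a difference of two power laws; a minimal two-variable example (all rate constants and kinetic orders invented for illustration):

```python
# S-system canonical model:
#   dx_i/dt = alpha_i * prod_j x_j^g_ij  -  beta_i * prod_j x_j^h_ij
# Identifying the alphas, betas, and the kinetic-order exponents g, h from
# noisy time series is the inverse problem the ETFGA methodology addresses.
import numpy as np

alpha = np.array([2.0, 1.5])      # production rate constants
beta = np.array([1.0, 1.0])       # degradation rate constants
g = np.array([[0.0, -0.8],        # production kinetic orders g_ij
              [0.5, 0.0]])        # (x2 represses x1; x1 activates x2)
h = np.array([[0.7, 0.0],         # degradation kinetic orders h_ij
              [0.0, 0.6]])

def s_system_rhs(x):
    prod_g = np.prod(x ** g, axis=1)
    prod_h = np.prod(x ** h, axis=1)
    return alpha * prod_g - beta * prod_h

x = np.array([1.0, 1.0])
dt = 0.01
for _ in range(5000):             # crude forward-Euler integration to t = 50
    x = x + dt * s_system_rhs(x)

print(x, s_system_rhs(x))         # derivatives near zero: a steady state
```

A zero in g or h switches an interaction off entirely, which is why structure identification (which exponents are nonzero) and parameter estimation can be posed in the same search space.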
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
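A minimal illustration of the multilevel Monte Carlo idea (a telescoping sum of a coarse estimate plus coupled level-to-level corrections); the quantity of interest and level hierarchy below are toy assumptions, far simpler than the elliptic nonlocal problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def P(u, level):
    # Level-l approximation of the quantity of interest: the integral of
    # exp(u*x) over [0,1], midpoint rule with 2**level cells.
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(u * x))

def mlmc(levels, samples_per_level):
    # Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # with each correction term coupled through a shared random input.
    est = 0.0
    for l, n in zip(range(levels + 1), samples_per_level):
        u = rng.normal(size=n)          # random input parameter
        if l == 0:
            est += np.mean([P(ui, 0) for ui in u])
        else:
            est += np.mean([P(ui, l) - P(ui, l - 1) for ui in u])
    return est

# many samples on the cheap coarse level, few on the expensive fine ones
estimate = mlmc(4, [4000, 2000, 1000, 500, 250])
print(estimate)
```

Because the coupled corrections have small variance, most samples can be spent on the cheap coarse level, which is the source of the work reduction the abstract describes.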
Lin, Lin; Yang, Chao; Lu, Jiangfeng; Ying, Lexing; E, Weinan
2009-09-25
We present an efficient parallel algorithm and its implementation for computing the diagonal of $H^{-1}$, where $H$ is a 2D Kohn-Sham Hamiltonian discretized on a rectangular domain using a standard second order finite difference scheme. This type of calculation can be used to obtain an accurate approximation to the diagonal of a Fermi-Dirac function of $H$ through a recently developed pole-expansion technique (Lin, Lu, Ying & E, 2009). The diagonal elements are needed in electronic structure calculations for quantum mechanical systems (Hohenberg & Kohn 1964; Kohn & Sham 1965; Dreizler & Gross 1990). We show how an elimination tree is used to organize the parallel computation and how synchronization overhead is reduced by passing data level by level along this tree using the technique of local buffers and relative indices. We analyze the performance of our implementation by examining its load balance and communication overhead. We show that our implementation exhibits excellent weak scaling on a large-scale high performance distributed parallel machine. When compared with the standard approach for evaluating the diagonal of a Fermi-Dirac function of a Kohn-Sham Hamiltonian associated with a 2D electron quantum dot, the new pole-expansion technique that uses our algorithm to compute the diagonal of $(H - z_i I)^{-1}$ for a small number of poles $z_i$ is much faster, especially when the quantum dot contains many electrons.
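For a small 1D analogue, the quantity the algorithm targets, the diagonal of a shifted inverse $(H - z_i I)^{-1}$, can be formed directly; the grid, potential and pole below are illustrative, and a dense solve stands in for the paper's selective parallel computation:

```python
import numpy as np

# 1D finite-difference analogue of a Kohn-Sham Hamiltonian:
# H = -(1/2) d^2/dx^2 + V(x), second-order stencil on a uniform grid.
n, h = 200, 0.1
x = h * np.arange(n)
V = 0.5 * (x - x.mean()) ** 2                  # confining potential
H = (np.diag(1.0 / h**2 + V)
     + np.diag(-0.5 / h**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / h**2 * np.ones(n - 1), -1))

# Diagonal of the shifted inverse (H - zI)^{-1} for one complex pole z.
# A dense inverse suffices at this size; the paper's algorithm computes
# such diagonals selectively and in parallel for large 2D Hamiltonians.
z = 1.0 + 0.5j
diag_inv = np.diag(np.linalg.inv(H - z * np.eye(n)))
print(diag_inv[:3])
```

In the pole-expansion setting, a weighted sum of such diagonals over a small set of complex poles approximates the diagonal of the Fermi-Dirac function of H.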
X-Ray Imaging-Spectroscopy of Abell 1835
NASA Technical Reports Server (NTRS)
Peterson, J. R.; Paerels, F. B. S.; Kaastra, J. S.; Arnaud, M.; Reiprich T. H.; Fabian, A. C.; Mushotzky, R. F.; Jernigan, J. G.; Sakelliou, I.
2000-01-01
We present detailed spatially-resolved spectroscopy results of the observation of Abell 1835 using the European Photon Imaging Cameras (EPIC) and the Reflection Grating Spectrometers (RGS) on the XMM-Newton observatory. Abell 1835 is a luminous (10^46 erg/s), medium redshift (z = 0.2523), X-ray emitting cluster of galaxies. The observations support the interpretation that large amounts of cool gas are present in a multi-phase medium surrounded by a hot (kT_e = 8.2 keV) outer envelope. We detect O VIII Lyα and two Fe XXIV complexes in the RGS spectrum. The emission measure of the cool gas below kT_e = 2.7 keV is much lower than expected from standard cooling-flow models, suggesting either a more complicated cooling process than simple isobaric radiative cooling or differential cold absorption of the cooler gas.
NASA Astrophysics Data System (ADS)
Ganapol, B. D.; Furfaro, R.; Johnson, L. F.; Herwitz, S. R.
2003-12-01
Over the past two years, NASA has had great interest in exploring the economic potential of deploying UAVs (Unmanned Aerial Vehicles) as long-duration platforms equipped with high resolution imaging systems for commercial agricultural applications. In October 2002, a team in the Ecosystem Science and Technology Branch at NASA/Ames Research Center prepared and successfully flew a UAV, equipped with off-the-shelf camera systems, over coffee plantations at Kauai (Hawaii). The idea is to help growers find the best possible harvesting strategy. The most important information that needs to be conveyed to the growers is the percentage of ripe, unripe and overripe cherries in the field. It is of vital importance to devise a robust and reliable "intelligent" algorithm capable of predicting the amount of ripe cherries present in any digital image coming from the onboard cameras. During the campaign, the two UAV camera systems produced digital images that contain information about the down-looking plantation field. These images need to be processed to extract information concerning the percentage of ripe (yellow) cherries. To date, no robust automated algorithm has been developed to perform this task. Currently, every image is viewed by human eyes on a case-by-case basis. We propose a neural network algorithm that can automate the process in an intelligent way. Biologically inspired neural networks are made of elements called "neurons" that can simulate brain activity during a learning process. The idea is to design an appropriate neural network that learns the relation between the reflectance coming from an image and the percentage of cherries present in a coffee field. We envision a situation in which reflectance from digital images at different wavebands is processed by a trained neural network and the percentage of the different cherries estimated. The key factor is training the network to recognize the reflectance/cherry percentage relation. Over the past few
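A minimal sketch of the proposed mapping, a small neural network regressing waveband reflectances onto a ripeness fraction; the data are synthetic stand-ins for the UAV imagery, and the architecture is an assumption:

```python
import numpy as np

# One-hidden-layer network mapping 3 waveband reflectances to a ripe-cherry
# fraction. Data are synthetic stand-ins for the UAV imagery.
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (300, 3))                    # reflectances per image
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2             # synthetic ripe fraction
y = (y / y.max()).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)  # hidden layer (8 units)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)  # linear output
lr = 0.3
for _ in range(2000):                              # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    dh = (err @ W2.T) * (1 - h ** 2)               # backprop through tanh
    W2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dh / len(X);  b1 -= lr * dh.mean(0)
print(float(np.mean(err ** 2)))                    # final training MSE
```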
The GenABEL Project for statistical genomics
Karssen, Lennart C.; van Duijn, Cornelia M.; Aulchenko, Yurii S.
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the "core team", facilitating agile statistical omics methodology development and fast dissemination. PMID:27347381
SPITZER OBSERVATIONS OF ABELL 1763. I. INFRARED AND OPTICAL PHOTOMETRY
Edwards, Louise O. V.; Fadda, Dario; Biviano, Andrea
2010-02-15
We present a photometric analysis of the galaxy cluster Abell 1763 at visible and infrared wavelengths. Included are fully reduced images in r', J, H, and K_s obtained using the Palomar 200in telescope, as well as the IRAC and MIPS images from Spitzer. The cluster is covered out to approximately 3 virial radii with deep 24 μm imaging (a 5σ depth of 0.2 mJy). This same field of ≈40' x 40' is covered in all four IRAC bands as well as the longer wavelength MIPS bands (70 and 160 μm). The r' imaging covers ≈0.8 deg^2 down to 25.5 mag, and overlaps with most of the MIPS field of view. The J, H, and K_s images cover the cluster core and roughly half of the filament galaxies, which extend toward the neighboring cluster, Abell 1770. This paper, the first in a series on Abell 1763, discusses the data reduction methods and source extraction techniques used for each data set. We present catalogs of infrared sources (with 24 and/or 70 μm emission) and their corresponding emission in the optical (u', g', r', i', z'), and near- to far-IR (J, H, K_s, IRAC, and MIPS 160 μm). We provide the catalogs and reduced images to the community through the NASA/IPAC Infrared Science Archive.
NASA Astrophysics Data System (ADS)
Aragón, J. L.; Vázquez Polo, G.; Gómez, A.
A computational algorithm for the generation of quasiperiodic tiles based on the cut and projection method is presented. The algorithm is capable of projecting any type of lattice embedded in any Euclidean space onto any subspace, making it possible to generate quasiperiodic tiles with any desired symmetry. The simplex method of linear programming and the Moore-Penrose generalized inverse are used to construct the cut (strip) in the higher dimensional space which is to be projected.
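The cut-and-projection construction can be illustrated in its simplest setting: projecting the square lattice Z^2 through a strip onto an irrational line yields a quasiperiodic (Fibonacci) chain. This toy version hard-codes the projection directions and so bypasses the paper's simplex/Moore-Penrose machinery for general lattices:

```python
import numpy as np

# Project Z^2 lattice points lying inside a strip onto a line with slope
# 1/tau (tau = golden ratio); the accepted points form a quasiperiodic
# Fibonacci chain with two tile lengths.
tau = (1 + np.sqrt(5)) / 2
norm = np.sqrt(tau**2 + 1)
e_par = np.array([tau, 1.0]) / norm                # physical direction
e_perp = np.array([-1.0, tau]) / norm              # internal direction

pts = np.array([(i, j) for i in range(-20, 21)
                for j in range(-20, 21)], dtype=float)
# acceptance window = projection of the unit cell onto the internal axis
window = (abs(e_perp[0]) + abs(e_perp[1])) / 2
accepted = pts[np.abs(pts @ e_perp) <= window]
chain = np.sort(accepted @ e_par)                  # 1D quasiperiodic tiling
gaps = np.round(np.diff(chain), 6)
print(sorted(set(gaps)))                           # two spacings, ratio = tau
```

Only two distinct spacings occur, and their ratio is the golden ratio, the hallmark of the Fibonacci quasicrystal.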
RADIO AND DEEP CHANDRA OBSERVATIONS OF THE DISTURBED COOL CORE CLUSTER ABELL 133
Randall, S. W.; Nulsen, P. E. J.; Forman, W. R.; Murray, S. S.; Clarke, T. E.; Owers, M. S.; Sarazin, C. L.
2010-10-10
We present results based on new Chandra and multi-frequency radio observations of the disturbed cool core cluster Abell 133. The diffuse gas has a complex bird-like morphology, with a plume of emission extending from two symmetric wing-like features. The plume is capped with a filamentary radio structure that has been previously classified as a radio relic. X-ray spectral fits in the region of the relic indicate the presence of either high-temperature gas or non-thermal emission, although the measured photon index is flatter than would be expected if the non-thermal emission is from inverse Compton scattering of the cosmic microwave background by the radio-emitting particles. We find evidence for a weak elliptical X-ray surface brightness edge surrounding the core, which we show is consistent with a sloshing cold front. The plume is consistent with having formed due to uplift by a buoyantly rising radio bubble, now seen as the radio relic, and has properties consistent with buoyantly lifted plumes seen in other systems (e.g., M87). Alternatively, the plume may be a gas sloshing spiral viewed edge-on. Results from spectral analysis of the wing-like features are inconsistent with the previous suggestion that the wings formed due to the passage of a weak shock through the cool core. We instead conclude that the wings are due to X-ray cavities formed by displacement of X-ray gas by the radio relic. The central cD galaxy contains two small-scale cold gas clumps that are slightly offset from their optical and UV counterparts, suggestive of a galaxy-galaxy merger event. On larger scales, there is evidence for cluster substructure in both optical observations and the X-ray temperature map. We suggest that the Abell 133 cluster has recently undergone a merger event with an interloping subgroup, initiating gas sloshing in the core. The torus of sloshed gas is seen close to edge-on, leading to the somewhat ragged appearance of the elliptical surface brightness edge. We show
NASA Astrophysics Data System (ADS)
Edwards, L. O. V.; Alpert, H. S.; Trierweiler, I. L.; Abraham, T.; Beizer, V. G.
2016-09-01
We present the first results from an integral field unit (IFU) spectroscopic survey of a ~75 kpc region around three brightest cluster galaxies (BCGs), combining over 100 IFU fibres to study the intracluster light (ICL). We fit population synthesis models to estimate age and metallicity. For Abell 85 and Abell 2457, the ICL is best fit with a fraction of old, metal-rich stars like those in the BCG, but requires 30-50 per cent young and metal-poor stars, a component not found in the BCGs. This is consistent with the ICL having been formed by a combination of interactions with less massive, younger, more metal-poor cluster members in addition to stars that form the BCG. We find that the three galaxies are in different stages of evolution and may be the result of different formation mechanisms. The BCG in Abell 85 is near a relatively young, metal-poor galaxy, but the dynamical friction time-scale is long and the two are unlikely to be undergoing a merger. The outer regions of Abell 2457 show a higher relative fraction of metal-poor stars, and we find one companion, with a higher fraction of young, metal-poor stars than the BCG, which is likely to merge within a gigayear. Several luminous red galaxies are found at the centre of the cluster IIZw108, with short merger time-scales, suggesting that the system is about to embark on a series of major mergers to build up a dominant BCG. The young, metal-poor component found in the ICL is not found in the merging galaxies.
NASA Astrophysics Data System (ADS)
Gentry, R. W.
2002-12-01
The Shelby Farms test site in Shelby County, Tennessee is being developed to better understand recharge hydraulics to the Memphis aquifer in areas where leakage through an overlying aquitard occurs. The site is unique in that it demonstrates many opportunities for interdisciplinary research regarding environmental tracers, anthropogenic impacts and inverse modeling. The objective of the research funding the development of the test site is to better understand the groundwater hydrology and hydraulics between a shallow alluvial aquifer and the Memphis aquifer given an area of leakage, defined as an aquitard window. The site is situated in an area on the boundary of a highly developed urban area and is currently being used by an agricultural research agency and a local recreational park authority. Also, an abandoned landfill is situated to the immediate south of the window location. Previous research by the USGS determined the location of the aquitard window subsequent to the landfill closure. Inverse modeling using a genetic algorithm approach has identified the likely extents of the area of the window given an interaquifer accretion rate. These results, coupled with additional fieldwork, have been used to guide the direction of the field studies and the overall design of the research project. This additional work has encompassed the drilling of additional monitoring wells in nested groups by rotasonic drilling methods. The core collected during the drilling will provide additional constraints to the physics of the problem that may provide additional help in redefining the conceptual model. The problem is non-unique with respect to the leakage area and accretion rate and further research is being performed to provide some idea of the advective flow paths using a combination of tritium and 3He analyses and geochemistry. The outcomes of the research will result in a set of benchmark data and physical infrastructure that can be used to evaluate other environmental
The Distribution of Dark and Luminous Matter in the Galaxy Cluster Merger Abell 2146
NASA Astrophysics Data System (ADS)
King, Lindsay; Clowe, Douglas; Coleman, Joseph E.; Russell, Helen; Santana, Rebecca; White, Jacob; Canning, Rebecca; Deering, Nicole; Fabian, Andrew C.; Lee, Brandyn; Li, Baojiu; McNamara, Brian R.
2017-01-01
Abell 2146 (z = 0.232) consists of two galaxy clusters undergoing a major merger, presenting two large shock fronts on Chandra X-ray Observatory maps. These observations are consistent with a collision close to the plane of the sky, caught soon after first core passage. Here we outline the weak gravitational lensing analysis of the total mass in the system, using the distorted shapes of distant galaxies seen with Hubble Space Telescope. The highest peak in the mass reconstruction is centred on the brightest cluster galaxy in Abell 2146-A. The mass associated with Abell 2146-B is more extended. The best-fitting mass model with two components has a mass ratio of ~3:1 for the two clusters. From the weak lensing analysis, Abell 2146-A is the primary halo component, and the origin of the apparent discrepancy with the X-ray analysis where Abell 2146-B is the primary halo will be discussed.
NASA Astrophysics Data System (ADS)
Ansari, Hamid Reza
2014-09-01
In this paper we propose a new method for predicting rock porosity based on a combination of several artificial intelligence systems. The method focuses on one of the Iranian carbonate fields in the Persian Gulf. Because there is strong heterogeneity in carbonate formations, estimating rock properties there is more challenging than in sandstone. For this purpose, seismic colored inversion (SCI) and a new approach to committee machines are used in order to improve porosity estimation. The study comprises three major steps. First, a series of sample-based attributes is calculated from the 3D seismic volume. Acoustic impedance is an important attribute that is obtained by the SCI method in this study. Second, the porosity log is predicted from seismic attributes using common intelligent computation systems including: probabilistic neural network (PNN), radial basis function network (RBFN), multi-layer feed forward network (MLFN), ε-support vector regression (ε-SVR) and adaptive neuro-fuzzy inference system (ANFIS). Finally, a power law committee machine (PLCM) is constructed based on the imperialist competitive algorithm (ICA) to combine the results of all previous predictions in a single solution. This technique is called PLCM-ICA in this paper. The results show that the PLCM-ICA model improved on the results of the neural networks, support vector machine and neuro-fuzzy system.
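The power-law committee idea, combining expert predictions as a weighted sum of powers, can be sketched as follows; the data are synthetic and a generic Nelder-Mead optimiser stands in for the imperialist competitive algorithm:

```python
import numpy as np
from scipy.optimize import minimize

# Combine expert predictions p_i as y = sum_i w_i * p_i^a_i.
# Synthetic data; a generic optimiser replaces the paper's ICA.
rng = np.random.default_rng(3)
y_true = rng.uniform(0.05, 0.3, 200)                 # synthetic porosity
experts = np.stack([np.clip(y_true + rng.normal(0, s, 200), 1e-3, None)
                    for s in (0.02, 0.03, 0.04)])    # three noisy predictors

def loss(theta):
    w, a = theta[:3], theta[3:]
    combo = (w[:, None] * experts ** a[:, None]).sum(axis=0)
    return np.mean((combo - y_true) ** 2)

res = minimize(loss, x0=[1/3, 1/3, 1/3, 1.0, 1.0, 1.0], method="Nelder-Mead")
w, a = res.x[:3], res.x[3:]
combo = (w[:, None] * experts ** a[:, None]).sum(axis=0)
combo_mse = np.mean((combo - y_true) ** 2)
print(combo_mse, [np.mean((e - y_true) ** 2) for e in experts])
```

Because the experts' errors are partly independent, a tuned combination can achieve a lower error than any single expert, which is the rationale for the committee machine.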
The discovery of diffuse steep spectrum sources in Abell 2256
NASA Astrophysics Data System (ADS)
van Weeren, R. J.; Intema, H. T.; Oonk, J. B. R.; Röttgering, H. J. A.; Clarke, T. E.
2009-12-01
Context: Hierarchical galaxy formation models indicate that during their lifetime galaxy clusters undergo several mergers. An example of such a merging cluster is Abell 2256. Here we report on the discovery of three diffuse radio sources in the periphery of Abell 2256, using the Giant Metrewave Radio Telescope (GMRT). Aims: The aim of the observations was to search for diffuse ultra-steep spectrum radio sources within the galaxy cluster Abell 2256. Methods: We have carried out GMRT 325 MHz radio continuum observations of Abell 2256. V, R and I band images of the cluster were taken with the 4.2 m William Herschel Telescope (WHT). Results: We have discovered three diffuse elongated radio sources located about 1 Mpc from the cluster center. Two are located to the west of the cluster center, and one to the southeast. The sources have a measured physical extent of 170, 140 and 240 kpc, respectively. The two western sources are also visible in deep low-resolution 115-165 MHz Westerbork Synthesis Radio Telescope (WSRT) images, although they are blended into a single source. For the combined emission of the blended source we find an extreme spectral index (α) of -2.05 ± 0.14 between 140 and 351 MHz. The extremely steep spectral index suggests these two sources are most likely the result of adiabatic compression of fossil radio plasma due to merger shocks. For the source to the southeast, we find that α < -1.45 between 325 and 1369 MHz. We did not find any clear optical counterparts to the radio sources in the WHT images. Conclusions: The discovery of the steep spectrum sources implies the existence of a population of faint diffuse radio sources in (merging) clusters with spectra so steep that they have gone unnoticed in higher frequency (⪆1 GHz) observations. Simply considering the timescales related to the AGN activity, synchrotron losses, and the presence of shocks, we find that most massive clusters should possess similar sources. An exciting possibility
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2 year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in troposphere delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations is formulated for each spatial point that are functions of the deformation velocities
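The core inversion step, solving a stack of interferogram phase differences for a displacement time series via a generalized (SVD-based) inverse, can be sketched for one pixel; the dates, pairs and noise level below are made up, and the DEM-error and smoothing terms are omitted:

```python
import numpy as np

# Invert a stack of interferogram measurements (here in cm) for a
# displacement time series at one pixel. Dates, pairs and noise are made up.
t = np.array([0., 24., 48., 72., 96.]) / 365.25      # acquisition epochs (yr)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
true_disp = np.array([0.0, -1.2, -2.1, -3.5, -4.0])  # subsidence (cm)

rng = np.random.default_rng(1)
obs = np.array([true_disp[j] - true_disp[i] for i, j in pairs])
obs += rng.normal(0, 0.05, len(obs))                 # interferogram noise

dt = np.diff(t)                                      # intervals between epochs
A = np.zeros((len(pairs), len(dt)))                  # design matrix
for row, (i, j) in enumerate(pairs):
    A[row, i:j] = dt[i:j]                            # intervals each pair spans

v = np.linalg.pinv(A) @ obs                          # SVD-based least squares
disp = np.concatenate([[0.0], np.cumsum(v * dt)])    # integrate velocities
print(np.round(disp, 2))
```

The unknowns are interval velocities; `pinv` applies the minimum-norm least-squares (SVD) solution, which is what keeps disconnected interferogram subsets from making the system singular in the full SBAS formulation.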
Disentangling the ICL with the CHEFs: Abell 2744 as a Case Study
NASA Astrophysics Data System (ADS)
Jiménez-Teja, Y.; Dupke, R.
2016-03-01
Measurements of the intracluster light (ICL) are still prone to methodological ambiguities, and there are multiple techniques in the literature to address them, mostly based on the binding energy, the local density distribution, or the surface brightness. A common issue with these methods is the a priori assumption of a number of hypotheses on either the ICL morphology, its surface brightness level, or some properties of the brightest cluster galaxy (BCG). The discrepancy in the results is high, and numerical simulations just place a boundary on the ICL fraction in present-day galaxy clusters in the range 10%-50%. We developed a new algorithm based on the Chebyshev-Fourier functions to estimate the ICL fraction without relying on any a priori assumption about the physical or geometrical characteristics of the ICL. We are able to not only disentangle the ICL from the galactic luminosity but mark out the limits of the BCG from the ICL in a natural way. We test our technique with the recently released data of the cluster Abell 2744, observed by the Frontier Fields program. The complexity of this multiple merging cluster system and the formidable depth of these images make it a challenging test case to prove the efficiency of our algorithm. We found a final ICL fraction of 19.17 ± 2.87%, which is very consistent with numerical simulations.
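The full Chebyshev-Fourier (CHEF) basis is beyond a short sketch, but the underlying idea, expanding a light profile in an orthogonal basis, can be illustrated with the 1D Chebyshev piece on a synthetic profile:

```python
import numpy as np

# Fit a smooth radial light profile with a Chebyshev series, the 1D
# analogue of the CHEF (Chebyshev rational x Fourier) decomposition.
r = np.linspace(0.0, 5.0, 200)
profile = np.exp(-r)                          # exponential-disc-like profile
x = 2 * r / r.max() - 1                       # map radius onto [-1, 1]
coeffs = np.polynomial.chebyshev.chebfit(x, profile, deg=10)
fit = np.polynomial.chebyshev.chebval(x, coeffs)
print(np.max(np.abs(fit - profile)))          # small residual
```

A compact set of coefficients captures the smooth profile; in the CHEF method the galaxy and BCG light are represented this way, and the residual extended component is attributed to the ICL.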
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
Hierarchical Velocity Structure in the Core of Abell 2597
NASA Technical Reports Server (NTRS)
Still, Martin; Mushotzky, Richard
2004-01-01
We present XMM-Newton RGS and EPIC data of the putative cooling flow cluster Abell 2597. Velocities of the low-ionization emission lines in the spectrum are blue shifted with respect to the high-ionization lines by 1320 (sup +660) (sub -210) kilometers per second, which is consistent with the difference in the two peaks of the galaxy velocity distribution and may be the signature of bulk turbulence, infall, rotation or damped oscillation in the cluster. A hierarchical velocity structure such as this could be the direct result of galaxy mergers in the cluster core, or the injection of power into the cluster gas from a central engine. The uniform X-ray morphology of the cluster, the absence of fine scale temperature structure and the random distribution of the galaxy positions, independent of velocity, suggests that our line of sight is close to the direction of motion. These results have strong implications for cooling flow models of the cluster Abell 2597. They give impetus to those models which account for the observed temperature structure of some clusters using mergers instead of cooling flows.
The Noble-Abel Stiffened-Gas equation of state
NASA Astrophysics Data System (ADS)
Le Métayer, Olivier; Saurel, Richard
2016-04-01
Hyperbolic two-phase flow models have shown excellent ability for the resolution of a wide range of applications ranging from interfacial flows to fluid mixtures with several velocities. These models account for wave propagation (acoustic and convective) and consist of hyperbolic systems of partial differential equations. In this context, each phase is compressible and needs an appropriate convex equation of state (EOS). The EOS must be simple enough for intensive computations as well as boundary condition treatment. It must also be accurate, which is challenging to reconcile with simplicity. In the present approach, each fluid is governed by a novel EOS named "Noble-Abel stiffened gas," this formulation being a significant improvement over the popular "Stiffened Gas (SG)" EOS. It is a combination of the so-called "Noble-Abel" and "stiffened gas" equations of state, adding repulsive effects to the SG formulation. The determination of the various thermodynamic functions and associated coefficients is the aim of this article. We first use thermodynamic considerations to determine the different state functions such as the specific internal energy, enthalpy, and entropy. Then we propose to determine the associated coefficients for a liquid in the presence of its vapor. The EOS parameters are determined from experimental saturation curves. Some examples of liquid-vapor fluids are examined and the associated parameters are computed with the help of the present method. Comparisons between analytical and experimental saturation curves show very good agreement for wide ranges of temperature for both liquid and vapor.
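For reference, the caloric and thermal relations commonly quoted for the Noble-Abel stiffened-gas EOS are sketched below (γ the adiabatic-like coefficient, p∞ the stiffening pressure, b the covolume, q the reference energy, Cv the heat capacity); the signs and notation should be checked against the article itself:

```latex
p(v, e) = \frac{(\gamma - 1)\,(e - q)}{v - b} - \gamma\, p_\infty ,
\qquad
T(p, v) = \frac{(p + p_\infty)\,(v - b)}{(\gamma - 1)\, C_v} .
```

Setting b = 0 recovers the stiffened-gas EOS, while setting p∞ = 0 recovers the Noble-Abel (covolume) gas, which is how the formulation combines repulsive and attractive corrections.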
Current methods of radio occultation data inversion
NASA Technical Reports Server (NTRS)
Kliore, A. J.
1972-01-01
The methods of Abel integral transform and ray-tracing inversion have been applied to data received from radio occultation experiments as a means of obtaining refractive index profiles of the ionospheres and atmospheres of Mars and Venus. In the case of Mars, certain simplifications are introduced by the assumption of small refractive bending in the atmosphere. General inversion methods, independent of the thin atmosphere approximation, have been used to invert the data obtained from the radio occultation of Mariner 5 by Venus; similar methods will be used to analyze data obtained from Jupiter with Pioneers F and G, as well as from the other outer planets in the Outer Planet Grand Tour Missions.
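The Abel-inversion step common to these occultation analyses can be sketched with a simple onion-peeling discretisation (piecewise-constant shells give a triangular linear system); the profile and grid below are synthetic:

```python
import numpy as np

# Onion-peeling Abel inversion: recover a radial profile f(r) from its
# line-of-sight integrals F(y) = 2 * int_y^R f(r) r dr / sqrt(r^2 - y^2),
# with f taken piecewise constant on annular shells.
n, R = 50, 1.0
edges = np.linspace(0.0, R, n + 1)
y = edges[:-1]                                # chord impact parameters

A = np.zeros((n, n))                          # chord lengths through shells
for i in range(n):
    for j in range(i, n):
        A[i, j] = 2 * (np.sqrt(edges[j + 1]**2 - y[i]**2)
                       - np.sqrt(max(edges[j]**2 - y[i]**2, 0.0)))

r_mid = 0.5 * (edges[:-1] + edges[1:])
f_true = np.exp(-8 * r_mid**2)                # synthetic radial profile
F = A @ f_true                                # forward Abel transform
f_rec = np.linalg.solve(A, F)                 # triangular back-substitution
print(np.max(np.abs(f_rec - f_true)))         # tiny residual on noiseless data
```

On noiseless data the triangular system is inverted essentially exactly; real occultation retrievals must additionally contend with noise and bending-angle geometry.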
Smoothing Technique and Variance Propagation for Abel Inversion of Scattered Data
1977-04-01
data and determination of the coefficients and transformation matrix. The bulk of this work is accomplished in SUBROUTINE COVCAL. (The remainder of this excerpt, flow charts and numerical tables, survives only as unrecoverable OCR residue.)
An inversion method for cometary atmospheres
NASA Astrophysics Data System (ADS)
Hubert, B.; Opitom, C.; Hutsemékers, D.; Jehin, E.; Munhoven, G.; Manfroid, J.; Bisikalo, D. V.; Shematovich, V. I.
2016-10-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight. This integration is the so-called Abel transform of the local emission rate. The observation is generally interpreted under the hypothesis of spherical symmetry of the coma. Under that hypothesis, the Abel transform can be inverted. We derive a numerical inversion method adapted to cometary atmospheres using both analytical results and least squares fitting techniques. This method, derived under the usual hypothesis of spherical symmetry, allows us to retrieve the radial distribution of the emission rate of any unabsorbed emission, which is the fundamental, physically meaningful quantity governing the observation. A Tikhonov regularization technique is also applied to reduce the possibly deleterious effects of the noise present in the observation and to warrant that the problem remains well posed. Standard error propagation techniques are included in order to estimate the uncertainties affecting the retrieved emission rate. Several theoretical tests of the inversion techniques are carried out to show its validity and robustness. In particular, we show that the Abel inversion of real data is only weakly sensitive to an offset applied to the input flux, which implies that the method, applied to the study of a cometary atmosphere, is only weakly dependent on uncertainties on the sky background which has to be subtracted from the raw observations of the coma. We apply the method to observations of three different comets observed using the TRAPPIST telescope: 103P/ Hartley 2, F6/ Lemmon and A1/ Siding Spring. We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the retrieved characteristic lengths can differ from those obtained from a direct least squares fitting over the observed flux of radiation, and
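A sketch of the regularisation idea on synthetic data: an onion-peeling-style forward matrix, noisy line-of-sight integrals, and a Tikhonov penalty on the second differences of the retrieved profile (the penalty operator and λ value are illustrative, not the paper's exact scheme):

```python
import numpy as np

# Noisy line-of-sight data make plain Abel inversion unstable; Tikhonov
# regularisation minimises |A f - F|^2 + lam * |D f|^2 with D a smoothness
# (second-difference) penalty. Matrix, profile and lam are illustrative.
rng = np.random.default_rng(2)
n = 60
edges = np.linspace(0.0, 1.0, n + 1)
y = edges[:-1]
A = np.zeros((n, n))                          # onion-peeling forward matrix
for i in range(n):
    for j in range(i, n):
        A[i, j] = 2 * (np.sqrt(edges[j + 1]**2 - y[i]**2)
                       - np.sqrt(max(edges[j]**2 - y[i]**2, 0.0)))

r = 0.5 * (edges[:-1] + edges[1:])
f_true = np.exp(-10 * (r - 0.3)**2)
F = A @ f_true + rng.normal(0, 1e-3, n)       # noisy observation

D = np.diff(np.eye(n), n=2, axis=0)           # second-difference operator
lam = 1e-4
f_reg = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ F)
f_raw = np.linalg.solve(A, F)                 # unregularised, noisier
print(np.linalg.norm(f_reg - f_true), np.linalg.norm(f_raw - f_true))
```

The penalty damps the oscillatory components that the noise excites while leaving the smooth profile nearly unbiased, which is the trade-off the regularisation parameter controls.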
Brig. Gen. Richard F. Abel and Col. Nathan J. Lindsay answering questions
NASA Technical Reports Server (NTRS)
1982-01-01
Brigadier General Richard F. Abel, right, director of public affairs for the Air Force, and Colonel Nathan J. Lindsay of the USAF's space division, answer questions concerning STS-4 during a press conference at JSC on May 20, 1982.
Shocking Tails in the Major Merger Abell 2744
Owers, Matt S.; Couch, Warrick J.; Nulsen, Paul E. J.; Randall, Scott W.
2012-05-01
We identify four rare 'jellyfish' galaxies in Hubble Space Telescope imagery of the major merger cluster Abell 2744. These galaxies harbor trails of star-forming knots and filaments which have formed in situ in gas tails stripped from the parent galaxies, indicating they are in the process of being transformed by the environment. Further evidence for rapid transformation in these galaxies comes from their optical spectra, which reveal starburst, poststarburst, and active galactic nucleus features. Most intriguingly, three of the jellyfish galaxies lie near intracluster medium features associated with a merging 'Bullet-like' subcluster and its shock front detected in Chandra X-ray images. We suggest that the high-pressure merger environment may be responsible for the star formation in the gaseous tails. This provides observational evidence for the rapid transformation of galaxies during the violent core passage phase of a major cluster merger.
Giant ringlike radio structures around galaxy cluster Abell 3376.
Bagchi, Joydeep; Durret, Florence; Neto, Gastão B Lima; Paul, Surajit
2006-11-03
In the current paradigm of cold dark matter cosmology, large-scale structures are assembling through hierarchical clustering of matter. In this process, an important role is played by megaparsec (Mpc)-scale cosmic shock waves, arising in gravity-driven supersonic flows of intergalactic matter onto dark matter-dominated collapsing structures such as pancakes, filaments, and clusters of galaxies. Here, we report Very Large Array telescope observations of giant (approximately 2 Mpc by 1.6 Mpc), ring-shaped nonthermal radio-emitting structures, found at the outskirts of the rich cluster of galaxies Abell 3376. These structures may trace the elusive shock waves of cosmological large-scale matter flows, which are energetic enough to power them. These radio sources may also be the acceleration sites where magnetic shocks are possibly boosting cosmic-ray particles with energies of up to 10^18 to 10^19 electron volts.
Ram pressure induced star formation in Abell 3266
NASA Astrophysics Data System (ADS)
Bonsall, Brittany
An X-ray observation of the merging galaxy cluster Abell 3266 was obtained with the ROSAT PSPC. This information, along with spectroscopic data from the WIde-field Nearby Galaxy-clusters Survey (WINGS), was used to investigate whether ram pressure is a mechanism that influences star formation. Galaxies exhibiting ongoing star formation are identified by the presence of strong Balmer lines (Hbeta), known to correspond to early type stars. Older galaxies where a rapid increase in star formation has recently ceased, known as E+A galaxies, are identified by strong Hbeta absorption coupled with little to no [OII] emission. The correlation between recent star formation and "high" ram pressure, defined by Kapferer et al. (2009) as ≥ 5 × 10^-11 dyn cm^-2, was tested and led to a contradiction of the previously held belief that ram pressure influences star formation on the global cluster scale.
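The "high" ram-pressure criterion can be evaluated with the standard Gunn & Gott (1972) estimate P_ram = ρ_ICM v². The sketch below uses illustrative ICM electron density and galaxy speed, and an assumed mean mass per electron μ_e ≈ 1.17; none of these numbers come from the Abell 3266 analysis:

```python
# Ram pressure on a galaxy moving through the intracluster medium:
# P_ram = rho_icm * v^2 (Gunn & Gott 1972).  Numbers are illustrative only.
M_P = 1.6726e-24          # proton mass [g]
MU_E = 1.17               # assumed mean mass per electron, in proton masses

def ram_pressure(n_e, v_kms):
    """n_e: ICM electron density [cm^-3]; v_kms: galaxy speed [km/s].
    Returns P_ram in dyn cm^-2 (cgs units)."""
    rho = MU_E * M_P * n_e          # gas mass density [g cm^-3]
    v = v_kms * 1.0e5               # km/s -> cm/s
    return rho * v * v

# A galaxy at 1500 km/s in gas of n_e = 1e-3 cm^-3 sits near the
# 5e-11 dyn cm^-2 threshold quoted above.
p = ram_pressure(n_e=1e-3, v_kms=1500.0)
print(f"P_ram = {p:.2e} dyn cm^-2")
```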
ABEL description and implementation of cyber net system
NASA Astrophysics Data System (ADS)
Lu, Jiyuan; Jing, Liang
2013-03-01
Cyber net system is a subclass of Petri nets. It has more powerful descriptive capability and more complex properties than the P/T system. Because of its nonlinear relations, the analysis techniques of other net systems cannot be applied to it directly, which has hindered research on cyber net systems. In this paper, the author uses a hardware description language to describe the cyber net system. Simulation analysis is carried out with EDA software tools to reveal properties of the system. The method is illustrated in detail with a cyber net system model that computes the Fibonacci series. The ABEL source code and simulation waveforms are also presented. The source code is compiled, optimized, fitted and downloaded to a Programmable Logic Device, yielding an ASIC that computes the Fibonacci series. This opens a new path for the analysis and application study of cyber net systems.
NASA Astrophysics Data System (ADS)
Razali, Azhani Mohd; Abdullah, Jaafar
2015-04-01
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image depends largely on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, than the Expectation Maximization algorithm.
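The Expectation Maximization algorithm used in emission tomography is typically the multiplicative MLEM update f ← f · Aᵀ(g / Af) / Aᵀ1. The sketch below applies it to a toy 2×2 image with row and column projections; the system matrix and data are invented for illustration, and the paper's Exact Inversion Formula is not reproduced here:

```python
import numpy as np

def mlem(A, g, n_iter=200):
    """Maximum-likelihood expectation-maximization (MLEM) reconstruction:
    f <- f * A^T(g / Af) / A^T 1, starting from a uniform positive image."""
    sens = A.T @ np.ones(A.shape[0])            # sensitivity normalization
    f = np.ones(A.shape[1])
    for _ in range(n_iter):
        proj = A @ f
        proj = np.where(proj > 0, proj, 1e-12)  # guard against divide-by-zero
        f *= (A.T @ (g / proj)) / sens
    return f

# Toy example: a flattened 2x2 image [f1, f2, f3, f4], measured by
# row sums and column sums.
A = np.array([[1., 1., 0., 0.],    # row 1 sum
              [0., 0., 1., 1.],    # row 2 sum
              [1., 0., 1., 0.],    # column 1 sum
              [0., 1., 0., 1.]])   # column 2 sum
f_true = np.array([4., 1., 2., 3.])
g = A @ f_true                      # noiseless projection data
f_rec = mlem(A, g)
print(f_rec)
```

Note that this toy system is rank-deficient, so MLEM converges to *an* exact fit of the projections rather than to `f_true` itself; the guaranteed property is that the reconstructed image is non-negative and reproduces the data.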
The distribution of dark and luminous matter in the unique galaxy cluster merger Abell 2146
NASA Astrophysics Data System (ADS)
King, Lindsay J.; Clowe, Douglas I.; Coleman, Joseph E.; Russell, Helen R.; Santana, Rebecca; White, Jacob A.; Canning, Rebecca E. A.; Deering, Nicole J.; Fabian, Andrew C.; Lee, Brandyn E.; Li, Baojiu; McNamara, Brian R.
2016-06-01
Abell 2146 (z = 0.232) consists of two galaxy clusters undergoing a major merger. The system was discovered in previous work, where two large shock fronts were detected using the Chandra X-ray Observatory, consistent with a merger close to the plane of the sky, caught soon after first core passage. A weak gravitational lensing analysis of the total gravitating mass in the system, using the distorted shapes of distant galaxies seen with Advanced Camera for Surveys - Wide Field Channel on Hubble Space Telescope, is presented. The highest peak in the reconstruction of the projected mass is centred on the brightest cluster galaxy (BCG) in Abell 2146-A. The mass associated with Abell 2146-B is more extended. Bootstrapped noise mass reconstructions show the mass peak in Abell 2146-A to be consistently centred on the BCG. Previous work showed that BCG-A appears to lag behind an X-ray cool core; although the peak of the mass reconstruction is centred on the BCG, it is also consistent with the X-ray peak given the resolution of the weak lensing mass map. The best-fitting mass model with two components centred on the BCGs yields M200 = 1.1^{+0.3}_{-0.4} × 10^15 and 3^{+1}_{-2} × 10^14 M⊙ for Abell 2146-A and Abell 2146-B, respectively, assuming a mass concentration parameter of c = 3.5 for each cluster. From the weak lensing analysis, Abell 2146-A is the primary halo component, and the origin of the apparent discrepancy with the X-ray analysis where Abell 2146-B is the primary halo is being assessed using simulations of the merger.
GHRS observations of mass-loaded flows in Abell 78
NASA Technical Reports Server (NTRS)
Harrington, J. Patrick; Borkowski, Kazimierz J.; Tsvetanov, Zlatan
1995-01-01
Spectroscopic observations of the central star of the planetary nebula Abell 78 were obtained with the Goddard High Resolution Spectrograph (GHRS) onboard the Hubble Space Telescope (HST) in the vicinity of the C IV lambda 1548.2, 1550.8 doublet. We find a series of narrow absorption features superposed on the broad, P Cygni stellar wind profile. These features are seen in both components of the doublet at heliocentric radial velocities of -18, -71, -131, and -192 km/s. At higher velocities, individual components are no longer distinct but, rather, merge into a continuous absorption extending to approximately -385 km/s. This is among the highest velocities ever detected for gas in a planetary nebula. The -18 km/s feature originates in an outer envelope of normal composition, while the -71 km/s feature is produced in the wind-swept shell encircling an irregular wind-blown bubble in the planetary nebula center. The hydrogen-poor ejecta of Abell 78, consisting of dense knots with wind-blown tails, are located in the bubble's interior, in the vicinity of the stellar wind termination shock. The high-velocity C IV absorption features can be explained as due to parcels of ejecta being accelerated to high velocities as they are swept up by the stellar wind during its interaction with dense condensations of H-poor ejecta. As the ablated material is accelerated, it will partially mix with the stellar wind, creating a mass-loaded flow. The abundance anomalies seen at the rim of the bubble attest to the transport of H-poor knot material by such a flow.
Matvienko, G G; Oshlakov, V K; Sukhanov, A Ya; Stepanov, A N
2015-02-28
We consider algorithms that implement broadband ('multiwave') radiative transfer with allowance for multiple (aerosol) scattering and absorption by the main atmospheric gases. In the spectral range of 0.6 – 1 μm, a closed numerical simulation of modifications of the supercontinuum component of a probing femtosecond pulse is performed. In the framework of the algorithms for solving inverse atmospheric-optics problems with the help of a genetic algorithm, we give an interpretation of the experimental backscattered spectrum of the supercontinuum. An adequate reconstruction of the distribution mode for the particles of artificial aerosol with narrow-modal distributions in a size range of 0.5 – 2 μm and a step of 0.5 μm is obtained. (light scattering)
Dirken, J J; Vlaanderen, W
1994-01-01
Inversion of the uterus is a rare complication of childbirth. A primigravida aged 21 and a multigravida aged 32, hospitalized as emergency cases because of inversion of the uterus with major blood loss, were treated with fluid infusion (to combat shock), repositioning of the uterus under anaesthesia, and uterine tonics to prevent reinversion. Inversion of the uterus should be part of the differential diagnosis in every case of postpartum haemorrhage (fluxus post partum).
Mass, velocity anisotropy, and pseudo phase-space density profiles of Abell 2142
NASA Astrophysics Data System (ADS)
Munari, E.; Biviano, A.; Mamon, G. A.
2014-06-01
Aims: We aim to compute the mass and velocity anisotropy profiles of Abell 2142 and, from there, the pseudo phase-space density profile Q(r) and the density slope - velocity anisotropy β - γ relation, and then to compare them with theoretical expectations. Methods: The mass profiles were obtained by using three techniques based on member galaxy kinematics, namely the caustic method, the method of dispersion-kurtosis, and MAMPOSSt. Through the inversion of the Jeans equation, it was possible to compute the velocity anisotropy profiles. Results: The mass profiles, as well as the virial values of mass and radius, computed with the different techniques agree with one another and with the estimates coming from X-ray and weak lensing studies. A combined mass profile is obtained by averaging the lensing, X-ray, and kinematics determinations. The cluster mass profile is well fitted by an NFW profile with c = 4.0 ± 0.5. The populations of red and blue galaxies appear to have different velocity anisotropy configurations: red galaxies are almost isotropic, while blue galaxies are radially anisotropic, with a weak dependence on radius. The Q(r) profile for the red galaxy population agrees with the theoretical results found in cosmological simulations, suggesting that any bias, relative to the dark matter particles, in velocity dispersion of the red component is independent of radius. The β - γ relation for red galaxies matches the theoretical relation only in the inner region. The deviations might be due to the use of galaxies as tracers of the gravitational potential, unlike the non-collisional tracer used in the theoretical relation.
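The NFW fit quoted above (c = 4.0 ± 0.5) has a closed-form enclosed mass, M(r) = M200 · m(r/rs)/m(c) with m(x) = ln(1+x) − x/(1+x) and rs = r200/c, which is what such profile fits evaluate. A minimal sketch using the quoted concentration but illustrative (not the paper's) M200 and r200 values:

```python
import numpy as np

def nfw_enclosed_mass(r, m200, r200, c):
    """Mass enclosed within radius r for an NFW profile of concentration c,
    normalized so that M(r200) = m200.  Units follow those of the inputs."""
    m = lambda x: np.log(1.0 + x) - x / (1.0 + x)   # NFW mass function
    rs = r200 / c                                    # scale radius
    return m200 * m(r / rs) / m(c)

# Illustrative numbers only (hypothetical, not the Abell 2142 fit):
# m200 = 1.25e15 Msun, r200 = 2.16 Mpc, with the quoted c = 4.0.
m_half = nfw_enclosed_mass(1.08, 1.25e15, 2.16, 4.0)
print(f"M(<r200/2) = {m_half:.2e} Msun")
```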
An optical view of the filament region of Abell 85
NASA Astrophysics Data System (ADS)
Boué, G.; Durret, F.; Adami, C.; Mamon, G. A.; Ilbert, O.; Cayatte, V.
2008-10-01
Aims: We present an optical investigation of the Abell 85 cluster filament (z = 0.055) previously interpreted in X-rays as groups falling on to the main cluster. We compare the distribution of galaxies with the X-ray filament, and investigate the galaxy luminosity functions in several bands and in several regions. We search for galaxies where star formation may have been triggered by interactions with intracluster gas or tidal pressure due to the cluster potential when entering the cluster. Methods: Our analysis is based on images covering the South tip of Abell 85 and its infalling filament, obtained with CFHT MegaPrime/MegaCam (1×1 deg² field) in four bands (u^*, g', r', i') and ESO 2.2 m WFI (38×36 arcmin² field) in a narrow band filter corresponding to the redshifted Hα line and in an RC broad band filter. The LFs are estimated by statistically subtracting a reference field. Background contamination is minimized by cutting out galaxies redder than the observed red sequence in the g'-i' versus i' colour-magnitude diagram. Results: The galaxy distribution shows a significantly flattened cluster, whose principal axis is slightly offset from the X-ray filament. The analysis of the broad band galaxy luminosity functions shows that the filament region is well populated. The filament is also independently detected as a gravitationally bound structure by the Serna & Gerbal (1996, A&A, 309, 65) hierarchical method. 101 galaxies are detected in the Hα filter, among which 23 have spectroscopic redshifts in the cluster, 2 have spectroscopic redshifts higher than the cluster and 58 have photometric redshifts that tend to indicate that they are background objects. One galaxy that is not detected in the Hα filter, probably because of the filter's low-wavelength cutoff, but shows Hα emission in the cluster redshift range in its SDSS spectrum, has been added to our sample. The 24 galaxies with spectroscopic redshifts in the cluster are mostly concentrated in the South part of the
The merging cluster of galaxies Abell 3376: an optical view
NASA Astrophysics Data System (ADS)
Durret, F.; Perrot, C.; Lima Neto, G. B.; Adami, C.; Bertin, E.; Bagchi, J.
2013-12-01
Context. The cluster Abell 3376 is a merging cluster of galaxies at redshift z = 0.046. It is famous mostly for its giant radio arcs, and shows an elongated and highly substructured X-ray emission, but has not been analysed in detail at optical wavelengths. Aims: To improve our understanding of the effects of the major cluster merger on the galaxy properties, we analyse the galaxy luminosity function (GLF) in the B band in several regions as well as the dynamical properties of the substructures. Methods: We have obtained wide field images of Abell 3376 in the B band and derive the GLF applying a statistical subtraction of the background in three regions: a circle of 0.29 deg radius (1.5 Mpc) encompassing the whole cluster, and two circles centred on each of the two brightest galaxies (BCG2, northeast, coinciding with the peak of X-ray emission, and BCG1, southwest) of radii 0.15 deg (0.775 Mpc). We also compute the GLF in the zone around BCG1, which is covered by the WINGS survey in the B and V bands, by selecting cluster members in the red sequence in a (B - V) versus V diagram. Finally, we discuss the dynamical characteristics of the cluster implied by an analysis based on the Serna & Gerbal (SG) method. Results: The GLFs are not well fit by a single Schechter function, but satisfactory fits are obtained by summing a Gaussian and a Schechter function. The GLF computed by selecting galaxies in the red sequence in the region surrounding BCG1 can also be fit by a Gaussian plus a Schechter function. An excess of galaxies in the brightest bins is detected in the BCG1 and BCG2 regions. The dynamical analysis based on the SG method shows the existence of a main structure of 82 galaxies that can be subdivided into two main substructures of 25 and six galaxies. A smaller structure of six galaxies is also detected. Conclusions: The B band GLFs of Abell 3376 are clearly perturbed, as already found in other merging clusters. The dynamical properties are consistent with the
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Rothe, E. D.; Neupert, W. M.
1976-01-01
Intensities of Fe XIV and Fe XIII EUV emission lines obtained at coronal locations beyond the limb by the Goddard spectroheliograph on the OSO 7 satellite have been corrected for the wavelength dependence of the instrument's sensitivity and have been Abel-inverted to provide a valid comparison with theoretical predictions for each ion. Details of the Abel-inversion procedure are given, including explicit formulas for application of Bracewell's (1956) method. The intensity ratios of pairs of lines originating from a common level are compared with expected theoretical transition probability ratios over a range of heliocentric distance; deviations in some cases yield information about adjacent unclassified lines. Comparison of the observations with predictions for Fe XIV and Fe XIII shows generally good agreement, with a few interesting discrepancies that may imply a corresponding need for more accurate collisional excitation cross sections. The same comparison yields the variation of electron density with heliocentric radius for each ion separately; the two density functions are found to agree within a factor of three.
A Study of Dwarf Galaxies in Five Rich Clusters I: Abell 1689 and Abell 1703
NASA Astrophysics Data System (ADS)
Bruursema, Justice; Riley, S.; Ford, H. C.; Zekser, K. C.; Infante, L.; Postman, M.
2008-05-01
Dwarf galaxies play an important role in understanding galactic formation, cluster dynamics, and large scale structure. Although local dwarf populations have been well studied, dwarf galaxies outside the local supercluster remain relatively unexamined. Using ACS Investigation Definition Team data, we examine the dwarf galaxy populations of A1689 (z=0.1832), A1703 (z=0.2580), A2218 (z=0.1756), CL0024+16 (z=0.395), and MS1358+62 (z=0.328). We have modeled and subtracted the light from the brighter elliptical galaxies using the XVISTA subroutine SNUC. An assumption of concentric elliptical isophotes is made and the position angle, ellipticity, and brightness are fit using a nonlinear least-squares algorithm. The subtraction of the models reveals a population of dwarf galaxies usually hidden by the light of bright ellipticals. SExtractor and Bayesian Photometric Redshifts (BPZ) are used in order to identify cluster members. With the 0.05" per pixel resolution of ACS and a completeness of mF625 = 28 we are able to identify approximately 1000 dwarf galaxy candidates, defined as MF625 > -18, in all five clusters combined. We will discuss the results of this research including, but not limited to, dwarf galaxy luminosity functions, radial distributions, and the characteristics of these dwarfs compared to those in other well studied clusters. ACS was developed under NASA contract NAS5-32865, and this research was supported by NASA grant NAG5-7697.
The merging cluster Abell 1758: an optical and dynamical view
NASA Astrophysics Data System (ADS)
Monteiro-Oliveira, Rogerio; Serra Cypriano, Eduardo; Machado, Rubens; Lima Neto, Gastao B.
2015-08-01
The galaxy cluster Abell 1758-North (z=0.28) is a binary system composed of the sub-structures NW and NE. It is thought to be a post-merger cluster because of the observed detachment between the NE BCG and the corresponding X-ray emitting hot gas clump, in a scenario very similar to the famous Bullet Cluster. On the other hand, the projected position of the NW BCG coincides with the local hot gas peak. This system has been targeted previously by several studies, using multiple wavelengths and techniques, but there is still no clear picture of the scenario that could have caused this unusual configuration. To help solve this complex puzzle we added some pieces: first, we used deep B, RC and z' Subaru images to perform both weak lensing shear and magnification analyses of A1758 (including the South component, which is not interacting with A1758-North), modelling each sub-clump as an NFW profile in order to constrain the masses and centre positions through MCMC methods; second, we performed a dynamical analysis using radial velocities available in the literature (143) plus new Gemini-GMOS/N measurements (68 new redshifts). From weak lensing we found that the independent shear and magnification mass determinations are in excellent agreement, and by combining both we could reduce the mass error bars by ~30% compared to shear alone. Combining these two weak-lensing probes, we found that the positions of both Northern BCGs are consistent with the mass centres within 2σ, and that the NE hot gas peak is offset from the respective mass peak (M200 = 5.5 × 10^14 M⊙) with very high significance. The most massive structure is NW (M200 = 7.95 × 10^14 M⊙), where we observed no detachment between gas, DM and BCG. We calculated a low line-of-sight velocity difference (<300 km/s) between A1758 NW and NE. We combined it with the projected velocity of 1600 km/s estimated by a previous X-ray analysis (David & Kempner 2004) and obtained a small angle between
The planetary nebula Abell 48 and its [WN] nucleus
NASA Astrophysics Data System (ADS)
Frew, David J.; Bojičić, I. S.; Parker, Q. A.; Stupar, M.; Wachter, S.; DePew, K.; Danehkar, A.; Fitzgerald, M. T.; Douchin, D.
2014-05-01
We have conducted a detailed multi-wavelength study of the peculiar nebula Abell 48 and its central star. We classify the nucleus as a helium-rich, hydrogen-deficient star of type [WN4-5]. The evidence for either a massive WN or a low-mass [WN] interpretation is critically examined, and we firmly conclude that Abell 48 is a planetary nebula (PN) around an evolved low-mass star, rather than a Population I ejecta nebula. Importantly, the surrounding nebula has a morphology typical of PNe, and is not enriched in nitrogen, and thus not the `peeled atmosphere' of a massive star. We estimate a distance of 1.6 kpc and a reddening, E(B - V) = 1.90 mag, the latter value clearly showing the nebula lies on the near side of the Galactic bar, and cannot be a massive WN star. The ionized mass (˜0.3 M⊙) and electron density (700 cm-3) are typical of middle-aged PNe. The observed stellar spectrum was compared to a grid of models from the Potsdam Wolf-Rayet (PoWR) grid. The best-fitting temperature is 71 kK, and the atmospheric composition is dominated by helium with an upper limit on the hydrogen abundance of 10 per cent. Our results are in very good agreement with the recent study of Todt et al., who determined a hydrogen fraction of 10 per cent and an unusually large nitrogen fraction of ˜5 per cent. This fraction is higher than any other low-mass H-deficient star, and is not readily explained by current post-AGB models. We give a discussion of the implications of this discovery for the late-stage evolution of intermediate-mass stars. There is now tentative evidence for two distinct helium-dominated post-AGB lineages, separate to the helium- and carbon-dominated surface compositions produced by a late thermal pulse. Further theoretical work is needed to explain these recent discoveries.
The Sunyaev-Zeldovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Cooray, Asantha R.; Holzappel, William L.
2000-01-01
We present interferometric measurements of the Sunyaev-Zeldovich (SZ) effect toward the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas distribution to be strongly aspherical, as do the X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction in two ways. We first compare the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deprojecting the three-dimensional gas density distribution and deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods and find that they agree within the errors of the measurement. We discuss the possible systematic errors in the gas mass fraction measurement and the constraints it places on the matter density parameter, Ω_M.
Abell 1201: A Minor Merger at Second Core Passage
Ma Chengjiun; Nulsen, Paul E. J.; McNamara, Brian R.; Murray, Stephen S.; Owers, Matt; Couch, Warrick J.
2012-06-20
We present an analysis of the structures and dynamics of the merging cluster Abell 1201, which has two sloshing cold fronts around a cooling core, and an offset gas core approximately 500 kpc northwest of the center. New Chandra and XMM-Newton data reveal a region of enhanced brightness east of the offset core, with breaks in surface brightness along its boundary to the north and east. This is interpreted as a tail of gas stripped from the offset core. Gas in the offset core and the tail is distinguished from other gas at the same distance from the cluster center chiefly by having higher density, hence lower entropy. In addition, the offset core shows marginally lower temperature and metallicity than the surrounding area. The metallicity in the cool core is high and there is an abrupt drop in metallicity across the southern cold front. We interpret the observed properties of the system, including the placement of the cold fronts, the offset core, and its tail in terms of a simple merger scenario. The offset core is the remnant of a merging subcluster, which first passed pericenter southeast of the center of the primary cluster and is now close to its second pericenter passage, moving at ≈1000 km s^-1. Sloshing excited by the merger gave rise to the two cold fronts and the disposition of the cold fronts reveals that we view the merger from close to the plane of the orbit of the offset core.
A series of shocks and edges in Abell 2219
NASA Astrophysics Data System (ADS)
Canning, R. E. A.; Allen, S. W.; Applegate, D. E.; Kelly, P. L.; von der Linden, A.; Mantz, A.; Million, E.; Morris, R. G.; Russell, H. R.
2017-01-01
We present deep, 170 ks, Chandra X-ray observations of Abell 2219 (z = 0.23), one of the hottest and most X-ray luminous clusters known, and which is experiencing a major merger event. We discover a `horseshoe' of high-temperature gas surrounding the ram-pressure-stripped, bright, hot, X-ray cores. We confirm an X-ray shock front located north-west of the X-ray centroid and along the projected merger axis. We also find a second shock front to the south-east of the X-ray centroid making this only the second cluster where both the shock and reverse shock are confirmed with X-ray temperature measurements. We also present evidence for a possible sloshing cold front in the `remnant tail' of one of the sub-cluster cores. The cold front and north-west shock front geometrically bound the radio halo and appear to be directly influencing the radio properties of the cluster.
Chandra Observations of Point Sources in Abell 2255
NASA Technical Reports Server (NTRS)
Davis, David S.; Miller, Neal A.; Mushotzky, Richard F.
2003-01-01
In our search for "hidden" AGN we present results from a Chandra observation of the nearby cluster Abell 2255. Eight cluster galaxies are associated with point-like X-ray emission, and we classify these galaxies based on their X-ray, radio, and optical properties. At least three are associated with active galactic nuclei (AGN) with no optical signatures of nuclear activity, with a further two being potential AGN. Of the potential AGN, one corresponds to a galaxy with a post-starburst optical spectrum. The remaining three X-ray detected cluster galaxies consist of two starbursts and an elliptical with luminous hot gas. Of the eight cluster galaxies five are associated with luminous (massive) galaxies and the remaining three lie in much lower luminosity systems. We note that the use of X-ray to optical flux ratios for classification of X-ray sources is often misleading, and strengthen the claim that the fraction of cluster galaxies hosting an AGN based on optical data is significantly lower than the fraction based on X-ray and radio data.
Detection of a radio bridge in Abell 3667
NASA Astrophysics Data System (ADS)
Carretti, E.; Brown, S.; Staveley-Smith, L.; Malarecki, J. M.; Bernardi, G.; Gaensler, B. M.; Haverkorn, M.; Kesteven, M. J.; Poppi, S.
2013-04-01
We have detected a radio bridge of unpolarized synchrotron emission connecting the NW relic of the galaxy cluster Abell 3667 to its central regions. We used data at 2.3 GHz from the S-band Polarization All Sky Survey and at 3.3 GHz from a follow-up observation, both conducted with the Parkes radio telescope. This emission is further aligned with a diffuse X-ray tail, and represents the most compelling evidence for an association between intracluster medium turbulence and diffuse synchrotron emission. This is the first clear detection of a bridge associated both with an outlying cluster relic and X-ray diffuse emission. All the indicators point towards the synchrotron bridge being related to the post-shock turbulent wake trailing the shock front generated by a major merger in a massive cluster. Although predicted by simulations, this is the first time such emission is detected with high significance and clearly associated with the path of a confirmed shock. Although the origin of the relativistic electrons is still unknown, the turbulent re-acceleration model provides a natural explanation for the large-scale emission. The equipartition magnetic field intensity of the bridge is Beq = 2.2 ± 0.3 μG. We further detect diffuse emission coincident with the central regions of the cluster for the first time.
A shock at the radio relic position in Abell 115
NASA Astrophysics Data System (ADS)
Botteon, A.; Gastaldello, F.; Brunetti, G.; Dallacasa, D.
2016-07-01
We analysed a deep Chandra observation (334 ks) of the galaxy cluster Abell 115 and detected a shock cospatial with the radio relic. The X-ray surface brightness profile across the shock region presents a discontinuity, corresponding to a density compression factor C = 2.0 ± 0.1 and a Mach number M = 1.7 ± 0.1 (M = 1.4-2 including systematics). Temperatures measured in the upstream and downstream regions are consistent with what is expected for such a shock: T_u = 4.3 (+1.0, -0.6) keV and T_d = 7.9 (+1.4, -1.1) keV, respectively, implying a Mach number M = 1.8 (+0.5, -0.4). So far, only a few other shocks discovered in galaxy clusters have been consistently detected from both density and temperature jumps. The spatial coincidence between this discontinuity and the radio relic edge strongly supports the view that shocks play a crucial role in powering these synchrotron sources. We suggest that the relic originates from shock re-acceleration of relativistic electrons rather than acceleration from the thermal pool. The position and curvature of the shock and the associated relic are consistent with an off-axis merger with unequal mass ratio, in which the shock is expected to bend around the core of the less massive cluster.
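The quoted jump-to-Mach conversions follow from the standard Rankine-Hugoniot conditions for a γ = 5/3 gas; a minimal sketch of that arithmetic (illustrative, not code from the paper):

```python
import math

GAMMA = 5.0 / 3.0  # monatomic intracluster gas

def mach_from_compression(C, gamma=GAMMA):
    """Mach number from the density compression factor
    C = (gamma + 1) M^2 / ((gamma - 1) M^2 + 2), inverted for M."""
    return math.sqrt(2.0 * C / (gamma + 1.0 - C * (gamma - 1.0)))

def temperature_jump(M, gamma=GAMMA):
    """Downstream/upstream temperature ratio T_d/T_u at Mach M."""
    M2 = M * M
    return ((2.0 * gamma * M2 - (gamma - 1.0))
            * ((gamma - 1.0) * M2 + 2.0)) / ((gamma + 1.0) ** 2 * M2)

# C = 2.0 gives M = sqrt(3) ≈ 1.73, and a Mach 1.8 shock predicts
# T_d/T_u ≈ 1.83, consistent with the measured 7.9/4.3 ≈ 1.84.
print(round(mach_from_compression(2.0), 2))  # → 1.73
print(round(temperature_jump(1.8), 2))       # → 1.83
```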
Shedding light on the matter of Abell 781
NASA Astrophysics Data System (ADS)
Wittman, D.; Dawson, William; Benson, Bryant
2014-02-01
The galaxy cluster Abell 781 West has been viewed as a challenge to weak gravitational lensing mass calibration, as Cook & dell'Antonio found that the weak lensing signal-to-noise ratio in three independent sets of observations was consistently lower than expected from mass models based on X-ray and dynamical measurements. We correct some errors in statistical inference in Cook & dell'Antonio and show that their own results agree well with the dynamical mass and are at most 2.2-2.9σ low compared to the X-ray mass, similar to the tension between the dynamical and X-ray masses. Replacing their simple magnitude cut with weights based on source photometric redshifts eliminates the tension between lensing and X-ray masses; in this case the weak lensing mass estimate is actually higher than, but still in agreement with, the dynamical estimate. A comparison of lensing analyses with and without photometric redshifts shows that a 1-2σ chance alignment of low-redshift sources lowers the signal-to-noise ratio observed by all previous studies that used magnitude cuts rather than photometric redshifts. The fluctuation is unexceptional, but appeared highly significant in Cook & dell'Antonio due to the errors in statistical interpretation.
The Sunyaev-Zel'dovich Effect Spectrum of Abell 2163
NASA Technical Reports Server (NTRS)
LaRoque, S. J.; Carlstrom, J. E.; Reese, E. D.; Holder, G. P.; Holzapfel, W. L.; Joy, M.; Grego, L.; Six, N. Frank (Technical Monitor)
2002-01-01
We present an interferometric measurement of the Sunyaev-Zel'dovich effect (SZE) at 1 cm for the galaxy cluster Abell 2163. We combine this data point with previous measurements at 1.1, 1.4, and 2.1 mm from the SuZIE experiment to construct the most complete SZE spectrum to date. The intensity in four wavelength bands is fit to determine the Compton y-parameter (y_0) and the peculiar velocity (v_p) for this cluster. Our results are y_0 = 3.56 (+0.41, -0.41 statistical; +0.27, -0.19 systematic) × 10^-4 and v_p = 410 (+1030, -850 statistical; +460, -440 systematic) km s^-1, at 68% confidence. These results include corrections for contamination by Galactic dust emission. We find less contamination by dust emission than previously reported. The dust emission is distributed over much larger angular scales than the cluster signal and contributes little to the measured signal when the details of the SZE observing strategy are taken into account.
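For context, the spectral fit described above combines the standard non-relativistic thermal and kinematic SZE shapes (textbook forms, not reproduced from the paper):

```latex
\Delta I(x) = I_0\left[\,y\,g(x) - \beta\,\tau\,h(x)\,\right],
\qquad x = \frac{h\nu}{k_B T_{\mathrm{CMB}}},\quad \beta = \frac{v_p}{c},
```
```latex
g(x) = \frac{x^4 e^x}{(e^x-1)^2}\left[x\coth\!\left(\frac{x}{2}\right) - 4\right],
\qquad h(x) = \frac{x^4 e^x}{(e^x-1)^2},
```

where y is the Compton parameter, τ the electron optical depth, and I_0 = 2(k_B T_CMB)^3/(hc)^2; fitting the measured intensities in the four bands constrains y_0 and v_p jointly.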
Narrow-angle tail radio sources and the distribution of galaxy orbits in Abell clusters
NASA Technical Reports Server (NTRS)
O'Dea, Christopher P.; Sarazin, Craig L.; Owen, Frazer N.
1987-01-01
The present data on the orientations of the tails with respect to the cluster centers of a sample of 70 narrow-angle-tail (NAT) radio sources in Abell clusters show the distribution of tail angles to be inconsistent with purely radial or circular orbits in all the samples, while being consistent with isotropic orbits in (1) the whole sample, (2) the sample of NATs far from the cluster center, and (3) the samples of morphologically regular Abell clusters. Evidence for very radial orbits is found, however, in the sample of NATs near the cluster center. If these results can be generalized to all cluster galaxies, then the presence of radial orbits near the center of Abell clusters suggests that violent relaxation may not have been fully effective even within the cores of the regular clusters.
Bubbles and B-Flats: A Deep Observation of Abell 2052
NASA Astrophysics Data System (ADS)
Blanton, Elizabeth
2004-09-01
The cooling flow cluster Abell 2052 has, arguably, the morphology most similar to the Perseus cluster as seen with Chandra images. Two clear bubbles to the N and S of the center of Abell 2052 are filled with the radio lobes associated with 3C 317. An unsharp-masked image reveals faint ripple features similar to those seen in the Perseus cluster which may represent the propagation of sound waves into the cluster from the radio source. We propose to observe Abell 2052 much more deeply to study the ripple features, search for ghost bubbles, search for cooling gas in the bright shells around the radio source that may link the X-ray and H-alpha emission, detect hot gas within the X-ray holes, and directly compare the star formation and cooling rates in the cluster center.
New machine-readable version of Abell catalog of clusters of galaxies
NASA Astrophysics Data System (ADS)
Kalinkov, M.; Stavrev, K. Y.; Kuneva, I. F.
An improved version of the magnetic-tape catalog of Abell and Zwicky clusters of galaxies (Kalinkov et al., 1976) is briefly characterized, with an emphasis on the distance-calibration and homogenization techniques employed in its compilation. The distance calibration is improved by performing regression analyses on clusters of known Bautz-Morgan type; parameter and standard-deviation values are presented in a table. Selection effects are investigated, and it is shown that the increase in absolute magnitude estimates with distance is less pronounced for the values based on the photored magnitude of the first-rank galaxy (Leir and van den Bergh, 1977) than for those determined by Abell (1958).
Angular cross-relations of Abell clusters in different distance classes
NASA Technical Reports Server (NTRS)
Szalay, A. S.; Hollosi, J.; Toth, G.
1989-01-01
The angular autocorrelation and cross-correlation functions of the D = 1-4, D = 5, and D = 6 distance-class Abell clusters are estimated. There is a strong anticorrelation between the most distant (D = 6) and the closest (D = 1-4) subsamples. It is suggested that this is an artifact of the cluster identification process, presumably due to the finite angular size of the clusters. This anticorrelation seems to contradict some recent estimates of projection contamination in the Abell catalog. The angular proximity of a foreground cluster may have caused a background cluster not to be counted, because it was thought to be a subcluster or was erroneously assigned to a nearer distance class.
The nearby Abell clusters. III - Luminosity functions for eight rich clusters
NASA Technical Reports Server (NTRS)
Oegerle, William R.; Hoessel, John G.
1989-01-01
Red photographic data on eight rich Abell clusters are combined with previous results on four other Abell clusters to study the luminosity functions of the clusters. The results produce a mean value of the characteristic galaxy magnitude M* that is consistent with previous results. No relation is found between the magnitude of the first-ranked cluster galaxy and M*, suggesting that the value of M* is not changed by dynamical evolution. The faint ends of the luminosity functions for many of the clusters are quite flat, validating the nonuniversality in the parametrization of Schechter (1976) functions for rich clusters of galaxies.
U(1)-invariant membranes: The geometric formulation, Abel, and pendulum differential equations
Zheltukhin, A. A.; Trzetrzelewski, M.
2010-06-15
The geometric approach to studying the dynamics of U(1)-invariant membranes is developed. The approach reveals the important role of the Abel nonlinear differential equation of the first kind, with variable coefficients depending on time and on one of the membrane extendedness parameters. The general solution of the Abel equation is constructed. Exact solutions of the whole system of membrane equations in D = 5 Minkowski space-time are found and classified. It is shown that if the radial component of the membrane world vector is only time dependent, then the dynamics is described by the pendulum equation.
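For reference, the Abel differential equation of the first kind mentioned above has the general form (a standard definition; the particular coefficient dependence on time and the extendedness parameter is specific to the membrane reduction):

```latex
y'(x) = f_3(x)\,y^3 + f_2(x)\,y^2 + f_1(x)\,y + f_0(x),
```

with variable coefficients f_i(x).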
Generalized matrix inversion is not harder than matrix multiplication
NASA Astrophysics Data System (ADS)
Petkovic, Marko D.; Stanimirovic, Predrag S.
2009-08-01
Starting from the Strassen method for rapid matrix multiplication and inversion, as well as from the recursive Cholesky factorization algorithm, we introduce a completely block-recursive algorithm for the generalized Cholesky factorization of a given symmetric, positive semi-definite matrix. We use the Strassen method for matrix inversion together with the recursive generalized Cholesky factorization method, and establish an algorithm for computing generalized {2,3} and {2,4} inverses. The introduced algorithms are computationally no harder than matrix-matrix multiplication.
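The key ingredient, Strassen-type block inversion, rests on the Schur complement identity; a minimal NumPy sketch for power-of-two sizes (an illustration of that identity, not the authors' generalized {2,3}/{2,4} algorithm):

```python
import numpy as np

def strassen_inverse(M):
    """Recursive block inversion via the Schur complement, the core
    of Strassen-type inversion. Sketch only: assumes M is square
    with power-of-two size and invertible leading principal blocks
    (guaranteed for symmetric positive definite M)."""
    n = M.shape[0]
    if n == 1:
        return np.array([[1.0 / M[0, 0]]])
    h = n // 2
    A, B = M[:h, :h], M[:h, h:]
    C, D = M[h:, :h], M[h:, h:]
    Ai = strassen_inverse(A)           # A^{-1}, recursively
    S = D - C @ Ai @ B                 # Schur complement of A in M
    Si = strassen_inverse(S)           # S^{-1}, recursively
    top_left = Ai + Ai @ B @ Si @ C @ Ai
    top_right = -Ai @ B @ Si
    bot_left = -Si @ C @ Ai
    return np.block([[top_left, top_right], [bot_left, Si]])
```

Because every multiplication in the recursion can itself be done with Strassen's fast multiplication, the inversion inherits the matrix-multiplication complexity bound.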
The Sunyaev-Zel'dovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Holzapfel, William L.; Cooray, Asantha K.
1999-01-01
We present interferometric measurements of the Sunyaev-Zel'dovich (SZ) effect towards the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas is strongly aspherical, in agreement with the morphology revealed by X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction by comparing the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods. The Hubble constant derived for this cluster, when the known systematic uncertainties are included, has a very wide range of values and therefore does not provide additional constraints on the validity of the assumptions. We examine carefully the possible systematic errors in the gas fraction measurement. The gas fraction is a lower limit to the cluster's baryon fraction, and so we compare the gas mass fraction, calibrated by numerical simulations to approximately the virial radius, to measurements of the global mass fraction of baryonic matter, Ω_B/Ω_matter. Our lower limit to the cluster baryon fraction is f_B = (0.043 ± 0.014)/h_100. From this, we derive an upper limit to the universal matter density, Ω_matter ≤ 0.72/h_100, and a likely value of Ω_matter ≈ (0.44 +0.15/-0.12)/h_100.
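The closing inference is a one-line fair-sample argument, Ω_matter ≤ Ω_B/f_B. A sketch assuming a big-bang-nucleosynthesis baryon density Ω_B h² ≈ 0.019 (an illustrative contemporaneous value, not stated in the abstract):

```python
OMEGA_B_H2 = 0.019  # assumed BBN baryon density, Omega_B * h^2

def omega_matter_limit(f_b_times_h, omega_b_h2=OMEGA_B_H2):
    """Fair-sample limit Omega_matter <= Omega_B / f_B.
    With f_B = f_b_times_h / h and Omega_B = omega_b_h2 / h^2,
    one power of h survives: the returned value is Omega_matter * h,
    i.e. the coefficient of 1/h_100 quoted in the abstract."""
    return omega_b_h2 / f_b_times_h

# The likely value f_B * h = 0.043 reproduces the abstract's ~0.44/h_100:
print(round(omega_matter_limit(0.043), 2))  # → 0.44
```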
THE GALAXY POPULATION OF LOW-REDSHIFT ABELL CLUSTERS
Barkhouse, Wayne A.; Yee, H. K. C.; Lopez-Cruz, Omar
2009-10-01
We present a study of the luminosity and color properties of galaxies selected from a sample of 57 low-redshift Abell clusters. We utilize the non-parametric dwarf-to-giant ratio (DGR) and the blue galaxy fraction (f_b) to investigate the clustercentric radial-dependent changes in the cluster galaxy population. Composite cluster samples are combined by scaling the counting radius by r_200 to minimize radius selection bias. The separation of galaxies into a red and blue population was achieved by selecting galaxies relative to the cluster color-magnitude relation. The DGR of the red and blue galaxies is found to be independent of cluster richness (B_gc), although the DGR is larger for the blue population at all measured radii. A decrease in the DGR for the red and red+blue galaxies is detected in the cluster core region, while the blue galaxy DGR is nearly independent of radius. The f_b is found not to correlate with B_gc; however, a steady decline toward the inner-cluster region is observed for the giant galaxies. The dwarf galaxy f_b is approximately constant with clustercentric radius except for the inner-cluster core region where f_b decreases. The clustercentric radial dependence of the DGR and the galaxy blue fraction indicates that it is unlikely that a simple scenario based on either pure disruption or pure fading/reddening can describe the evolution of infalling dwarf galaxies; both outcomes are produced by the cluster environment.
Deep Westerbork observations of Abell 2256 at 350 MHz
NASA Astrophysics Data System (ADS)
Brentjens, M. A.
2008-10-01
Deep polarimetric Westerbork observations of the galaxy cluster Abell 2256 are presented, covering a frequency range of 325-377 MHz. The central halo source has a diameter of the order of 1.2 Mpc (18′), which is somewhat larger than at 1.4 GHz. With α = -1.61 ± 0.04, the halo spectrum between 1.4 GHz and 22.25 MHz is less steep than previously thought. The centre of the ultra-steep-spectrum source in the eastern part of the cluster exhibits a spectral break near 400 MHz. It is estimated to be at least 51 million years old, but possibly older than 125 million years. A final measurement requires observations in the 10-150 MHz range. It remains uncertain whether the source is a radio tail of Fabricant galaxy 122, situated in the northeastern tip of the source. Faraday rotation measure synthesis revealed no polarized flux at all in the cluster. The polarization fraction of the brightest parts of the relic area is less than 1%. The RM synthesis nevertheless revealed 9 polarized sources in the field, enabling an accurate measurement of the Galactic Faraday rotation (-33 ± 2 rad m-2 in front of the relic). Based on its depolarization at longer wavelengths, the line-of-sight magnetic field in relic filament G is estimated to be between 0.02 and 2 μG. A value of 0.2 μG appears most reasonable given the currently available data.
Merger shocks in Abell 3667 and the Cygnus A cluster
NASA Astrophysics Data System (ADS)
Sarazin, C. L.; Finoguenov, A.; Wik, D. R.
2013-04-01
We present new XMM-Newton observations of the northwest (NW) radio relic region in the cluster Abell 3667. We detect a jump in the X-ray surface brightness and X-ray temperature at the sharp outer edge of the radio relic, which indicates that this is the location of a merger shock with a Mach number of about 2. Comparing the radio emission to the shock properties implies that approximately 0.2% of the dissipated shock kinetic energy goes into accelerating relativistic electrons. This is an order of magnitude smaller than the efficiency of shock acceleration in many Galactic supernova remnants, which may be due to the lower Mach numbers of cluster merger shocks. The X-ray and radio properties indicate that the magnetic field strength in the radio relic is ≳3 μG, which is a very large field at a projected distance of ~2.2 Mpc from the center of a cluster. The radio spectrum is relatively flat at the shock, and steepens dramatically with distance behind the shock. This is consistent with radiative losses by the electrons and the post-shock speed determined from the X-ray properties. The Cygnus A radio source is located in a merging cluster of galaxies. This appears to be an early-stage merger. Our recent Suzaku observations confirm the presence of a hot region between the two subclusters, which agrees with the predicted shocked region. The high spectral resolution of the CCDs on Suzaku allowed us to measure the radial component of the merger velocity, Δv_r ≈ 2650 km s-1.
Aluminum could be transported via phloem in Camellia oleifera Abel.
Zeng, Qi Long; Chen, Rong Fu; Zhao, Xue Qiang; Shen, Ren Fang; Noguchi, Akira; Shinmachi, Fumie; Hasegawa, Isao
2013-01-01
Aluminum (Al) accumulation and long-distance transport in oil tea (Camellia oleifera Abel.), known to be an Al accumulator, were investigated. The average Al concentration in the embryo of oil tea seeds was 389 mg Al kg^-1 dry weight, higher than in the seeds of other Al accumulators. By partially suppressing leaf transpiration in the field, Al accumulation in leaves was depressed, which demonstrates the importance of xylem transport for Al accumulation in leaves. However, xylem transport alone could not sufficiently explain the high Al accumulation in seasons when leaf transpiration is weak, which hints at a contribution from phloem transport. The Al content in phloem exudates of bark provides further evidence of phloem transport. Images from scanning electron microscopy and energy-dispersive analysis also showed that Al was present in the phloem of oil tea petioles. Aluminum in oil tea could also be redistributed: higher concentrations of Al were found in leaves when Al was supplied to a different leaf of the same plant. In addition, Al was present in newly emerging roots of oil tea seedlings in which all original roots had been excised prior to treatment, and a positive correlation existed between the Al content in the newly formed roots and that in the leaves. Results using the empty-seed-coat technique showed that Al unloading via the phloem occurred during seed development. In conclusion, the results demonstrate that Al can be redistributed between leaves, from seeds to leaves, from leaves to roots, and from leaves to seeds, indicating that Al can be transported via the phloem in oil tea.
Discovery of a Star Formation Region in Abell 2052
NASA Astrophysics Data System (ADS)
Martel, André R.; Sparks, William B.; Allen, Mark G.; Koekemoer, Anton M.; Baum, Stefi A.
2002-03-01
We report the discovery of an ultraviolet filament detected in a new Space Telescope Imaging Spectrograph (STIS) NUV-MAMA image of the cD galaxy UGC 9799, located in the cooling-flow cluster Abell 2052 and host to the radio source 3C 317. The filament is ~2 kpc in length and is located at a distance of ~4 kpc from the nucleus along a north-south axis. It consists of three knots embedded along the edges of a diffuse filamentary halo. The northern half of the filament is narrow (~100 pc) and straight, while the southern half is bent and more diffuse. The blue color (NUV-V ~ -2.4) and morphology of the filament are most consistent with a recent episode of star formation (T ~ 5 Myr). Only a few × 10^4 Msolar of young stars, or a star formation rate of ~10^-3 Msolar yr^-1, is required to produce the feature. A steep ultraviolet halo is detected around the unresolved nucleus, and it may be associated with an old stellar component. No ultraviolet features are identified at the location of the extended emission-line nebulae observed from the ground, indicating that OB stars are not the primary source of ionization in these regions. We consider cooling flows and a merger with a satellite galaxy as triggers for the starburst regions and conclude that the latter is more consistent with the chaotic dust lanes spread throughout the host galaxy. The star formation observed is orders of magnitude less than the inferred cooling rate in the cooling flow scenario. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
The mass distribution of the unusual merging cluster Abell 2146 from strong lensing
NASA Astrophysics Data System (ADS)
Coleman, Joseph E.; King, Lindsay J.; Oguri, Masamune; Russell, Helen R.; Canning, Rebecca E. A.; Leonard, Adrienne; Santana, Rebecca; White, Jacob A.; Baum, Stefi A.; Clowe, Douglas I.; Edge, Alastair; Fabian, Andrew C.; McNamara, Brian R.; O'Dea, Christopher P.
2017-01-01
Abell 2146 consists of two galaxy clusters that have recently collided close to the plane of the sky, and it is unique in showing two large shocks on Chandra X-ray Observatory images. With an early stage merger, shortly after first core passage, one would expect the cluster galaxies and the dark matter to be leading the X-ray emitting plasma. In this regard, the cluster Abell 2146-A is very unusual in that the X-ray cool core appears to lead, rather than lag, the brightest cluster galaxy (BCG) in their trajectories. Here we present a strong-lensing analysis of multiple-image systems identified on Hubble Space Telescope images. In particular, we focus on the distribution of mass in Abell 2146-A in order to determine the centroid of the dark matter halo. We use object colours and morphologies to identify multiple-image systems; very conservatively, four of these systems are used as constraints on a lens mass model. We find that the centroid of the dark matter halo, constrained using the strongly lensed features, is coincident with the BCG, with an offset of ≈2 kpc between the centres of the dark matter halo and the BCG. Thus from the strong-lensing model, the X-ray cool core also leads the centroid of the dark matter in Abell 2146-A, with an offset of ≈30 kpc.
NASA Astrophysics Data System (ADS)
Krings, T.; Gerilowski, K.; Buchwitz, M.; Reuter, M.; Tretner, A.; Erzinger, J.; Heinze, D.; Burrows, J. P.; Bovensmann, H.
2011-04-01
MAMAP is an airborne passive remote sensing instrument designed for measuring columns of methane (CH4) and carbon dioxide (CO2). The MAMAP instrument consists of two optical grating spectrometers: one in the short-wave infrared band (SWIR) at 1590-1690 nm to measure CO2 and CH4 absorptions, and another in the near infrared (NIR) at 757-768 nm to measure O2 absorptions for reference purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an airplane, MAMAP can effectively survey areas on regional to local scales with a ground pixel resolution of about 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h-1. The retrieval precision of the measured column relative to background is typically ≲ 1% (1σ). MAMAP can be used to close the gap between satellite data, which offer global coverage but rather coarse resolution, and highly accurate in situ measurements with sparse coverage. In July 2007 test flights were performed over two coal-fired power plants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO2 yr-1) and Schwarze Pumpe (11.9 Mt CO2 yr-1), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with emissions as stated by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO2 and XCH4) and for the two inversion methods has been performed. Both methods - the Gaussian plume model fit and the Gaussian integral method - are capable of delivering reliable estimates for strong point-source emission rates, given appropriate flight patterns and detailed knowledge of wind conditions.
MUSE observations of the lensing cluster Abell 1689
NASA Astrophysics Data System (ADS)
Bina, D.; Pelló, R.; Richard, J.; Lewis, J.; Patrício, V.; Cantalupo, S.; Herenz, E. C.; Soto, K.; Weilbacher, P. M.; Bacon, R.; Vernet, J. D. R.; Wisotzki, L.; Clément, B.; Cuby, J. G.; Lagattuta, D. J.; Soucail, G.; Verhamme, A.
2016-05-01
Context. This paper presents the results obtained with the Multi Unit Spectroscopic Explorer (MUSE) for the core of the lensing cluster Abell 1689, as part of MUSE's commissioning at the ESO Very Large Telescope. Aims: Integral-field observations with MUSE provide a unique view of the central 1 × 1 arcmin2 region at intermediate spectral resolution in the visible domain, allowing us to conduct a complete census of both cluster galaxies and lensed background sources. Methods: We performed a spectroscopic analysis of all sources found in the MUSE data cube. Two hundred and eighty-two objects were systematically extracted from the cube based on a guided-and-manual approach. We also tested three different tools for the automated detection and extraction of line emitters. Cluster galaxies and lensed sources were identified based on their spectral features. We investigated the multiple-image configuration for all known sources in the field. Results: Prior to our survey, 28 different lensed galaxies displaying 46 multiple images were known in the MUSE field of view, most of which had been detected through photometric redshifts and lensing considerations. Of these, we spectroscopically confirm 12 images based on their emission lines, corresponding to 7 different lensed galaxies between z = 0.95 and 5.0. In addition, 14 new galaxies have been spectroscopically identified in this area thanks to MUSE data, with redshifts ranging between 0.8 and 6.2. All background sources detected within the MUSE field of view correspond to multiple-imaged systems lensed by A1689. Seventeen sources in total are found at z ≥ 3 based on their Lyman-α emission, with Lyman-α luminosities ranging between 40.5 ≲ log L(Lyα) ≲ 42.5 after correction for magnification. This sample is particularly sensitive to the slope of the luminosity function toward the faintest end. The density of sources obtained in this survey is consistent with a steep value of α ≤ -1.5, although this result still
NASA Astrophysics Data System (ADS)
Sergienko, Olga
2013-04-01
Since Doug MacAyeal's pioneering studies of ice-stream basal traction optimization by control methods, inversions for unknown parameters (e.g., basal traction, accumulation patterns) have become a hallmark of present-day ice-sheet modeling. The common feature of such inversion exercises is a direct relationship between the optimized parameters and the observations used in the optimization procedure. For instance, in the standard optimization for basal traction by the control method, ice-stream surface velocities constitute the control data. The optimized basal traction parameters explicitly appear in the momentum equations for the ice-stream velocities, which are compared to the control data. The inversion for basal traction is carried out by minimization of a cost (or objective, misfit) function that incorporates the momentum equations as constraints via Lagrange multipliers. Here, we build upon this idea and demonstrate how to optimize for parameters only indirectly related to the observed data, using a suite of nested constraints (like Russian dolls) with additional sets of Lagrange multipliers in the cost function. This method opens the opportunity to use data from a variety of sources and types (e.g., velocities, radar layers, surface elevation changes) in the same optimization process.
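Schematically, the nested-constraint optimization augments the usual control-method cost function with one Lagrange-multiplier set per constraint level (notation illustrative, not taken from the abstract):

```latex
J(\beta,\lambda_1,\dots,\lambda_K)
 = \tfrac{1}{2}\,\lVert u_{\mathrm{obs}} - u \rVert^2
 + \sum_{k=1}^{K} \lambda_k^{\mathsf T}\, R_k(u,\beta),
```

where R_1(u, β) = 0 are the discretized momentum equations, the remaining R_k = 0 encode the nested constraints linking β to indirectly related observations, and stationarity of J with respect to each λ_k, u, and β yields the forward, adjoint, and gradient equations, respectively.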
NASA Astrophysics Data System (ADS)
Krings, T.; Gerilowski, K.; Buchwitz, M.; Reuter, M.; Tretner, A.; Erzinger, J.; Heinze, D.; Pflüger, U.; Burrows, J. P.; Bovensmann, H.
2011-09-01
MAMAP is an airborne passive remote sensing instrument designed to measure the dry columns of methane (CH4) and carbon dioxide (CO2). The MAMAP instrument comprises two optical grating spectrometers: the first observing in the short wave infrared band (SWIR) at 1590-1690 nm to measure CO2 and CH4 absorptions, and the second in the near infrared (NIR) at 757-768 nm to measure O2 absorptions for reference/normalisation purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an aeroplane, MAMAP surveys areas on regional to local scales with a ground pixel resolution of approximately 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h-1. The retrieval precision of the measured column relative to background is typically ≲ 1% (1σ). MAMAP measurements are valuable to close the gap between satellite data, having global coverage but with a rather coarse resolution, on the one hand, and highly accurate in situ measurements with sparse coverage on the other hand. In July 2007, test flights were performed over two coal-fired power plants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO2 yr-1) and Schwarze Pumpe (11.9 Mt CO2 yr-1), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with emissions reported by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO2 and XCH4) and for the two inversion methods has been performed. Both methods - the Gaussian plume model fit and the Gaussian integral method - are capable of deriving estimates for strong point source emission rates that are within ±10% of the reported values, given appropriate flight patterns and detailed knowledge of wind conditions.
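The Gaussian integral method mentioned in both MAMAP abstracts reduces to a cross-plume integral of the column enhancement scaled by the wind speed; a sketch on synthetic data (illustrative numbers and variable names, not MAMAP measurements):

```python
import math

def emission_rate(enhancements, dy, u):
    """Gaussian integral method (sketch): point-source emission rate
    Q = u * integral of column enhancement V(y) [kg m^-2] across a
    cross-plume transect, assuming constant wind speed u [m s^-1]."""
    return u * sum(enhancements) * dy

# Synthetic check with an idealised Gaussian plume: a source of
# Q_true should be recovered from the transect integral.
Q_true, u, sigma = 870.0, 5.0, 300.0   # kg/s, m/s, m
ys = [-5000 + 10 * i for i in range(1001)]
cols = [Q_true / (u * math.sqrt(2 * math.pi) * sigma)
        * math.exp(-y * y / (2 * sigma * sigma)) for y in ys]
print(round(emission_rate(cols, 10.0, u), 1))  # → 870.0
```

The optimal-estimation plume fit used as the second approach instead fits the full two-dimensional plume model to the data, which is more robust when several sources overlap.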
Nabais, João-Maria
2008-12-01
Abel Salazar was a true renaissance spirit, scientist, doctor, humanist, artist and writer. His paintings combined realism with a very strong social sense. This article looks at his art and the influence that he had through it on his contemporaries.
NASA Astrophysics Data System (ADS)
Pilz, Marco; Parolai, Stefano; Woith, Heiko
2017-01-01