Fast algorithm for computing the Abel inversion integral in broadband reflectometry
Nunes, F.D.
1995-10-01
The application of the Hansen–Jablokow recursive technique is proposed for the numerical computation of the Abel inversion integral, which is used in (O-mode) frequency-modulated broadband reflectometry to evaluate plasma density profiles. Compared with the usual numerical methods, the recursive algorithm allows substantial time savings, which can be important when processing massive amounts of data aimed at controlling the plasma in real time.
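The abstract does not reproduce the Hansen–Jablokow recursion itself; for orientation, the discrete Abel inversion it accelerates can be illustrated with a standard "onion-peeling" baseline, sketched below. The uniform radial grid, the annular discretization, and all names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def onion_peel_abel_inversion(P, dr):
    """Recover a radial profile f(r) from its line-of-sight projection P(y).

    Assumes f is piecewise constant on annuli of width dr and that
    P[i] is sampled at impact parameter y = i*dr (uniform grid).
    """
    n = len(P)
    A = np.zeros((n, n))  # A[i, j]: chord length of ray i through annulus j
    for i in range(n):
        for j in range(i, n):
            outer = np.sqrt(((j + 1) * dr) ** 2 - (i * dr) ** 2)
            inner = np.sqrt((j * dr) ** 2 - (i * dr) ** 2) if j > i else 0.0
            A[i, j] = 2.0 * (outer - inner)
    # the system is upper triangular: peel from the outermost annulus inward
    f = np.zeros(n)
    for i in range(n - 1, -1, -1):
        f[i] = (P[i] - A[i, i + 1:] @ f[i + 1:]) / A[i, i]
    return f
```

Back substitution costs O(n²) per profile once the geometry matrix is built; the appeal of a recursive scheme such as the one proposed in the paper is precisely to cut the per-profile cost when thousands of profiles must be inverted for real-time plasma control.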
A new Abel inversion by means of the integrals of an input function with noise
NASA Astrophysics Data System (ADS)
Li, Xian-Fang; Huang, Li; Huang, Yong
2007-01-01
Abel's integral equations arise in many areas of natural science and engineering, particularly in plasma diagnostics. This paper proposes a new and effective approximation to the inversion of the Abel transform. The algorithm can be implemented simply by symbolic computation; moreover, the nth-order approximation reduces to the exact solution when that solution is a polynomial in r² of degree less than or equal to n. The approximate Abel inversion is expressed in terms of integrals of the input measurement data, so the suggested approach is stable for experimental data with random noise. An error analysis of the approximation is given. Finally, several test examples used frequently in plasma diagnostics illustrate the effectiveness and stability of the method.
Bayesian Abel Inversion in Quantitative X-Ray Radiography
Howard, Marylesa; Fowler, Michael; Luttman, Aaron; Mitchell, Stephen E.; Hock, Margaret C.
2016-05-19
A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3
NASA Astrophysics Data System (ADS)
Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.
2007-05-01
In this paper the first results of ionospheric tomographic inversion are presented, using the improved Abel transform on the COSMIC/FORMOSAT-3 constellation of 6 LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this application, the classical approach to Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is a constant value over the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in some problematic regions of the ionosphere, such as the Equatorial region) can significantly affect the electron profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of the technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function, where the shape function carries all the height dependency while the VTEC data keeps the horizontal dependency. Indeed, it is more realistic to assume that this shape function depends only on height and to use VTEC information to account for the horizontal variation, rather than assuming spherical symmetry of the electron density function as in the classical approach to Abel inversion. Since the above-mentioned improved Abel inversion technique has already been tested and proven to be a useful
Serre duality, Abel's theorem, and Jacobi inversion for supercurves over a thick superpoint
NASA Astrophysics Data System (ADS)
Rothstein, Mitchell J.; Rabin, Jeffrey M.
2015-04-01
The principal aim of this paper is to extend Abel's theorem to the setting of complex supermanifolds of dimension 1 | q over a finite-dimensional local supercommutative C-algebra. The theorem is proved by establishing a compatibility of Serre duality for the supercurve with Poincaré duality on the reduced curve. We include an elementary algebraic proof of the requisite form of Serre duality, closely based on the account of the reduced case given by Serre in Algebraic groups and class fields, combined with an invariance result for the topology on the dual of the space of répartitions. Our Abel map, taking Cartier divisors of degree zero to the dual of the space of sections of the Berezinian sheaf, modulo periods, is defined via Penkov's characterization of the Berezinian sheaf as the cohomology of the de Rham complex of the sheaf D of differential operators. We discuss the Jacobi inversion problem for the Abel map and give an example demonstrating that if n is an integer sufficiently large that the generic divisor of degree n is linearly equivalent to an effective divisor, this need not be the case for all divisors of degree n.
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion, the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
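MEVIR and MEVELER themselves are not spelled out in the abstract; the key ingredient they share with the classical Richardson–Lucy EM iteration is that a forward model is iterated against Poisson-distributed counts, and the data are never inverted or smoothed. Below is a minimal sketch of that shared idea for an assumed linear forward operator A; the entropy regularizer of the paper is omitted, so this is not the authors' method.

```python
import numpy as np

def richardson_lucy(A, d, n_iter=2000):
    """Maximum-likelihood estimate for d ~ Poisson(A @ x), with x >= 0.

    The multiplicative EM update keeps x nonnegative and optimizes the
    correct Poisson likelihood; the data d are never inverted or smoothed.
    """
    x = np.full(A.shape[1], d.sum() / A.sum())  # flat positive start
    for _ in range(n_iter):
        pred = np.maximum(A @ x, 1e-12)         # forward model; avoid /0
        x *= (A.T @ (d / pred)) / A.sum(axis=0)
    return x
```

As in MEVIR, the estimate is defined by what best explains the observed counts under the forward model, rather than by any direct transform of the noisy image.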
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-08-01
We propose an efficient and flexible method for solving Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization on itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
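As a minimal illustration of the Tikhonov branch of such a method (the compact-set constraints described above are not reproduced), a discretized first-kind equation g = A f can be regularized with a second-difference smoothness penalty; the choice of penalty operator and all names below are assumptions for the sketch.

```python
import numpy as np

def tikhonov_solve(A, g, lam):
    """Solve min_f ||A f - g||^2 + lam * ||L f||^2, L = second difference.

    lam sets the target degree of smoothness: lam -> 0 recovers ordinary
    least squares, while larger lam damps the noise-amplifying components
    of the ill-posed first-kind problem.
    """
    n = A.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference operator
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ g)
```

In practice lam is chosen from the noise level of the input data, consistent with the uniform-convergence property cited in the abstract: as the data errors tend to zero, lam can be driven to zero as well.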
NASA Astrophysics Data System (ADS)
Katgert, P.; Murdin, P.
2000-11-01
Abell clusters are the most conspicuous groupings of galaxies identified by George Abell on the plates of the first photographic survey made with the SCHMIDT TELESCOPE at Mount Palomar in the 1950s. Sometimes the term Abell clusters is used as a synonym for nearby, optically selected galaxy clusters....
Geophysical Inversion through Hierarchical Genetic Algorithm Scheme
NASA Astrophysics Data System (ADS)
Furman, Alex; Huisman, Johan A.
2010-05-01
Geophysical investigation is a powerful tool that allows non-invasive and non-destructive mapping of subsurface states and properties. However, the non-uniqueness associated with the inversion process keeps these methods from seeing more quantitative use. One major direction researchers are pursuing is constraining the inverse problem with hydrological observations and models. An alternative to the commonly used direct inversion methods is global optimization schemes (such as genetic algorithms and Markov chain Monte Carlo methods). However, the major limitation here is the desired high resolution of the tomographic image, which leads to a large number of parameters and an unreasonably high computational effort when using global optimization schemes. One way to overcome these problems is to combine the advantages of both direct and global inversion methods through hierarchical inversion: starting the inversion with a relatively coarse parameter resolution, achieving a good inversion using one of the two inversion schemes (global or direct), and then refining the resolution and applying a combination of global and direct inversion schemes for the whole domain or locally. In this work we explore through synthetic case studies the option of using a global optimization scheme for inversion of electrical resistivity tomography data through hierarchical refinement of the model resolution.
NASA Astrophysics Data System (ADS)
Huestis, D. L.
Forward integration calculation of air mass, refraction, and time delay requires care even for very smooth model atmospheres. The literature abounds in examples of injudicious approximations, assumptions, transformations, variable substitutions, and failures to verify that the formulas work with unlimited accuracy for simple cases and also survive challenges from mathematically pathological but physically realizable cases. A few years ago we addressed the problem of evaluation of the Chapman function for attenuation along a straight line path in an exponential atmosphere. In this presentation we will describe issues and approaches for integration over light paths curved by refraction. The inverse problem, determining the altitude profile of mass density (index of refraction) or the concentration of an individual chemical species (absorption), from occultation data, also has its mathematically interesting (i.e., difficult) aspects. Now we automatically have noise and thus statistical analysis is just as important as calculus and numerical analysis. Here we will describe a new approach of least-squares fitting occultation data to an expansion over compact basis functions. This approach, which avoids numerical differentiation and singular integrals, was originally developed to analyze laboratory imaging data.
Inversion for seismic anisotropy using genetic algorithms
Horne, S. (Univ. of Edinburgh, Dept. of Geology and Geophysics); MacBeth, C. (Dept. of Geology and Geophysics)
1994-11-01
A general inversion scheme based on a genetic algorithm is developed to invert seismic observations for anisotropic parameters. The technique is applied to the inversion of shear-wave observations from two azimuthal VSP data sets from the Conoco test site in Oklahoma. Horizontal polarizations and time-delays are inverted for hexagonal and orthorhombic symmetries. The model solutions are consistent with previous studies using trial and error matching of full waveform synthetics. The shear-wave splitting observations suggest the presence of a shear-wave line singularity and are consistent with a dipping fracture system which is known to exist at the test site. Application of the inversion scheme prior to full waveform modeling demonstrates that a considerable saving in time is possible while retaining the same degree of accuracy.
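The abstract does not list the genetic operators used; as a generic sketch of GA-based inversion (not the authors' parameterization), here is a real-coded GA fitting a toy two-parameter linear forward model. The tournament selection, blend crossover, Gaussian mutation, elitism, and every name below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m, x):
    # toy linear "travel-time" model standing in for the anisotropic forward problem
    return m[..., 0:1] + m[..., 1:2] * x

def genetic_invert(x, d_obs, lo, hi, pop=80, gens=150):
    P = rng.uniform(lo, hi, size=(pop, 2))          # initial population
    def misfit(P):
        return ((forward(P, x) - d_obs) ** 2).sum(axis=1)
    for _ in range(gens):
        f = misfit(P)
        best = P[np.argmin(f)].copy()               # elitism: remember the best model
        i, j = rng.integers(0, pop, (2, pop))       # binary tournament selection
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        a = rng.uniform(size=(pop, 2))              # blend crossover
        children = a * parents + (1 - a) * np.roll(parents, 1, axis=0)
        children += rng.normal(0.0, 0.02 * (hi - lo), size=(pop, 2))  # mutation
        P = np.clip(children, lo, hi)
        P[0] = best
    return P[np.argmin(misfit(P))]
```

The same skeleton applies when `forward` is replaced by a real (and expensive) anisotropic synthetic, which is where the time savings over full waveform trial-and-error matching come from.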
A Parallel Processing Algorithm for Gravity Inversion
NASA Astrophysics Data System (ADS)
Frasheri, Neki; Bushati, Salvatore; Frasheri, Alfred
2013-04-01
The paper presents results of using MPI parallel processing for the 3D inversion of gravity anomalies. The work is done under the FP7 project HP-SEE (http://www.hp-see.eu/). The inversion of geophysical anomalies remains a challenge, and the use of parallel processing can be a tool to achieve better results, "compensating" the complexity of the ill-posed inversion problem with an increased volume of calculations. We considered gravity as the simplest case of physical fields and experimented with an algorithm based on the methodology known as CLEAN, developed by Högbom in 1974. The 3D geosection was discretized in finite cuboid elements and represented by a 3D array of nodes, while the ground surface where the anomaly is observed was represented by a 2D array of points. Starting from a geosection with mass density zero in all nodes, the algorithm iteratively selects the 3D node that offers the best anomaly shape approximating the observed anomaly, minimizing the least squares error; the mass density in the best 3D node is modified by a prefixed density step and the related effect subtracted from the observed anomaly; the process continues until some criterion is fulfilled. The theoretical complexity of the algorithm was evaluated on the basis of iterations and run-time for a geosection discretized at different scales. We considered the average number N of nodes in one edge of the 3D array. The order of the number of iterations was evaluated as O(N^3), and the order of the run-time as O(N^8). We used several different methods for the identification of the 3D node whose effect offers the best least squares error in approximating the observed anomaly: unweighted least squares error for the whole 2D array of anomalous points; weighting the least squares error by the inverted value of the observed anomaly over each 3D node; and limiting the area of 2D anomalous points where least squares are calculated over shallow 3D nodes. By comparing results from the inversion of single body and two
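The CLEAN-style loop described above — pick the best node, add a fixed density step, subtract its effect from the residual anomaly — can be sketched for point masses under a 2D profile. The paper uses 3D cuboid elements; the point-mass kernel, the grids, and the step size below are simplifying assumptions.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def clean_gravity_invert(obs_x, g_obs, node_x, node_z, dm, n_iter):
    """Greedy CLEAN-style inversion: repeatedly add a fixed mass step dm
    at the node whose unit response best reduces the residual anomaly."""
    # vertical gravity at each surface point per unit mass at each buried node
    dx = obs_x[:, None] - node_x[None, :]
    K = G * node_z[None, :] / (dx**2 + node_z[None, :]**2) ** 1.5
    m = np.zeros(len(node_x))
    res = g_obs.copy()
    for _ in range(n_iter):
        # least-squares error after adding dm at node j: ||res - dm*K[:,j]||^2
        scores = ((res[:, None] - dm * K) ** 2).sum(axis=0)
        j = np.argmin(scores)
        m[j] += dm                 # build up the model one step at a time
        res -= dm * K[:, j]        # subtract the step's effect from the residual
    return m, res
```

Each iteration scans every node against every observation point, which is why the run-time grows so quickly with resolution and why the paper turns to MPI parallelism.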
Magnetotelluric inversion via reverse time migration algorithm of seismic data
Ha, Taeyoung (E-mail: tyha@math.snu.ac.kr); Shin, Changsoo (E-mail: css@model.snu.ac.kr)
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nédélec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion, and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
SAGE II inversion algorithm [Stratospheric Aerosol and Gas Experiment]
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.
1989-01-01
The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.
Rayleigh wave nonlinear inversion based on the Firefly algorithm
NASA Astrophysics Data System (ADS)
Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou
2014-06-01
Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, has the advantages of being robust and highly effective, and it allows global searching. The algorithm proves feasible and advantageous for Rayleigh wave inversion with both synthetic models and field data. The results show that the Firefly algorithm, which is a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
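A bare-bones firefly iteration — brighter (lower-misfit) fireflies attract dimmer ones with a distance-decaying attraction, plus an annealed random step — can be sketched as follows. It is applied here to a toy quadratic misfit rather than a dispersion-curve objective, and every parameter value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly_minimize(cost, lo, hi, n=25, iters=100,
                     beta0=1.0, gamma=0.01, alpha=0.1):
    """Firefly search: every firefly moves toward each brighter one."""
    dim = len(lo)
    X = rng.uniform(lo, hi, (n, dim))
    f = np.array([cost(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                      # firefly j is brighter
                    r2 = ((X[i] - X[j]) ** 2).sum()
                    beta = beta0 * np.exp(-gamma * r2)  # attraction decays with distance
                    X[i] = X[i] + beta * (X[j] - X[i]) \
                         + alpha * (rng.uniform(size=dim) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    f[i] = cost(X[i])
        alpha *= 0.97                                # anneal the random walk
    return X[np.argmin(f)]
```

The distance-decaying attraction lets subgroups of fireflies explore separate basins before converging, which is the property that makes the method attractive for the multimodal dispersion-curve misfits mentioned above.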
Improved Inversion Algorithms for Near Surface Characterization
NASA Astrophysics Data System (ADS)
Astaneh, Ali Vaziri; Guddati, Murthy N.
2016-05-01
Near-surface geophysical imaging is often performed by generating surface waves and estimating the subsurface properties through inversion, i.e. iteratively matching experimentally observed dispersion curves with predicted curves from a layered half-space model of the subsurface. Key to the effectiveness of inversion is the efficiency and accuracy of computing the dispersion curves and their derivatives. This paper presents improved methodologies for both dispersion curve and derivative computation. First, it is shown that the dispersion curves can be computed more efficiently by combining an unconventional complex-length finite element method (CFEM) to model the finite depth layers, with perfectly matched discrete layers (PMDL) to model the unbounded half-space. Second, based on analytical derivatives for theoretical dispersion curves, an approximate derivative is derived for the so-called effective dispersion curve of realistic geophysical surface response data. The new derivative computation has a smoothing effect on the computation of derivatives, in comparison with the traditional finite difference (FD) approach, and results in faster convergence. In addition, while the computational cost of FD differentiation is proportional to the number of model parameters, the new differentiation formula has a computational cost that is almost independent of the number of model parameters. Finally, as confirmed by synthetic and real-life imaging examples, the combination of CFEM+PMDL for dispersion calculation and the new differentiation formula results in more accurate estimates of the subsurface characteristics than the traditional methods, at a small fraction of the computational effort.
Rayleigh wave inversion using heat-bath simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng
2016-11-01
The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is performed mainly by inversion of the phase velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, making local search methods (LSMs) unsuitable as the inversion algorithm. In this study, a new strategy is proposed based on a variant of the simulated annealing (SA) algorithm. SA, which simulates the annealing procedure of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: perturbation of the model and the Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of heat-bath SA, two models are created. Both noise-free and noisy synthetic data are generated. The Levenberg-Marquardt (LM) algorithm and a variant of SA known as the fast simulated annealing (FSA) algorithm are also adopted for comparison. The inverted results of the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, which proves that the scheme we propose is applicable.
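The one-step heat-bath idea can be sketched as follows: instead of perturb-then-Metropolis-accept, each parameter is redrawn directly from the Boltzmann distribution over a grid of candidate values, conditioned on the current values of the others. The toy misfit, the candidate grids, and the cooling schedule below are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def heat_bath_sa(cost, grids, T0=1.0, Tmin=1e-3, cool=0.95):
    """Heat-bath simulated annealing over discretized parameter grids.

    Each sweep resamples every parameter from exp(-E/T) over its grid,
    so no separate Metropolis accept/reject step is needed.
    """
    m = np.array([rng.choice(g) for g in grids])   # random starting model
    T = T0
    while T > Tmin:
        for k, g in enumerate(grids):
            # conditional energies with all other parameters held fixed
            E = np.array([cost(np.concatenate([m[:k], [v], m[k+1:]]))
                          for v in g])
            p = np.exp(-(E - E.min()) / T)         # Boltzmann weights
            m[k] = rng.choice(g, p=p / p.sum())
        T *= cool                                  # geometric cooling
    return m
```

At high temperature the draws are nearly uniform (global exploration); as T falls, the distribution concentrates on the conditional minimum, so the model freezes into a low-misfit solution.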
An adaptive inverse kinematics algorithm for robot manipulators
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.; Seraji, H.
1990-01-01
An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
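The MRAC-based scheme of the paper is not reproduced here; for contrast, the conventional model-based alternative it avoids — a Jacobian-based damped-least-squares iteration, which does require the forward kinematics — can be sketched for a planar two-link arm. The link lengths, damping factor, and tolerances are assumed values.

```python
import numpy as np

def fk(q, l1=1.0, l2=0.8):
    """Forward kinematics of a planar two-link arm (assumed link lengths)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=0.8):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_damped(target, q0, lam=0.1, tol=1e-8, max_iter=200):
    """Damped least-squares (Levenberg-Marquardt-style) inverse kinematics."""
    q = np.array(q0, float)
    for _ in range(max_iter):
        e = target - fk(q)
        if e @ e < tol:
            break
        J = jacobian(q)
        # dq = (J^T J + lam^2 I)^-1 J^T e : damping keeps steps finite
        # near kinematic singularities
        dq = np.linalg.solve(J.T @ J + lam**2 * np.eye(2), J.T @ e)
        q += dq
    return q
```

The adaptive scheme in the abstract dispenses with `fk` and `jacobian` entirely when end-effector sensing is available, which is exactly what this sketch cannot do.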
A fast algorithm for sparse matrix computations related to inversion
NASA Astrophysics Data System (ADS)
Li, S.; Wu, W.; Darve, E.
2013-06-01
We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions Gr and G< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors
Inverse transport calculations in optical imaging with subspace optimization algorithms
Ding, Tian; Ren, Kui
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
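The subspace splitting can be illustrated on a toy matrix operator: the component of the unknown in the top-k right singular subspace is recovered analytically from the SVD, and a projected-gradient iteration then minimizes the misfit over the complementary (high-frequency) subspace. The transport-equation forward model of the paper is replaced here by an assumed generic matrix A.

```python
import numpy as np

def subspace_reconstruct(A, d, k, n_iter=500):
    """Split-subspace reconstruction for d = A x (toy matrix version)."""
    # analytic recovery of the low-frequency part from the top-k singular triplets
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = Vt[:k].T @ (U[:, :k].T @ d / s[:k])
    # iterative minimization restricted to the complementary subspace
    V2 = Vt[k:].T
    step = 1.0 / s[0] ** 2               # safe step for grad of 0.5*||Ax - d||^2
    for _ in range(n_iter):
        g = A.T @ (A @ x - d)
        x -= step * (V2 @ (V2.T @ g))    # project the update onto the complement
    return x
```

Because the analytic part handles the well-determined singular directions, the iteration only has to work in the poorly determined complement, which is the robustness argument the abstract makes for subspace minimization.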
Application of multistatic inversion algorithms to landmine detection
NASA Astrophysics Data System (ADS)
Gürbüz, Ali Cafer; Counts, Tegan; Kim, Kangwook; McClellan, James H.; Scott, Waymond R., Jr.
2006-05-01
Multi-static ground-penetrating radar (GPR) uses an array of antennas to conduct a number of bistatic operations simultaneously. The multi-static GPR is used to obtain more information on the target of interest using angular diversity. An entirely computer controlled, multi-static GPR consisting of a linear array of six resistively-loaded vee dipoles (RVDs), a network analyzer, and a microwave switch matrix was developed to investigate the potential of multi-static inversion algorithms. The performance of a multi-static inversion algorithm is evaluated for targets buried in clean sand, targets buried under the ground covered by rocks, and targets held above the ground (in the air) using styrofoam supports. A synthetic-aperture, multi-static, time-domain GPR imaging algorithm is extended from conventional mono-static back-projection techniques and used to process the data. Good results are obtained for the clean surface and air targets; however, for targets buried under rocks, only the deeply buried targets could be accurately detected and located.
Development of an Inverse Algorithm for Resonance Inspection
Lai, Canhai; Xu, Wei; Sun, Xin
2012-10-01
Resonance inspection (RI), which employs the natural frequency spectrum shift between the good and the anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages, such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry, compared to other contemporary NDE methods. It has already been widely used in the automobile industry for quality inspection of safety critical parts. Unlike some conventionally used NDE methods, the current RI technology is unable to provide details, i.e. location, dimension, or type, of the flaws in discrepant parts. This limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone shaped stainless steel samples with and without controlled flaws are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the algorithm developed, and that prediction accuracy decreases with increasing flaw number and decreasing distance between flaws.
Aerosol Models for the CALIPSO Lidar Inversion Algorithms
NASA Technical Reports Server (NTRS)
Omar, Ali H.; Winker, David M.; Won, Jae-Gwang
2003-01-01
We use measurements and models to develop aerosol models for use in the inversion algorithms for the Cloud Aerosol Lidar and Imager Pathfinder Spaceborne Observations (CALIPSO). Radiance measurements and inversions of the AErosol RObotic NETwork (AERONET) [1, 2] are used to group global atmospheric aerosols using optical and microphysical parameters. This study uses more than 10^5 records of radiance measurements, aerosol size distributions, and complex refractive indices to generate the optical properties of the aerosol at more than 200 sites worldwide. These properties, together with the radiance measurements, are then classified using classical clustering methods to group the sites according to the type of aerosol with the greatest frequency of occurrence at each site. Six significant clusters are identified: desert dust, biomass burning, urban industrial pollution, rural background, marine, and dirty pollution. Three of these are used in the CALIPSO aerosol models to characterize desert dust, biomass burning, and polluted continental aerosols. The CALIPSO aerosol model also uses the coarse mode of desert dust and the fine mode of biomass burning to build a polluted dust model. For marine aerosol, the CALIPSO aerosol model uses measurements from the SEAS experiment [3]. In addition to categorizing the aerosol types, the cluster analysis provides all the column optical and microphysical properties for each cluster.
Genetic algorithms for geophysical parameter inversion from altimeter data
NASA Astrophysics Data System (ADS)
Ramillien, Guillaume
2001-11-01
A new approach is presented for inverting several geophysical parameters at the same time from altimeter and marine data by implementing genetic algorithms (GAs). These optimization techniques, based on non-deterministic rules, simulate the evolution of a population of candidate solutions for a given objective function to be minimized. They offer a robust and efficient alternative to gradient techniques for non-linear parameter inversion. Here genetic algorithms are used to solve a discrete gravity problem for data associated with an undersea relief, retrieving seven parameters at the same time: the elastic thickness, the mean ocean depth, the seamount location (longitude/latitude), and its amplitude, radius, and density, from its observed gravity/geoid signature. This approach was also successfully used to adjust lithosphere parameters in the real case of the Rarotonga seamount [21.2°S, 159.8°W] in the Southern Cook Islands region, where GA simulations provided robust estimates of these seven parameters. The GA found very realistic values for the mean ocean depth and the seamount amplitude, and the precise geographical location of Rarotonga Island. Moreover, the values of elastic thickness (~14–15 km) and seamount density (~2850–2870 kg m⁻³) estimated by the GA are consistent with those proposed in earlier studies.
Modelling and genetic algorithm based optimisation of inverse supply chain
NASA Astrophysics Data System (ADS)
Bányai, T.
2009-04-01
(Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding the unnecessary environmental risk and land use caused by an unnecessarily large collection network in recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, with the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products not recycled (treated or reused) in time, the number of supply chain objects and the length of transportation routes. The objective function is the minimization of the total cost, taking the constraints into consideration. Although much research has addressed the design of supply chains [8], most of it concentrates on linear cost functions; in this model, non-linear cost functions were used.
The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A class of direct inverse decomposition algorithms for solving systems of linear equations is presented, and their behavior in the presence of round-off errors is analyzed. It is shown that, under some mild restrictions on their implementation, the algorithms in this class are equivalent in terms of the error-complexity measures.
An algorithm for constrained one-step inversion of spectral CT data.
Foygel Barber, Rina; Sidky, Emil Y; Gilat Schmidt, Taly; Pan, Xiaochuan
2016-05-21
We develop a primal-dual algorithm that allows for one-step inversion of spectral CT transmission photon counts data to a basis map decomposition. The algorithm allows for image constraints to be enforced on the basis maps during the inversion. The derivation of the algorithm makes use of a local upper bounding quadratic approximation to generate descent steps for non-convex spectral CT data discrepancy terms, combined with a new convex-concave optimization algorithm. Convergence of the algorithm is demonstrated on simulated spectral CT data. Simulations with noise and anthropomorphic phantoms show examples of how to employ the constrained one-step algorithm for spectral CT data.
Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.
ERIC Educational Resources Information Center
Jacquot, Raymond G.; And Others
1985-01-01
Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
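The Gaver-Stehfest algorithm named in the title admits a compact implementation. The sketch below assumes the usual conditions (the transform F is evaluable on the positive real axis and f is smooth and non-oscillatory); as the abstract notes, word length limits the method, so N is kept even and modest (10-14 in double precision):

```python
from math import factorial, log

def stehfest_weights(N):
    """Stehfest weights V_k for even N (larger N amplifies round-off)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) via Gaver-Stehfest."""
    ln2 = log(2.0)
    return ln2 / t * sum(Vk * F(k * ln2 / t)
                         for k, Vk in enumerate(stehfest_weights(N), start=1))
```

For example, inverting F(s) = 1/(s + 1) at t = 1 reproduces e^(-1) to several digits with N = 12.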
A new SPECT reconstruction algorithm based on the Novikov explicit inversion formula
NASA Astrophysics Data System (ADS)
Kunyansky, Leonid A.
2001-04-01
We present a new reconstruction algorithm for single-photon emission computed tomography. The algorithm is based on the Novikov explicit inversion formula for the attenuated Radon transform with non-uniform attenuation. Our reconstruction technique can be viewed as a generalization of both the filtered backprojection algorithm and the Tretiak-Metz algorithm. We test the performance of the present algorithm in a variety of numerical experiments. Our numerical examples show that the algorithm is capable of accurate image reconstruction even in the case of strongly non-uniform attenuation coefficient, similar to that occurring in a human thorax.
Improved inversion algorithms for near-surface characterization
NASA Astrophysics Data System (ADS)
Vaziri Astaneh, Ali; Guddati, Murthy N.
2016-08-01
Near-surface geophysical imaging is often performed by generating surface waves and estimating the subsurface properties through inversion, that is, iteratively matching experimentally observed dispersion curves with predicted curves from a layered half-space model of the subsurface. Key to the effectiveness of inversion is the efficiency and accuracy of computing the dispersion curves and their derivatives. This paper presents improved methodologies for both dispersion curve and derivative computation. First, it is shown that the dispersion curves can be computed more efficiently by combining an unconventional complex-length finite element method (CFEM) to model the finite depth layers, with perfectly matched discrete layers (PMDL) to model the unbounded half-space. Second, based on analytical derivatives for theoretical dispersion curves, an approximate derivative is derived for the so-called effective dispersion curve for realistic geophysical surface response data. The new derivative computation has a smoothing effect in comparison with the traditional finite-difference (FD) approach, and results in faster convergence. In addition, while the computational cost of FD differentiation is proportional to the number of model parameters, the new differentiation formula has a computational cost that is almost independent of the number of model parameters. Finally, as confirmed by synthetic and real-life imaging examples, the combination of CFEM + PMDL for dispersion calculation and the new differentiation formula results in more accurate estimates of the subsurface characteristics than the traditional methods, at a small fraction of the computational effort.
Data inversion algorithm development for the Halogen Occultation Experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurement requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time-constants and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be extremely complex and fast. A scheme for meeting these requirements was defined, and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
Effects of noise on lidar data inversion with the backward algorithm.
Comerón, Adolfo; Rocadenbosch, Francesc; López, Miguel Angel; Rodríguez, Alejandro; Muñoz, Constantino; García-Vizcaíno, David; Sicard, Michaël
2004-04-20
The lidar data-inversion algorithm widely known as the Klett method (and its more elaborate variants) has long been used to invert elastic-lidar data obtained from atmospheric sounding systems. The Klett backward algorithm has also been shown to be robust in the face of uncertainties concerning the boundary condition. Nevertheless, electrical noise at the photoreceiver output unavoidably has an impact on the data-inversion process, and describing explicitly how it affects retrieval of the atmospheric optical coefficients can contribute to improvement in inversion quality. We examine formally the way noise disturbs backscatter-coefficient retrievals done with the Klett backward algorithm, derive a mathematical expression for the retrieved backscatter coefficient in the presence of noise affecting the signal, and assess the noise impact and suggest ways to limit it.
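For context, the Klett backward solution itself is short. A minimal sketch for the extinction coefficient, assuming a constant power-law extinction-backscatter relation β ∝ σ^k (k = 1 here) and a far-end boundary value σ_m; grid and variable names are illustrative:

```python
import numpy as np

def klett_backward(r, P, sigma_m, k=1.0):
    """Retrieve extinction sigma(r) from lidar power P(r) with the
    Klett backward solution, given the far-end boundary value sigma_m."""
    S = np.log(r**2 * P)                 # range-corrected log signal
    g = np.exp((S - S[-1]) / k)
    # cumulative trapezoid integral of g from r out to r_max
    c = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(r))))
    I = c[-1] - c
    return g / (1.0 / sigma_m + (2.0 / k) * I)
```

For a synthetic homogeneous atmosphere, P(r) ∝ σ exp(-2σr)/r², the retrieval reproduces the constant σ, which is the standard sanity check before studying how noise in P propagates.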
A combinatorial algorithm for the optimization of refraction seismics data inversion
NASA Astrophysics Data System (ADS)
Micciancio, Stefano
1993-08-01
The problem of data inversion in refraction seismics can be split into two parts: the data first must be preprocessed in order to determine the travel-time curve; this is essentially a geometrical problem, complicated, however, by its pattern recognition aspects. Once the geometrical problem is solved, the second part, the inversion proper, is straightforward, as the soil layering model can be calculated according to well-known algorithms. The more difficult part of the problem is the former, which involves a type of pattern recognition; because of this difficulty, the geometrical part of the problem is usually left to the skill of a human operator. This paper describes an algorithm exploiting combinatorial optimization techniques to automate the pattern recognition part of the problem of data inversion in refraction seismics. The listing of a Pascal source program implementing the proposed algorithm is included.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are large and the model parameters numerous, conventional inverse modeling methods can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm
NASA Astrophysics Data System (ADS)
Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo
2015-01-01
Electrical (DC) and transient electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished with individual 1D models, often resulting in ambiguous models. This can be explained by the way the two methodologies sample the medium beneath the surface. Vertical electrical sounding (VES) is good at resolving resistive structures, while TEM sounding is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed, aiming to exploit the best of both methods. The program uses the CRS (Controlled Random Search) algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. Initially, the algorithm was tested with synthetic data, and then it was used to invert experimental data from two places in the Paraná sedimentary basin (Bebedouro and Pirassununga cities), both located in São Paulo State, Brazil. The geoelectric model obtained from 1D joint inversion of VES and TEM data is similar to the real geological conditions, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological studies.
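The CRS family follows Price's controlled random search: maintain a population, reflect a random simplex through its centroid, and replace the worst point when the trial improves on it. A minimal sketch (population size, iteration count, and test function are illustrative, not the paper's settings):

```python
import numpy as np

def crs_minimize(f, bounds, n_pop=30, n_iter=5000, seed=0):
    """Price's Controlled Random Search: reflect a random (n+1)-point
    simplex through the centroid of its first n points."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    n = lo.size
    pop = lo + rng.random((n_pop, n)) * (hi - lo)
    fvals = np.array([f(x) for x in pop])
    for _ in range(n_iter):
        idx = rng.choice(n_pop, n + 1, replace=False)
        centroid = pop[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - pop[idx[-1]]        # reflection step
        if np.all(trial >= lo) and np.all(trial <= hi):
            ft = f(trial)
            worst = np.argmax(fvals)
            if ft < fvals[worst]:                     # replace only the worst
                pop[worst], fvals[worst] = trial, ft
    best = np.argmin(fvals)
    return pop[best], fvals[best]
```

Because points are only ever replaced by better ones, the best objective value is non-increasing, which is the property that makes CRS attractive when Marquardt-type methods stall in local minima.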
Bader, D A; Moret, B M; Yan, M
2001-01-01
Hannenhalli and Pevzner gave the first polynomial-time algorithm for computing the inversion distance between two signed permutations, as part of the larger task of determining the shortest sequence of inversions needed to transform one permutation into the other. Their algorithm (restricted to distance calculation) proceeds in two stages: in the first stage, the overlap graph induced by the permutation is decomposed into connected components; then, in the second stage, certain graph structures (hurdles and others) are identified. Berman and Hannenhalli avoided the explicit computation of the overlap graph and gave an O(n α(n)) algorithm, based on a Union-Find structure, to find its connected components, where α is the inverse Ackermann function. Since for all practical purposes α(n) is a constant no larger than four, this algorithm has been the fastest practical algorithm to date. In this paper, we present a new linear-time algorithm for computing the connected components, which is more efficient than that of Berman and Hannenhalli in both theory and practice. Our algorithm uses only a stack and is very easy to implement. We give the results of computational experiments over a large range of permutation pairs produced through simulated evolution; our experiments show a speed-up by a factor of 2 to 5 in the computation of the connected components and by a factor of 1.3 to 2 in the overall distance computation.
An inverse source location algorithm for radiation portal monitor applications
Miller, Karen A; Charlton, William S
2010-01-01
Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
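The optimization loop described above (least-squares misfit, gradient from adjoint calculations, steepest-descent update) can be sketched on a toy 2-D problem. Here the 3-D transport model is replaced by an analytic 1/r^2 count model and the adjoint gradient by its closed form; detector positions, source strength, and step control are all illustrative assumptions:

```python
import numpy as np

def locate_source(detectors, measured, p0, s0=1.0, n_iter=300):
    """Steepest descent on phi(p) = sum_i (s0/|p-d_i|^2 - m_i)^2,
    with backtracking line search; returns position and misfit history."""
    p = np.asarray(p0, float)
    def phi(q):
        d2 = np.sum((detectors - q) ** 2, axis=1)
        return np.sum((s0 / d2 - measured) ** 2)
    hist = [phi(p)]
    for _ in range(n_iter):
        diff = detectors - p
        d2 = np.sum(diff ** 2, axis=1)
        resid = s0 / d2 - measured
        # closed-form gradient of phi (stand-in for the adjoint calculation)
        grad = np.sum((4.0 * s0 * resid / d2**2)[:, None] * diff, axis=0)
        step = 1.0
        while step > 1e-12 and phi(p - step * grad) >= phi(p):
            step *= 0.5                     # backtrack until the misfit drops
        if phi(p - step * grad) < phi(p):
            p = p - step * grad
        hist.append(phi(p))
    return p, np.array(hist)
```

The backtracking guard guarantees the objective never increases, mirroring the convergence-criterion logic in the abstract.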
An implementation of differential search algorithm (DSA) for inversion of surface wave data
NASA Astrophysics Data System (ADS)
Song, Xianhai; Li, Lei; Zhang, Xueqiang; Shi, Xinchun; Huang, Jianquan; Cai, Jianchao; Jin, Si; Ding, Jianping
2014-12-01
Surface wave dispersion analysis is widely used in geophysics to infer near-surface shear (S)-wave velocity profiles for a wide variety of applications. However, inversion of surface wave data is challenging for most local-search methods due to its high nonlinearity and multimodality. In this work, we propose and implement a new Rayleigh wave dispersion curve inversion scheme based on the differential search algorithm (DSA), one of the recently developed swarm-intelligence-based algorithms. DSA is inspired by the seasonal migration behavior of living species and is designed for highly nonlinear, multivariable, and multimodal optimization problems. The proposed inverse procedure is applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and stability of DSA, four noise-free and four noisy synthetic data sets are first inverted. Then, the performance of DSA is compared with that of genetic algorithms (GA) on two noise-free synthetic data sets. Finally, a real-world example from a waste disposal site in NE Italy is inverted to examine the applicability and robustness of the proposed approach on surface wave data, and the performance of DSA is again compared against that of GA on the real data to further evaluate the inverse procedure described here. Simulation results from both synthetic and actual field data demonstrate that the differential search algorithm (DSA) applied to nonlinear inversion of surface wave data performs well in terms of both accuracy and convergence speed. The great advantages of DSA are that the algorithm is simple, robust and easy to implement, with few control parameters to tune.
Mixed-radix Algorithm for the Computation of Forward and Inverse MDCT
Wu, Jiasong; Shu, Huazhong; Senhadji, Lotfi; Luo, Limin
2008-01-01
The modified discrete cosine transform (MDCT) and inverse MDCT (IMDCT) are two of the most computationally intensive operations in MPEG audio coding standards. A new mixed-radix algorithm for efficiently computing the MDCT/IMDCT is presented. The proposed mixed-radix MDCT algorithm is composed of two recursive algorithms. The first algorithm, called the radix-2 decimation in frequency (DIF) algorithm, is obtained by decomposing an N-point MDCT into two MDCTs of length N/2. The second algorithm, called the radix-3 decimation in time (DIT) algorithm, is obtained by decomposing an N-point MDCT into three MDCTs of length N/3. Since the proposed MDCT algorithm is also expressed in the form of a simple sparse matrix factorization, the corresponding IMDCT algorithm can be easily derived by simply transposing the matrix factorization. Comparison of the proposed algorithm with some existing ones shows that our proposed algorithm is more suitable for parallel implementation and especially suitable for layer III of MPEG-1 and MPEG-2 audio encoding and decoding. Moreover, the proposed algorithm can be easily extended to the multidimensional case by using the vector-radix method. PMID:21258639
NASA Astrophysics Data System (ADS)
Venkata Rao, R.; Patel, Vivek
2012-08-01
This study explores the use of teaching-learning-based optimization (TLBO) and artificial bee colony (ABC) algorithms for determining the optimum operating conditions of combined Brayton and inverse Brayton cycles. Maximization of thermal efficiency and specific work of the system are considered as the objective functions and are treated simultaneously for multi-objective optimization. The upper cycle pressure ratio and the bottom cycle expansion pressure of the system are considered as design variables. An application example is presented to demonstrate the effectiveness and accuracy of the proposed algorithms. The results of optimization using the proposed algorithms are validated by comparing with those obtained by using the genetic algorithm (GA) and particle swarm optimization (PSO) on the same example, and improvement in the results is obtained by the proposed algorithms. The effect of varying the algorithm parameters on convergence and on the fitness values of the objective functions is also reported.
SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2014-01-01
Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. Benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al. [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-08-19
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
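The projection-plus-recycling idea can be sketched as follows: build one Krylov basis for J^T J with starting vector J^T r, then reuse the small projected system for every damping parameter. This is a hedged dense-NumPy sketch, not the MADS/Julia implementation:

```python
import numpy as np

def arnoldi(A, b, k):
    """Orthonormal basis of span{b, Ab, ..., A^(k-1) b} with reorthogonalization."""
    Q = np.zeros((b.size, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        v = A @ Q[:, j - 1]
        v -= Q[:, :j] @ (Q[:, :j].T @ v)
        v -= Q[:, :j] @ (Q[:, :j].T @ v)   # second pass for numerical safety
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def lm_steps(J, r, lambdas, k):
    """Levenberg-Marquardt steps (J^T J + lam I) x = J^T r for several
    damping values lam, reusing one k-dimensional Krylov subspace."""
    A, b = J.T @ J, J.T @ r
    Q = arnoldi(A, b, k)          # built once, recycled for every lam
    H, g = Q.T @ A @ Q, Q.T @ b
    return [Q @ np.linalg.solve(H + lam * np.eye(k), g) for lam in lambdas]
```

With k equal to the number of parameters the projected solve is exact; the savings come from choosing k much smaller while each new damping parameter costs only a k-by-k solve.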
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Tang, Min
2007-03-01
Computing epicardial potentials from body surface potentials constitutes one form of ill-posed inverse problem of electrocardiography (ECG). To solve this ECG inverse problem, the Tikhonov regularization and truncated singular-value decomposition (TSVD) methods have been commonly used to overcome the ill-posed property by imposing constraints on the magnitudes or derivatives of the computed epicardial potentials. Such direct regularization methods, however, are impractical when the transfer matrix is large. The least-squares QR (LSQR) method, one of the iterative regularization methods based on Lanczos bidiagonalization and QR factorization, has been shown to be numerically more reliable in various circumstances than the other methods considered. This LSQR method, however, to our knowledge, has not been introduced and investigated for the ECG inverse problem. In this paper, the regularization properties of the Krylov subspace iterative LSQR method for solving the ECG inverse problem were investigated. Due to the 'semi-convergence' property of the LSQR method, the L-curve method was used to determine the stopping iteration number. The performance of the LSQR method for solving the ECG inverse problem was also evaluated based on a realistic heart-torso model simulation protocol. The results show that the inverse solutions recovered by the LSQR method were more accurate than those recovered by the Tikhonov and TSVD methods. In addition, by combining LSQR with genetic algorithms (GA), the performance can be improved further. This suggests that their combination may provide a good scheme for solving the ECG inverse problem.
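For reference, the two direct regularization methods the abstract compares can be written as SVD filter factors; LSQR's early stopping plays an analogous spectral-filtering role. A minimal sketch (generic, not the ECG-specific transfer matrix):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD: keep only the k largest singular components."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

def tikhonov_solve(A, b, lam):
    """Tikhonov regularization via SVD filter factors s/(s^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b))
```

Both become impractical when A is too large to factorize, which is exactly the regime where the iterative LSQR method (with the L-curve choosing the stopping iteration) is preferred.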
ERIC Educational Resources Information Center
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
NASA Astrophysics Data System (ADS)
Chen, Chao; Xia, Jianghai; Liu, Jiangping; Feng, Guangding
2006-03-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches of exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability by a mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant
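The hybrid-encoding idea (crossover on a binary code, mutation in the decimal/real code, with elitism) can be sketched as follows; the 16-bit gene encoding, population size, and mutation scale are illustrative assumptions, not HEGA's actual settings:

```python
import numpy as np

BITS = 16

def encode(x, lo, hi):
    """Map a real gene onto a 16-bit integer grid."""
    return int(round((x - lo) / (hi - lo) * (2**BITS - 1)))

def decode(i, lo, hi):
    return lo + i / (2**BITS - 1) * (hi - lo)

def binary_crossover(a, b, rng, lo, hi):
    """Bit-mask crossover on the binary encoding of each gene."""
    child = np.empty_like(a)
    for g in range(a.size):
        ia, ib = encode(a[g], lo, hi), encode(b[g], lo, hi)
        mask = rng.integers(0, 2**BITS)
        child[g] = decode((ia & mask) | (ib & ~mask & (2**BITS - 1)), lo, hi)
    return child

def hybrid_ga_minimize(f, n_genes, lo, hi, n_pop=30, n_gen=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (n_pop, n_genes))
    fit = np.array([f(x) for x in pop])
    history = []
    for _ in range(n_gen):
        order = np.argsort(fit)
        pop, fit = pop[order], fit[order]
        history.append(fit[0])
        new = [pop[0].copy()]                       # elitism: keep the best
        while len(new) < n_pop:
            i, j = rng.integers(0, n_pop // 2, 2)   # parents from better half
            child = binary_crossover(pop[i], pop[j], rng, lo, hi)
            child += rng.normal(0.0, 0.05 * (hi - lo), n_genes)  # decimal mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
        fit = np.array([f(x) for x in pop])
    best = np.argmin(fit)
    return pop[best], float(fit[best]), history
```

Elitism guarantees the best fitness is non-increasing across generations, while crossover and mutation operate in different encodings, which is the core of the hybrid scheme.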
NASA Astrophysics Data System (ADS)
Dasch, Cameron J.
1992-03-01
It is shown that the Abel inversion, onion-peeling, and filtered backprojection methods can be intercompared without assumptions about the object being deconvolved. If the projection data are taken at equally spaced radial positions, the deconvolved field is given by weighted sums of the projections divided by the data spacing. The weighting factors are independent of the data spacing. All the methods are remarkably similar and have Abelian behavior: the field at a radial location is primarily determined by the weighted differences of a few projections around the radial position. Onion-peeling and an Abel inversion using two-point interpolation are similar. When the Shepp-Logan filtered backprojection method is reduced to one dimension, it is essentially identical to an Abel inversion using three-point interpolation. The weighting factors directly determine the relative noise performance: the three-point Abel inversion is the best, while onion peeling is the worst with approximately twice the noise. Based on ease of calculation, robustness, and noise, the three-point Abel inversion is recommended.
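As a rough, self-contained sketch of the onion-peeling scheme compared above (equally spaced radial positions, piecewise-constant annular shells; these are not Dasch's exact weighting factors, which the paper tabulates):

```python
import math

def onion_peel(projection, dr):
    """Onion-peeling Abel deconvolution of projection data P(y_i)
    taken at equally spaced lateral positions y_i = i*dr.
    Assumes the field is constant within annular shells."""
    n = len(projection)

    # Chord length of shell j (outer radius (j+1)*dr) at lateral offset i*dr.
    def chord(i, j):
        outer = math.sqrt((j + 1) ** 2 - i ** 2) * dr
        inner = math.sqrt(j ** 2 - i ** 2) * dr if j > i else 0.0
        return 2.0 * (outer - inner)

    field = [0.0] * n
    # Peel from the outermost shell inward: this is back-substitution
    # on an upper-triangular system, so the field at a radius depends
    # only on a few projections around it, as the comparison notes.
    for i in reversed(range(n)):
        s = sum(chord(i, j) * field[j] for j in range(i + 1, n))
        field[i] = (projection[i] - s) / chord(i, i)
    return field
```

For a constant unit field the chord sums telescope to P(y_i) = 2*dr*sqrt(n^2 - i^2), and the peel recovers the field exactly, which makes a convenient self-check.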
A simulation based method to assess inversion algorithms for transverse relaxation data
NASA Astrophysics Data System (ADS)
Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong
2008-04-01
NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) at low field. Most samples have a distribution of T2 values, and extracting this distribution of T2s from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching the inversion results from a series of true decay data and noisy simulated data. In addition to the simulation studies, the same approach was also applied to real experimental data to support the simulation results.
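A minimal illustration of the simulation side of this approach: synthetic CPMG data generated from a known (here discrete) T2 distribution, plus a baseline mono-exponential fit. UPEN itself is a far more elaborate regularized multiexponential inversion; this sketch only shows the data-generation and sanity-check machinery.

```python
import math
import random

def cpmg_decay(times, components, sigma=0.0, rng=None):
    """Simulate CPMG echo-train data: a sum of exponentials plus
    optional Gaussian noise. components = [(amplitude, T2), ...]."""
    rng = rng or random.Random(0)
    return [sum(a * math.exp(-t / t2) for a, t2 in components)
            + (rng.gauss(0.0, sigma) if sigma > 0 else 0.0)
            for t in times]

def fit_single_t2(times, data):
    """Baseline mono-exponential fit via log-linear least squares:
    ln y = ln a - t/T2. Only valid for strictly positive data."""
    ys = [math.log(y) for y in data]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times, ys)) / \
            sum((x - mx) ** 2 for x in times)
    return -1.0 / slope  # T2 estimate
```

On noiseless single-exponential data the baseline fit recovers T2 exactly; comparing such recovered values against the truth across noise levels is the essence of the assessment strategy described above.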
NASA Astrophysics Data System (ADS)
Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; L, X.; Zhang, Q. S.
2012-12-01
The traditional inversion method is the most commonly used procedure for three-dimensional (3D) resistivity inversion; it usually linearizes the problem and solves it by iteration. However, its accuracy often depends on the initial model, which can trap the inversion in local optima and even produce a poor result. Non-linear methods are a feasible way to eliminate the dependence on the initial model. However, for large problems such as 3D resistivity inversion with more than a thousand inversion parameters, the main challenges of non-linear methods are premature convergence and quite low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, smoothness and inequality constraints are both applied to the objective function, by which the degree of non-uniqueness and ill-conditioning is decreased. Some measures are adopted from previous work to maintain the diversity and stability of the GA, e.g. real coding and adaptive adjustment of the crossover and mutation probabilities. A method for generating an approximately uniform initial population is then proposed in this paper, which produces a uniformly distributed initial generation and eliminates the dependence on the initial model. Further, a mutation direction control method is presented based on the joint algorithm, in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment to maintain a better search direction than the traditional GA with an uncontrolled mutation operation. By this method, the mutation direction is optimized and the search efficiency is improved greatly. The performance of the improved GA is evaluated by comparison with traditional inversion results in a synthetic example and with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate
NASA Astrophysics Data System (ADS)
Song, Xianhai; Li, Lei; Zhang, Xueqiang; Huang, Jianquan; Shi, Xinchun; Jin, Si; Bai, Yiming
2014-10-01
In recent years, Rayleigh waves have been gaining popularity for obtaining near-surface shear (S)-wave velocity profiles. However, inversion of Rayleigh wave dispersion curves is challenging for most local-search methods due to their high nonlinearity and multimodality. In this study, we proposed and tested a new Rayleigh wave dispersion curve inversion scheme based on the differential evolution (DE) algorithm. DE is a novel stochastic search approach that possesses several attractive advantages: (1) it is capable of handling non-differentiable, non-linear and multimodal objective functions because of its stochastic search strategy; (2) it is parallelizable, coping with computation-intensive objective functions without being time consuming by using a vector population where the stochastic perturbation of the population vectors can be done independently; (3) it is easy to use, i.e. few control variables steer the minimization/maximization through DE's self-organizing scheme; and (4) it has good convergence properties. The proposed inverse procedure was applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and stability of DE, we firstly inverted four noise-free and four noisy synthetic data sets. Secondly, we investigated the effects of the number of layers on the DE algorithm and made an uncertainty appraisal analysis with it. Thirdly, we made a comparative analysis with genetic algorithms (GA) on a synthetic data set to further investigate the performance of the proposed inverse procedure. Finally, we inverted a real-world example from a waste disposal site in NE Italy to examine the applicability of DE to Rayleigh wave dispersion curves, and again compared the performance of the proposed approach to that of GA. Results from both synthetic and actual field data demonstrate that differential evolution algorithm applied
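The DE/rand/1/bin scheme underlying such inversions is compact enough to sketch. This is a generic minimizer on a toy objective, not the dispersion-curve forward model:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=150, seed=0):
    """Minimal DE/rand/1/bin minimizer for a real-valued objective f.
    bounds = [(lo, hi), ...] per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donors, none equal to the target vector i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

Because each trial vector is compared only against its own parent, the population vectors can be perturbed and evaluated independently, which is exactly the parallelizability advantage cited above.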
Practical analytical backscatter error bars for elastic one-component lidar inversion algorithm.
Rocadenbosch, Francesc; Reba, M Nadzri Md; Sicard, Michaël; Comerón, Adolfo
2010-06-10
We present an analytical formulation to compute the total-backscatter range-dependent error bars from the well-known Klett's elastic-lidar inversion algorithm. A combined error-propagation and statistical formulation approach is used to assess inversion errors in response to the following error sources: observation noise (i.e., signal-to-noise ratio) in the reception channel, the user's uncertainty in the backscatter calibration, and in the (range-dependent) total extinction-to-backscatter ratio provided. The method is validated using a Monte Carlo procedure, where the error bars are computed by inversion of a large population of noisy generated lidar signals, for total optical depths tau < or = 5 and typical user uncertainties, all of which yield a practical tool to compute the sought-after error bars.
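The Monte Carlo validation idea can be sketched on a toy problem: push many noisy realizations of a measurement through an inversion and take the sample spread as the error bar, then compare it against linear error propagation. The transmission-to-optical-depth inversion below is a hypothetical stand-in, not Klett's algorithm itself:

```python
import math
import random

def mc_error_bar(y0, sigma_y, invert, n=20000, seed=0):
    """Monte Carlo error bar: perturb the measurement y0 with Gaussian
    noise, push each sample through the inversion, and return the
    sample standard deviation of the inverted quantity."""
    rng = random.Random(seed)
    vals = [invert(y0 + rng.gauss(0.0, sigma_y)) for _ in range(n)]
    mean = sum(vals) / n
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / (n - 1))

# Toy two-way transmission measurement y = exp(-2*tau), inverted as
# tau = -0.5*ln(y); analytic error bar from |d tau / dy| * sigma_y.
invert = lambda y: -0.5 * math.log(y)
y0, sigma_y = math.exp(-1.0), 0.005
analytic = sigma_y / (2.0 * y0)
mc = mc_error_bar(y0, sigma_y, invert)
```

For small noise the two estimates agree closely; the analytical formulation in the paper is the attraction precisely because it avoids the large Monte Carlo population used here for validation.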
Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data
NASA Astrophysics Data System (ADS)
Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.
2011-12-01
We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP case, discretization is based on rectangular cells, where each cell has an unknown resistivity in DC modeling, resistivity and chargeability in time-domain IP modeling, and complex resistivity in spectral IP modeling. The governing partial-differential equations are solved with the finite element method, which can be applied to both the real and complex variables that are solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied, such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions and inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple Graphical User Interface. Although the code is parallelized for multi
NASA Astrophysics Data System (ADS)
Wang, Youming; Chen, Xuefeng; He, Zhengjia
2011-02-01
Structural eigenvalues have been broadly applied in modal analysis, damage detection, vibration control, etc. In this paper, the interpolating multiwavelets are custom designed based on stable completion method to solve structural eigenvalue problems. The operator-orthogonality of interpolating multiwavelets gives rise to highly sparse multilevel stiffness and mass matrices of structural eigenvalue problems and permits the incremental computation of the eigenvalue solution in an efficient manner. An adaptive inverse iteration algorithm using the interpolating multiwavelets is presented to solve structural eigenvalue problems. Numerical examples validate the accuracy and efficiency of the proposed algorithm.
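The core of inverse iteration, here in a plain unshifted dense form rather than the adaptive multiwavelet setting of the paper: repeatedly solving A y = x drives x toward the eigenvector of the smallest-magnitude eigenvalue, which is the mode of interest in structural analysis.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def inverse_iteration(A, iters=50):
    """Unshifted inverse iteration: converges to the eigenpair of A
    with the smallest-magnitude eigenvalue."""
    n = len(A)
    x = [1.0 / (i + 1) for i in range(n)]  # generic start vector
    for _ in range(iters):
        y = solve(A, x)
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    # Rayleigh quotient as the eigenvalue estimate
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(x[i] * Ax[i] for i in range(n))
    return lam, x
```

The sparsity the multiwavelet basis brings to the stiffness and mass matrices accelerates exactly the repeated solve inside this loop.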
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
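A stripped-down version of the first heuristic's idea, level-based priority-list scheduling on p identical processors. Communication costs are ignored here, and the paper's weighted bipartite matching and simulated annealing refinements are not reproduced:

```python
def level(task, succ, times, memo=None):
    """Level = longest path (sum of execution times) from task to an exit."""
    memo = memo if memo is not None else {}
    if task not in memo:
        memo[task] = times[task] + max(
            (level(s, succ, times, memo) for s in succ.get(task, [])),
            default=0)
    return memo[task]

def list_schedule(tasks, succ, times, p):
    """Highest-level-first list scheduling on p identical processors.
    succ maps a task to its successors; precedence is respected."""
    pred = {t: set() for t in tasks}
    for t, ss in succ.items():
        for s in ss:
            pred[s].add(t)
    finish = {}                # task -> finish time
    proc_free = [0.0] * p      # next free time per processor
    done = set()
    order = sorted(tasks, key=lambda t: -level(t, succ, times))
    while len(done) < len(tasks):
        # pick the highest-level task whose predecessors are all done
        t = next(t for t in order if t not in done and pred[t] <= done)
        ready_at = max((finish[q] for q in pred[t]), default=0.0)
        k = min(range(p), key=lambda i: proc_free[i])
        start = max(proc_free[k], ready_at)
        finish[t] = start + times[t]
        proc_free[k] = finish[t]
        done.add(t)
    return finish, max(finish.values())
```

For a diamond-shaped graph A -> {B, C} -> D with unit times on two processors, B and C run in parallel and the makespan is 3, the obvious optimum.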
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Yong; Tan, Han-Dong; Wang, Kun-Peng; Lin, Chang-Hong; Zhang, Bin; Xie, Mao-Bi
2016-03-01
Traditional two-dimensional (2D) complex resistivity forward modeling is based on Poisson's equation but spectral induced polarization (SIP) data are the coproducts of the induced polarization (IP) and the electromagnetic induction (EMI) effects. This is especially true under high frequencies, where the EMI effect can exceed the IP effect. 2D inversion that only considers the IP effect reduces the reliability of the inversion data. In this paper, we derive differential equations using Maxwell's equations. With the introduction of the Cole-Cole model, we use the finite-element method to conduct 2D SIP forward modeling that considers the EMI and IP effects simultaneously. The data-space Occam method, in which different constraints to the model smoothness and parametric boundaries are introduced, is then used to simultaneously obtain the four parameters of the Cole-Cole model using multi-array electric field data. This approach not only improves the stability of the inversion but also significantly reduces the solution ambiguity. To improve the computational efficiency, message passing interface programming was used to accelerate the 2D SIP forward modeling and inversion. Synthetic datasets were tested using both serial and parallel algorithms, and the tests suggest that the proposed parallel algorithm is robust and efficient.
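The Cole-Cole model whose four parameters are inverted above is compact enough to write down directly; a sketch using the standard parameterization (DC resistivity rho0, chargeability m, time constant tau, frequency exponent c):

```python
def cole_cole(omega, rho0, m, tau, c):
    """Cole-Cole complex resistivity:
    rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)**c)))."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))
```

Two limits make a quick sanity check: rho -> rho0 at zero frequency and rho -> rho0 * (1 - m) at high frequency, with the IP phase response in between.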
Estimates of the trace of the inverse of a symmetric matrix using the modified Chebyshev algorithm
NASA Astrophysics Data System (ADS)
Meurant, Gérard
2009-07-01
In this paper we study how to compute an estimate of the trace of the inverse of a symmetric matrix by using Gauss quadrature and the modified Chebyshev algorithm. As auxiliary polynomials we use the shifted Chebyshev polynomials. Since this can be too costly in computer storage for large matrices we also propose to compute the modified moments with a stochastic approach due to Hutchinson (Commun Stat Simul 18:1059-1076, 1989).
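Hutchinson's stochastic estimator at the heart of this approach is easy to sketch. In the example the matrix is small enough to apply A^{-1} exactly, whereas the paper estimates each quadratic form z^T A^{-1} z by Gauss quadrature via the modified Chebyshev algorithm:

```python
import random

def hutchinson_trace(matvec, n, probes=2000, seed=0):
    """Hutchinson's estimator: tr(B) ~ (1/M) * sum_k z_k^T B z_k,
    with z_k Rademacher (+/-1) probe vectors."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(probes):
        z = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        Bz = matvec(z)
        total += sum(zi * bi for zi, bi in zip(z, Bz))
    return total / probes

# Illustration: tr(A^{-1}) for A = [[2, 1], [1, 2]], whose inverse
# (1/3) * [[2, -1], [-1, 2]] is written out explicitly here. For large
# matrices each z -> z^T A^{-1} z would instead be estimated by
# quadrature, as in the paper.
inv_matvec = lambda z: [(2 * z[0] - z[1]) / 3.0, (-z[0] + 2 * z[1]) / 3.0]
estimate = hutchinson_trace(inv_matvec, 2)  # exact value: tr(A^{-1}) = 4/3
```

The estimator's variance comes only from the off-diagonal entries, so a few thousand probes suffice for this well-conditioned toy case.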
NASA Astrophysics Data System (ADS)
Liu, Qing Huo; Zhang, Zhong Qing
2000-07-01
We invert for the axisymmetric conductivity distribution from borehole electromagnetic induction measurements using a two-step linear inversion method based on a fast Fourier and Hankel transform enhanced extended Born approximation. In this method, the inverse problem is first cast as an underdetermined linear least-norm problem for the induced electric current density; from the solution of this induced current density, the unknown conductivity distribution is then obtained by solving an over-determined linear problem using the newly developed, fast Fourier and Hankel transform enhanced extended Born approximation. Numerical results show that this inverse method is applicable to a very high conductivity contrast. It is a natural extension of the original two-step linear inversion method of Torres-Verdin and Habashy to axisymmetric media. In the first step, the CPU time costs O(N²). In the second step, the CPU time costs O(N log₂ N), where N is the number of unknowns. Because of the fast Fourier and Hankel transform algorithm, this inverse method is actually more efficient than the conventional, brute-force first-order Born approximation.
TOPICAL REVIEW: Inversion algorithms for large-scale geophysical electromagnetic measurements
NASA Astrophysics Data System (ADS)
Abubakar, A.; Habashy, T. M.; Li, M.; Liu, J.
2009-12-01
Low-frequency surface electromagnetic prospecting methods have been gaining a lot of interest because of their capabilities to directly detect hydrocarbon reservoirs and to complement seismic measurements for geophysical exploration applications. There are two types of surface electromagnetic surveys. The first is an active measurement where we use an electric dipole source towed by a ship over an array of seafloor receivers. This measurement is called the controlled-source electromagnetic (CSEM) method. The second is the magnetotelluric (MT) method driven by natural sources. This passive measurement also uses an array of seafloor receivers. Both surface electromagnetic methods measure electric and magnetic field vectors. In order to extract maximal information from these CSEM and MT data we employ a nonlinear inversion approach in their interpretation. We present two types of inversion approaches. The first approach is the so-called pixel-based inversion (PBI) algorithm. In this approach the investigation domain is subdivided into pixels, and by using an optimization process the conductivity distribution inside the domain is reconstructed. The optimization process uses the Gauss-Newton minimization scheme augmented with various forms of regularization. To automate the algorithm, the regularization term is incorporated using a multiplicative cost function. This PBI approach has demonstrated its ability to retrieve reasonably good conductivity images. However, the reconstructed boundaries and conductivity values of the imaged anomalies are usually not quantitatively resolved. Nevertheless, the PBI approach can provide useful information on the location, the shape and the conductivity of the hydrocarbon reservoir. The second method is the so-called model-based inversion (MBI) algorithm, which uses a priori information on the geometry to reduce the number of unknown parameters and to improve the quality of the reconstructed conductivity image. This MBI approach can
NASA Astrophysics Data System (ADS)
McKinna, Lachlan I. W.; Fearns, Peter R. C.; Weeks, Scarla J.; Werdell, P. Jeremy; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2015-03-01
A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(488) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.
NASA Technical Reports Server (NTRS)
Mckinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2015-01-01
A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(488) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.
NASA Astrophysics Data System (ADS)
Liu, Lisheng; Jiang, Zhenhua; Wang, Tingfeng; Guo, Jin
2015-03-01
An angular spectrum propagation (ASP) algorithm with a scaling parameter to simulate optical diffraction propagation through optical systems is studied. The alterable observation size is obtained by adding the scaling parameter to the Collins formula. A direct mathematical inverse transformation of the ASP algorithm (IASP) is proposed to calculate the source optical field from the known observation optical field, and the results are shown to be more precise. The IASP algorithm is applied to perform phase retrieval, deriving the aberrations of optical systems from intensity profiles measured in the observation plane. The derived aberrations are fitted by Zernike polynomials under the constraint that the wavefront aberrations are smooth. Numerical simulations are performed to test the accuracy of this method.
NASA Astrophysics Data System (ADS)
Llacer, Jorge; Solberg, Timothy D.; Promberger, Claus
2001-10-01
This paper presents a description of tests carried out to compare the behaviour of five algorithms in inverse radiation therapy planning: (1) The Dynamically Penalized Likelihood (DPL), an algorithm based on statistical estimation theory; (2) an accelerated version of the same algorithm; (3) a new fast adaptive simulated annealing (ASA) algorithm; (4) a conjugate gradient method; and (5) a Newton gradient method. A three-dimensional mathematical phantom and two clinical cases have been studied in detail. The phantom consisted of a U-shaped tumour with a partially enclosed 'spinal cord'. The clinical examples were a cavernous sinus meningioma and a prostate case. The algorithms have been tested in carefully selected and controlled conditions so as to ensure fairness in the assessment of results. It has been found that all five methods can yield relatively similar optimizations, except when a very demanding optimization is carried out. For the easier cases, the differences are principally in robustness, ease of use and optimization speed. In the more demanding case, there are significant differences in the resulting dose distributions. The accelerated DPL emerges as possibly the algorithm of choice for clinical practice. An appendix describes the differences in behaviour between the new ASA method and the one based on a patent by the Nomos Corporation.
NASA Astrophysics Data System (ADS)
Wahyudi, Eko Januari
2013-09-01
As soft-computing techniques advance in the oil and gas industry, the Genetic Algorithm (GA) is also contributing to geophysical inverse problems, achieving better results and computational efficiency. In this paper, I show the progress of my work on inverse modeling of time-lapse gravity data using value encoding with an alphabet formulation. The alphabet formulation is designed to characterize positive density change (+Δρ) and negative density change (-Δρ) with respect to a reference value (0 g/cc). The inversion utilizes discrete model parameters and is computed with GA as the optimization algorithm. The challenge of working with GA is the long computational time, so the GA design steps in this paper are described through performance tests of the GA operators. The performance of several combinations of GA operators (selection, crossover, mutation, and replacement) was tested with a synthetic model of a single-layer reservoir. Analysis of a sufficient number of samples shows the SUS-MPCO-QSA/G-ND combination to be the most promising. A quantitative solution with a higher confidence level for characterizing sharp boundaries of density-change zones was obtained by averaging a sufficient number of model samples.
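A minimal value-encoded GA of this flavor can be sketched with a toy three-symbol alphabet standing in for {-Δρ, 0, +Δρ} and a mismatch count standing in for the gravity misfit. The specific SUS-MPCO-QSA/G-ND operators tested in the paper are not reproduced:

```python
import random

def value_ga(fitness, alphabet, length, pop=30, gens=100, pmut=0.1, seed=0):
    """Minimal value-encoded GA: tournament selection, uniform
    crossover, per-symbol mutation, elitist replacement.
    Minimizes `fitness` over strings of symbols from `alphabet`."""
    rng = random.Random(seed)
    P = [[rng.choice(alphabet) for _ in range(length)] for _ in range(pop)]

    def tournament():
        a, b = rng.sample(P, 2)
        return a if fitness(a) <= fitness(b) else b

    for _ in range(gens):
        best = min(P, key=fitness)
        Q = [best[:]]  # elitism: carry the best model forward unchanged
        while len(Q) < pop:
            p1, p2 = tournament(), tournament()
            child = [p1[i] if rng.random() < 0.5 else p2[i]
                     for i in range(length)]
            child = [rng.choice(alphabet) if rng.random() < pmut else g
                     for g in child]
            Q.append(child)
        P = Q
    return min(P, key=fitness)
```

Averaging many such best-fit models, as the abstract describes, is what sharpens the boundaries of the density-change zones.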
LOTOS code for local earthquake tomographic inversion: benchmarks for testing tomographic algorithms
NASA Astrophysics Data System (ADS)
Koulakov, I. Yu.
2009-04-01
We present the LOTOS-07 code for performing local earthquake tomographic (LET) inversion, which is freely available at www.ivan-art.com/science/LOTOS_07. The initial data for the code are the arrival times from local seismicity and the coordinates of the stations. It does not require any information about the sources. The calculations start from absolute location of sources and estimation of an optimal 1D velocity model. The sources are then relocated simultaneously with the 3D velocity distribution during iterative coupled tomographic inversions. The code allows results based on node or cell parameterizations to be compared. Both Vp-Vs and Vp - Vp/Vs inversion schemes can be performed by the LOTOS code. The capability of the LOTOS code is illustrated with various real and synthetic datasets. Some of the tests are used to disprove existing stereotypes of LET schemes, such as the use of trade-off curves to evaluate damping parameters and of the GAP criterion to select events. We also present a series of synthetic datasets with unknown sources and velocity models (www.ivan-art.com/science/benchmark) that can be used as blind benchmarks for testing different tomographic algorithms. We encourage other users of tomography algorithms to join the program of creating benchmarks that can be used to check existing codes. The program codes and testing datasets will be freely distributed during the poster presentation.
NASA Astrophysics Data System (ADS)
Partheepan, G.; Sehgal, D. K.; Pandey, R. K.
2006-12-01
An inverse finite element algorithm is established to extract tensile constitutive properties such as Young's modulus, yield strength and the true stress-true strain diagram of a material in a virtually non-destructive manner. Standard test methods for predicting mechanical properties require the removal of large material samples from the in-service component, which is impractical. To circumvent this, a new dumb-bell-shaped miniature specimen has been designed and fabricated which can be used to evaluate the properties of a material or component. Test fixtures were also developed to perform a tension test on this proposed miniature specimen in a testing machine. The studies were conducted on low-carbon steel, die steel and medium-carbon steel. The output of the miniature test, namely the load-elongation diagram, is obtained and used in the proposed inverse finite element algorithm to find the material properties. Inverse finite element modelling is carried out using a 2D plane-stress analysis. The predicted results are found to be in good agreement with the experimental results.
NASA Astrophysics Data System (ADS)
David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias
2012-03-01
Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning-beam digital X-ray system - an inverse geometry fluoroscopy system with an X-ray source of 9,000 focal spots and a small photon-counting detector. 90 fluoroscopic projections or "superviews" spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated from flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
A nonlinear model reference adaptive inverse control algorithm with pre-compensator
NASA Astrophysics Data System (ADS)
Xiao, Bin; Yang, Tie-Jun; Liu, Zhi-Gang
2005-12-01
In this paper, the reduced-order modeling (ROM) technology and its corresponding linear theory are extended from linear dynamic systems to nonlinear ones, and H∞ control theory is employed in the frequency domain to design a nonlinear system's pre-compensator in a special way. The adaptive model inverse control (AMIC) theory for coping with nonlinear systems is improved as well, yielding the model reference adaptive inverse control with pre-compensator (PCMRAIC). The aim of the algorithm is to construct a control strategy as a whole. As a practical example of the application, a numerical simulation has been carried out in MATLAB, and the numerical results are given. The proposed strategy realizes linearization control of a nonlinear dynamic system and achieves good performance in dealing with the nonlinear system.
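Adaptive inverse control in its simplest guise can be sketched with an LMS filter learning the delayed inverse of a toy FIR channel. This is an illustration of the adaptive-inverse idea only; PCMRAIC itself is a nonlinear, pre-compensated scheme well beyond this sketch:

```python
import random

def lms_inverse(channel, taps=8, delay=4, mu=0.05, n=4000, seed=0):
    """LMS adaptive inverse modeling: adapt an FIR filter w so that
    w applied to the channel output approximates the channel input
    delayed by `delay` samples."""
    rng = random.Random(seed)
    w = [0.0] * taps
    xbuf = [0.0] * (delay + 1)  # input history (for the delayed desired signal)
    ybuf = [0.0] * taps         # channel-output history (filter tap inputs)
    err2 = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        xbuf = [x] + xbuf[:-1]
        # channel output: FIR convolution with the channel impulse response
        y = sum(h * xb for h, xb in zip(channel, xbuf))
        ybuf = [y] + ybuf[:-1]
        yhat = sum(wi, ) if False else sum(wi * yi for wi, yi in zip(w, ybuf))
        e = xbuf[delay] - yhat              # desired = delayed input
        w = [wi + mu * e * yi for wi, yi in zip(w, ybuf)]  # LMS update
        err2.append(e * e)
    return w, sum(err2[-500:]) / 500.0      # trailing mean-squared error
```

For a minimum-phase channel such as [1.0, 0.5], the trailing error settles near the truncation floor of the finite-length inverse, which is the behavior an adaptive inverse controller relies on.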
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
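The importance-sampling core of such a scheme (Gaussian-mixture proposal, self-normalized weights) can be sketched on a one-dimensional bimodal target; the adaptive construction of the mixture and the polynomial chaos surrogate are omitted:

```python
import math
import random

def gm_importance_sampling(log_target, means, sigmas, weights,
                           n=20000, seed=0):
    """Self-normalized importance sampling with a Gaussian-mixture
    proposal. Returns the estimated posterior mean of x and of x^2."""
    rng = random.Random(seed)

    def q_pdf(x):
        return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2)
                   / (s * math.sqrt(2 * math.pi))
                   for w, m, s in zip(weights, means, sigmas))

    xs, ws = [], []
    for _ in range(n):
        k = rng.choices(range(len(weights)), weights=weights)[0]
        x = rng.gauss(means[k], sigmas[k])
        xs.append(x)
        ws.append(math.exp(log_target(x)) / q_pdf(x))  # importance weight
    Z = sum(ws)
    mean = sum(w * x for w, x in zip(ws, xs)) / Z
    second = sum(w * x * x for w, x in zip(ws, xs)) / Z
    return mean, second
```

With one proposal component per posterior mode (here at -3 and +3), the weights stay well behaved; a single-Gaussian proposal on the same target would miss a mode entirely, which is exactly the failure the adaptive GM construction guards against.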
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representation of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
Bayesian inversion for finite fault earthquake source models I—theory and algorithm
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.
2013-09-01
The estimation of finite fault earthquake source models is an inherently underdetermined problem: there is no unique solution to the inverse problem of determining the rupture history at depth as a function of time and space when our data are limited to observations at the Earth's surface. Bayesian methods allow us to determine the set of all plausible source model parameters that are consistent with the observations, our a priori assumptions about the physics of the earthquake source and wave propagation, and models for the observation errors and the errors due to the limitations in our forward model. Because our inversion approach does not require inverting any matrices other than covariance matrices, we can restrict our ensemble of solutions to only those models that are physically defensible while avoiding the need to restrict our class of models based on considerations of numerical invertibility. We only use prior information that is consistent with the physics of the problem rather than some artefice (such as smoothing) needed to produce a unique optimal model estimate. Bayesian inference can also be used to estimate model-dependent and internally consistent effective errors due to shortcomings in the forward model or data interpretation, such as poor Green's functions or extraneous signals recorded by our instruments. Until recently, Bayesian techniques have been of limited utility for earthquake source inversions because they are computationally intractable for problems with as many free parameters as typically used in kinematic finite fault models. Our algorithm, called cascading adaptive transitional metropolis in parallel (CATMIP), allows sampling of high-dimensional problems in a parallel computing framework. CATMIP combines the Metropolis algorithm with elements of simulated annealing and genetic algorithms to dynamically optimize the algorithm's efficiency as it runs. The algorithm is a generic Bayesian Markov Chain Monte Carlo sampler; it works
A new MCMC algorithm for seismic waveform inversion and corresponding uncertainty analysis
NASA Astrophysics Data System (ADS)
Hong, Tiancong; Sen, Mrinal K.
2009-04-01
It is advantageous to formulate an inverse problem in a Bayesian framework and solve it fully by stochastically constructing the posterior probability density (PPD) using Markov chain Monte Carlo (MCMC) algorithms. The estimated PPD can also be used to compute several measures of dispersion in the model space. However, in realistic applications, MCMC methods can be computationally expensive and may lead to inaccurate PPD estimation as well as uncertainty analysis due to the strong non-linearity and high dimensionality. In this paper, to address the fundamental issues of efficiency and accuracy in parameter estimation and PPD sampling, we incorporate some new developments into a standard genetic algorithm (GA) to design more powerful algorithms for practical geophysical inverse problems such as a non-linear pre-stack seismic waveform inversion. First, a multiscale real-coded hybrid GA is developed to facilitate exploitation of the model space for optimal parameters at a fine scale. It is demonstrated that, by using real-coding and especially multiscaling to trade information between the model vectors defined at different resolutions, we attain a substantial speed-up in computation and obtain accurate parameter estimations. This new optimization method is further adapted to a new multiscale GA-based MCMC method, in which multiple MCMC chains defined at different scales are run simultaneously in parallel. To gain the benefits of both the faster convergence of coarse scales and the greater detail of fine scales, realizations of chains at different scales are combined for intelligent proposals that facilitate exploration of the model space at the fine scale. In this study, the new MCMC is justified using an analytical example and its performance on PPD estimation, and uncertainty quantification is demonstrated using a non-linear seismic inverse problem. We find that incorporation of multiscaling in the Bayesian approach shows great promise in solving
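The baseline on which such multiscale extensions build is the standard single-chain Metropolis sampler. A minimal sketch on a toy nonlinear "waveform" model follows; the forward function, noise level, and prior bounds are all hypothetical stand-ins for a real seismic forward model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D inversion: recover m from data d = g(m), with g a nonlinear
# forward model (a hypothetical stand-in for seismic waveform modeling).
def forward(m):
    return np.sin(m) + 0.1 * m

m_true = 1.0
d_obs = forward(m_true)   # noise-free observation for simplicity
sigma = 0.1               # assumed data-error level

def log_posterior(m):
    # Gaussian likelihood with a flat prior on [-5, 5]
    if abs(m) > 5.0:
        return -np.inf
    return -0.5 * ((d_obs - forward(m)) / sigma) ** 2

# Standard Metropolis sampler (single chain, single scale)
m, samples = 0.0, []
lp = log_posterior(m)
for _ in range(20000):
    prop = m + 0.5 * rng.normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        m, lp = prop, lp_prop
    samples.append(m)
post = np.array(samples[5000:])   # discard burn-in
```

Because the forward model is non-monotonic, the resulting PPD is multimodal in m, yet every retained sample fits the data: the predicted data under the posterior concentrate around the observation.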
Genetic algorithm for the inverse problem in synthesis of fiber gratings
NASA Astrophysics Data System (ADS)
Skaar, Johannes; Risvik, Knut M.
1998-06-01
A new method for synthesis of fiber gratings with advanced characteristics is proposed. The method is based on an optimizing genetic algorithm, and facilitates the task of weighting the different requirements on the filter spectrum. A classical problem in applied physics and engineering fields is the inverse problem. An example of such a problem is to determine the fiber grating index modulation profile corresponding to a given reflection spectrum. This is not a trivial problem, and a variety of synthesis algorithms have been proposed. For weak gratings, the synthesis problem of fiber gratings reduces to an inverse Fourier transform of the reflection coefficient. This is known as the first-order Born approximation, and applies only to gratings for which the reflectivity is small. Another solution to this problem was found by Song and Shin, who solved the coupled Gel'fand-Levitan-Marchenko (GLM) integral equations that appear in the inverse scattering theory of quantum mechanics. Their method is exact, but is restricted to reflection coefficients that can be expressed as a rational function. An iterative solution to the GLM equations was found by Peral et al., yielding smoother coupling coefficients than the exact method. The algorithm converges relatively fast, and gives satisfying results even for high-reflectivity gratings. However, when specifying ideal, unachievable filter responses, it is desirable to have a weighting mechanism that makes it easier to weight the different requirements. For example, when synthesizing an optical bandpass filter, one may be interested in weighting linear phase more than sharp peaks, because the dispersion may be a more critical parameter. The iterative GLM method does not support such a mechanism in a satisfactory way.
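The weighting mechanism argued for above amounts to a weighted misfit inside the GA fitness function. The sketch below shows a minimal real-coded GA with a weighted spectral fitness; the forward model (a sum of fixed Gaussians) and the weights are toy stand-ins, not a grating simulator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy synthesis problem: find "profile" coefficients c so that a stand-in
# response model matches a target spectrum, with per-band weights expressing
# which requirements matter most (the weighting mechanism discussed above).
x = np.linspace(0.0, 1.0, 64)
target = np.exp(-((x - 0.5) / 0.08) ** 2)              # idealized bandpass shape
weights = np.where(np.abs(x - 0.5) < 0.15, 5.0, 1.0)   # weight the passband heavily

def response(c):
    # stand-in forward model: sum of fixed Gaussians with amplitudes c
    centers = np.linspace(0.1, 0.9, c.size)
    return sum(ci * np.exp(-((x - mu) / 0.1) ** 2) for ci, mu in zip(c, centers))

def fitness(c):
    return -np.sum(weights * (response(c) - target) ** 2)

# Minimal real-coded GA: truncation selection, blend crossover, Gaussian mutation
pop = rng.normal(0.0, 0.5, size=(60, 9))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-30:]]             # keep the best half
    kids = []
    for _ in range(60):
        a, b = parents[rng.choice(30, 2, replace=False)]
        kids.append(0.5 * (a + b) + rng.normal(0.0, 0.05, 9))  # blend + mutate
    pop = np.array(kids)
best = max(pop, key=fitness)
```

Changing the `weights` array is all it takes to trade, say, passband flatness against stopband rejection, which is exactly the flexibility the abstract highlights.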
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Taehoon; Park, Won-Kwang
2015-09-01
Numerous numerical simulation results have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied in limited-view inverse scattering problems. However, the application is somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for imaging a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of the first kind of integer order. Numerical experiments with noisy synthetic data support our investigation.
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, the classic Monte Carlo method often remains the method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which is for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain could still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
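The control-variate idea behind the second IRUQ algorithm can be sketched in a few lines: a reduced model built on a one-dimensional projection, whose mean is known analytically, corrects the plain Monte Carlo estimate. The QoI and the subspace below are synthetic illustrations, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# High-dimensional QoI that really depends on one latent direction (the SDR
# idea): f(x) = h(w.x) plus a small off-subspace residual.
d, n = 50, 4000
w = np.ones(d) / np.sqrt(d)

def qoi(x):
    t = x @ w
    return np.sin(t) + t ** 2 + 0.01 * x[:, 0]   # weak off-subspace term

x = rng.normal(size=(n, d))
f = qoi(x)

# Reduced model on the 1-D projection (a stand-in for the PC surrogate);
# its exact mean is known: E[sin(t)] = 0 and E[t^2] = 1 for t ~ N(0, 1).
t = x @ w
reduced = np.sin(t) + t ** 2
mu_reduced = 1.0

plain_mc = f.mean()
beta = np.cov(f, reduced)[0, 1] / reduced.var()
cv_est = plain_mc - beta * (reduced.mean() - mu_reduced)   # control variate
```

Because the reduced model captures almost all of the QoI's variance, the control-variate estimate is orders of magnitude more accurate than plain MC at the same n.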
Genetic algorithms-based inversion of multimode guided waves for cortical bone characterization
NASA Astrophysics Data System (ADS)
Bochud, N.; Vallet, Q.; Bala, Y.; Follet, H.; Minonzio, J.-G.; Laugier, P.
2016-10-01
Recent progress in quantitative ultrasound has exploited the multimode waveguide response of long bones. Measurements of the guided modes, along with suitable waveguide modeling, have the potential to infer strength-related factors such as stiffness (mainly determined by cortical porosity) and cortical thickness. However, the development of such model-based approaches is challenging, in particular because of the multiparametric nature of the inverse problem. Current estimation methods in the bone field rely on a number of assumptions for pairing the incomplete experimental data with the theoretical guided modes (e.g. semi-automatic selection and classification of the data). The availability of an alternative inversion scheme that is user-independent is highly desirable. Thus, this paper introduces an efficient inversion method based on genetic algorithms using multimode guided waves, in which the mode order is kept blind. Prior to its evaluation on bone, our proposal is validated using laboratory-controlled measurements on isotropic plates and bone-mimicking phantoms. The results show that the model parameters (i.e. cortical thickness and porosity) estimated from measurements on a few ex vivo human radii are in good agreement with the reference values derived from x-ray micro-computed tomography. Further, the cortical thickness estimated from in vivo measurements at one third from the distal end of the radius is in good agreement with the values delivered by site-matched high-resolution x-ray peripheral computed tomography.
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
NASA Astrophysics Data System (ADS)
Müller, Detlef; Böckmann, Christine; Kolgotin, Alexei; Schneidenbach, Lars; Chemyakin, Eduard; Rosemann, Julia; Znak, Pavel; Romanov, Anton
2016-10-01
We present a summary on the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005-0.1 or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested
NASA Astrophysics Data System (ADS)
Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji
2015-12-01
Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, which will be described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified for several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. As shown by Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main use of the obtained transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
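Pfeifer and Carraway's matrix form of CLV, V = (I - P/(1+i))^(-1) R, is straightforward to compute in the forward direction. The sketch below illustrates the inverse problem on a toy two-state model, recovering a retention probability from an observed CLV by grid search rather than the Flower Pollination Algorithm; the margin, cost, and discount-rate values are purely illustrative.

```python
import numpy as np

# Two-state customer/ex-customer model (retention p, win-back q), per-period
# net contribution R, discount rate i.  Pfeifer & Carraway's CLV vector:
#   V = (I - P/(1+i))^(-1) R, which is nonlinear in the transition probabilities.
def clv(p, q, margin=100.0, cost=20.0, i=0.1):
    P = np.array([[p, 1.0 - p],
                  [q, 1.0 - q]])
    R = np.array([margin, -cost])   # ex-customers still incur win-back spending
    return np.linalg.solve(np.eye(2) - P / (1.0 + i), R)

# Forward: CLV for given retention/acquisition rates
v = clv(p=0.8, q=0.3)

# Toy inverse: find the retention p matching an observed CLV by grid search
# (the paper uses the Flower Pollination Algorithm instead).
target = v[0]
ps = np.linspace(0.0, 1.0, 1001)
best_p = min(ps, key=lambda p: abs(clv(p, 0.3)[0] - target))
```

Because CLV is monotone in the retention probability here, the grid search recovers p exactly; in higher-dimensional schemes with many transition probabilities, this is where a metaheuristic earns its keep.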
Inverse problems with Poisson data: statistical regularization theory, applications and algorithms
NASA Astrophysics Data System (ADS)
Hohage, Thorsten; Werner, Frank
2016-09-01
Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results, we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years.
An implementation of differential evolution algorithm for inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan
2013-11-01
Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, including mutation, crossover and selection, similar to the genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, including DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to both noise-free and noisy synthetic data sets, two field data sets, observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India), were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective strategy for the parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic cases of SP were quite consistent with particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing (SA) without cooling to clarify uncertainties in the solutions. Comparison to the M-H algorithm shows that DE performs a fast approximate posterior sampling for the case of low-dimensional inverse geophysical problems.
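The DE/best/1 mutation with binomial crossover (strategy 1 above) can be sketched on a toy two-parameter curve-fitting problem; as in the study, no boundary constraints are enforced during evolution. The decay-curve forward model is an illustrative stand-in for an SP or VES forward model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy parameter estimation: fit (a, b) of a synthetic decay curve a*exp(-b*x).
x = np.linspace(0.0, 5.0, 50)
true = np.array([2.0, 0.7])
data = true[0] * np.exp(-true[1] * x)

def misfit(m):
    return np.sum((m[0] * np.exp(-m[1] * x) - data) ** 2)

NP, D, F, CR = 20, 2, 0.8, 0.9
pop = rng.uniform(0.0, 3.0, size=(NP, D))
cost = np.array([misfit(m) for m in pop])
for _ in range(200):
    best = pop[np.argmin(cost)]
    for i in range(NP):
        r1, r2 = pop[rng.choice(NP, 2, replace=False)]
        mutant = best + F * (r1 - r2)            # DE/best/1 mutation
        cross = rng.random(D) < CR               # binomial crossover mask
        cross[rng.integers(D)] = True            # guarantee one gene crosses
        trial = np.where(cross, mutant, pop[i])
        c = misfit(trial)
        if c <= cost[i]:                         # greedy one-to-one selection
            pop[i], cost[i] = trial, c
best = pop[np.argmin(cost)]
```

Swapping the mutation line for `mutant = r0 + F * (r1 - r2)` with a random base vector gives DE/rand/1 (strategy 2), which explores more but converges more slowly, mirroring the trade-off reported above.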
NASA Astrophysics Data System (ADS)
Belkebir, Kamal; Saillard, Marc
2005-12-01
This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions A Abubakar, P M van den Berg and T M
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
Cardiac ablation catheter guidance by means of a single equivalent moving dipole inverse algorithm
Lee, Kichang; Lv, Wener; Ter-Ovanesyan, Evgeny; Barley, Maya E.; Voysey, Graham E.; Galea, Anna; Hirschman, Gordon; LeRoy, Kristen; Marini, Robert P.; Barrett, Conor; Armoundas, Antonis A.; Cohen, Richard J.
2015-01-01
We developed and evaluated a novel system for guiding radio-frequency catheter ablation therapy of ventricular tachycardia. This guidance system employs an Inverse Solution Guidance Algorithm (ISGA) utilizing a single equivalent moving dipole (SEMD) localization method. The method and system were evaluated in both a saline-tank phantom model and in-vivo animal (swine) experiments. A catheter with two platinum electrodes spaced 3 mm apart was used as the dipole source in the phantom study. A 40 Hz sinusoidal signal was applied to the electrode pair. In the animal study, four to eight electrodes were sutured onto the right ventricle. These electrodes were connected to a stimulus generator delivering one millisecond duration pacing pulses. Signals were recorded from 64 electrodes, located either on the inner surface of the saline-tank or the body surface of the pig, and then processed by the ISGA to localize the physical or bioelectrical SEMD. In the phantom studies, the guidance algorithm was used to advance a catheter tip to the location of the source dipole. The distance from the final position of the catheter tip to the position of the target dipole was 2.22 ± 0.78 mm in real space and 1.38 ± 0.78 mm in image space (computational space). The ISGA successfully tracked the locations of electrodes sutured on the ventricular myocardium and the movement of an endocardial catheter placed in the animal's right ventricle. In conclusion, we successfully demonstrated the feasibility of using a SEMD inverse algorithm to guide a cardiac ablation catheter. PMID:23448231
Adaptive Inverse Hyperbolic Tangent Algorithm for Dynamic Contrast Adjustment in Displaying Scenes
NASA Astrophysics Data System (ADS)
Yu, Cheng-Yi; Ouyang, Yen-Chieh; Wang, Chuin-Mu; Chang, Chein-I.
2010-12-01
Contrast has a great influence on the quality of an image in human visual perception. A poorly illuminated environment can significantly affect the contrast ratio, producing an unexpected image. This paper proposes an Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm to improve the display quality and contrast of a scene. Because digital cameras must keep shadows in a middle range of luminance that includes the main object, such as a face, a gamma function is generally used for this purpose. However, this function has a severe weakness in that it decreases highlight contrast. To mitigate this problem, contrast enhancement algorithms have been designed to adjust contrast to tune human visual perception. The proposed AIHT determines the contrast levels of an original image as well as the parameter space for different contrast types, so that not only can the original histogram shape features be preserved, but the contrast can also be enhanced effectively. Experimental results show that the proposed algorithm is capable of enhancing the global contrast of the original image adaptively while simultaneously bringing out the details of objects.
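A fixed-parameter sketch of an inverse-hyperbolic-tangent contrast mapping conveys the basic idea; note that the actual AIHT adapts its parameters to each image's statistics, whereas the alpha and beta values below are illustrative constants.

```python
import numpy as np

# Fixed-parameter sketch of an inverse-hyperbolic-tangent tone mapping.
# alpha (slope) and beta (bias) are illustrative; AIHT adapts them per image.
def aiht(img, alpha=0.8, beta=0.0):
    x = np.clip(2.0 * img - 1.0, -0.999, 0.999)   # map [0, 1] into (-1, 1)
    y = np.arctanh(alpha * x + beta) / np.arctanh(alpha * 0.999)
    return np.clip(0.5 * (y + 1.0), 0.0, 1.0)

img = np.linspace(0.0, 1.0, 11)   # a gray ramp
out = aiht(img)                   # monotone remapping of the tone curve
```

The mapping is strictly monotone and fixes black, middle gray, and white, so it reshapes the tone curve without clipping or inverting any gray levels.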
NASA Astrophysics Data System (ADS)
Müller, D.; Böckmann, C.; Kolgotin, A.; Schneidenbach, L.; Chemyakin, E.; Rosemann, J.; Znak, P.; Romanov, A.
2015-12-01
We present a summary on the current status of two inversion algorithms that are used in EARLINET for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on manually controlled inversion of optical data which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithms allow us to derive particle effective radius, and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light-absorption needs to be known with high accuracy. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high and low absorbing aerosols. We discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work on the basis of a few exemplary simulations with synthetic optical data. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g., the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test robustness of the algorithms toward their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of
Dogrusoz, Yesim Serinagaoglu; Gavgani, Alireza Mazloumi
2013-04-01
In inverse electrocardiography, the goal is to estimate cardiac electrical sources from potential measurements on the body surface. It is by nature an ill-posed problem, and regularization must be employed to obtain reliable solutions. This paper employs the multiple constraint solution approach proposed in Brooks et al. (IEEE Trans Biomed Eng 46(1):3-18, 1999) and extends its practical applicability to include more than two constraints by finding appropriate values for the multiple regularization parameters. Here, we propose the use of real-valued genetic algorithms for the estimation of multiple regularization parameters. Theoretically, it is possible to include as many constraints as necessary and find the corresponding regularization parameters using this approach. We have shown the feasibility of our method using two and three constraints. The results indicate that GA could be a good approach for the estimation of multiple regularization parameters.
Inversion Algorithms for Water Vapor Radiometers Operating at 20.7 and 31.4 GHz
NASA Technical Reports Server (NTRS)
Resch, G. M.
1984-01-01
Eight water vapor radiometers (WVRs) were constructed as research and development tools to support the Advanced System Programs in the Deep Space Network and the Crustal Dynamics Project. These instruments are intended to operate at the stations of the Deep Space Network (DSN), various radio observatories, and mobile facilities that participate in very long baseline interferometric (VLBI) experiments. It is expected that the WVRs will operate in a wide range of meteorological conditions. Several algorithms are discussed that are used to estimate the line-of-sight path delay due to water vapor and columnar liquid water from the observed microwave brightness temperatures provided by the WVRs. In particular, systematic effects due to site and seasonal variations are examined. The accuracy of the estimation as indicated by a simulation calculation is approximately 0.3 cm for a noiseless WVR in clear and moderately cloudy weather. With a realistic noise model of WVR behavior, the inversion accuracy is approximately 0.6 cm.
Self-potential data inversion through a Genetic-Price algorithm
NASA Astrophysics Data System (ADS)
Di Maio, R.; Rani, P.; Piegari, E.; Milano, L.
2016-09-01
A global optimization method based on a Genetic-Price hybrid Algorithm (GPA) is proposed for identifying the source parameters of self-potential (SP) anomalies. The effectiveness of the proposed approach is tested on synthetic SP data generated by simple polarized structures, such as a sphere, a vertical cylinder, a horizontal cylinder and an inclined sheet. An extensive numerical analysis on signals affected by different percentages of white Gaussian random noise shows that the GPA is able to provide fast and accurate estimations of the true parameters in all tested examples. In particular, the calculation of the root-mean-square error between the true and inverted SP parameter sets is found to be crucial for the identification of the source anomaly shape. Finally, applications of the GPA to self-potential field data are presented and discussed in light of the results provided by other sophisticated inversion methods.
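The fitness driving such a global search is a forward model plus a misfit. A sketch using the standard closed-form SP anomaly of a buried polarized sphere and a root-mean-square misfit (parameter names are ours, not the paper's):

```python
import numpy as np

def sp_sphere(x, K, h, theta, x0=0.0):
    """Self-potential anomaly of a buried polarized sphere along a profile x
    (standard closed form): K = polarization constant, h = depth to center,
    theta = polarization angle, x0 = horizontal source position."""
    u = x - x0
    return K * (u * np.cos(theta) + h * np.sin(theta)) / (u**2 + h**2) ** 1.5

def rmse(params_a, params_b, x):
    """Root-mean-square misfit between the anomalies of two SP parameter
    sets on the profile x -- the quantity a global search such as the
    Genetic-Price algorithm would minimize."""
    d = sp_sphere(x, *params_a) - sp_sphere(x, *params_b)
    return np.sqrt(np.mean(d**2))
```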
Combinatorial Algorithms for Computing Column Space Bases That Have Sparse Inverses
Pinar, Ali; Chow, Edmond; Pothen, Alex
2005-03-18
This paper presents a combinatorial study on the problem of constructing a sparse basis for the null-space of a sparse, underdetermined, full-rank matrix, A. Such a null-space is suitable for solving many saddle point problems. Our approach is to form a column space basis of A that has a sparse inverse, by selecting suitable columns of A. This basis is then used to form a sparse null-space basis in fundamental form. We investigate three different algorithms for computing the column space basis: two greedy approaches that rely on matching, and a third employing a divide-and-conquer strategy implemented with hypergraph partitioning followed by the greedy approach. We also discuss the complexity of selecting a column basis when it is known that a block diagonal basis exists with a small given block size.
NASA Astrophysics Data System (ADS)
Xiang, Shiming; Zhang, Haijiang
2016-11-01
It is known that full-waveform inversion (FWI) is generally ill-conditioned, and various strategies, including pre-conditioning and regularizing the inversion system, have been proposed to obtain a reliable estimation of the velocity model. Here, we propose a new edge-guided strategy for FWI in the frequency domain to efficiently and reliably estimate velocity models with structures of a size similar to the seismic wavelength. The edges of the velocity model at the current iteration are first detected by the Canny edge detection algorithm that is widely used in image processing. Then, the detected edges are used for guiding the calculation of the FWI gradient as well as enforcing edge-preserving total variation (TV) regularization for the next iteration of FWI. Bilateral filtering is further applied to remove noise but keep edges of the FWI gradient. The proposed edge-guided FWI in the frequency domain with edge-guided TV regularization and bilateral filtering is designed to preserve model edges that are recovered from previous iterations as well as from lower-frequency waveforms when FWI is conducted from lower to higher frequencies. The new FWI method is validated using the complex Marmousi model that contains several steeply dipping fault zones and hundreds of horizons. Compared to FWI without edge guidance, our proposed edge-guided FWI recovers velocity model anomalies and edges much better. Unlike previous image-guided FWI or edge-guided TV regularization strategies, our method does not require migrating seismic data, and thus is more efficient for real applications.
NASA Astrophysics Data System (ADS)
Li, Cong; Lei, Jianshe
2014-10-01
In this paper, we focus on the influences of various parameters in the niching genetic algorithm inversion procedure on the results, such as various objective functions, the number of the models in each subpopulation, and the critical separation radius. The frequency-waveform integration (F-K) method is applied to synthesize three-component waveform data with noise in various epicentral distances and azimuths. Our results show that if we use a zero-th-lag cross-correlation function, then we will obtain the model with a faster convergence and a higher precision than other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and computation time, suggesting that it should be obtained through tests in practical problems. The critical separation radius should be determined carefully because it directly affects the multi-extreme values in the inversion. We also compare the inverted results from full-band waveform data and surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter is relatively poorer but still has a higher precision, suggesting that surface-wave frequency-band data can also be used to invert for the crustal structure.
NASA Astrophysics Data System (ADS)
Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua
2015-04-01
We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. The continuous imaging of TEM can be accomplished using the imaging methods of seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on the imaging of a fictitious wave field could be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses some wave field features, making it possible to apply a wave field interpretation method in TEM to improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of a fictitious wave field. The equation in the wave field transformation belongs to the Fredholm integral equations of the first kind, which are typically ill-posed. Additionally, TEM has a large dynamic time range, which further aggravates the ill-posedness of the problem. The wave field transformation is implemented using a pre-conditioned regularized conjugate gradient method. The continuous imaging of a fictitious wave field is implemented using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data by the method proposed in this paper and obtained a satisfactory interpretation result.
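The regularized conjugate gradient idea for a discretized first-kind Fredholm system A u = d can be sketched as follows: plain CG applied to the Tikhonov normal equations, without the paper's preconditioning (names and defaults are illustrative):

```python
import numpy as np

def regularized_cg(A, d, lam=1e-3, n_iter=200, tol=1e-10):
    """Solve the discretized first-kind Fredholm system A u = d in the
    Tikhonov sense, min ||A u - d||^2 + lam ||u||^2, by conjugate
    gradients on the normal equations (A^T A + lam I) u = A^T d.
    A plain, unpreconditioned sketch of the regularized-CG idea."""
    M = A.T @ A + lam * np.eye(A.shape[1])
    b = A.T @ d
    u = np.zeros(A.shape[1])
    r = b - M @ u          # residual of the normal equations
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(n_iter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        u += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u
```

For a severely ill-posed kernel, `lam` trades data fit against stability; the TEM transform in the abstract would also weight the data to handle its large dynamic time range.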
NASA Astrophysics Data System (ADS)
Ying, Sibin; Ai, Jianliang; Luo, Changhang; Wang, Peng
2006-11-01
Non-linear Dynamic Inversion (NDI) is a control law design technique based on feedback linearization that achieves desired dynamic response characteristics. NDI requires an ideal, precise model; in practice, modeling errors and actuator faults are unavoidable, so a control law designed by NDI alone has limited robustness. Combining NDI with the structured singular value (μ) synthesis method improves the system's robustness notably. However, a controller designed by μ synthesis has high dimension, which must be reduced before it can be computed. This paper presents a new method for robust flight control design that uses structured singular value μ synthesis based on a genetic algorithm. A controller designed by this method has a substantially lower dimension than one obtained by the conventional μ synthesis method, and so is easier to apply. The presented method is applied to the robust controller design of a supermaneuverable fighter. Simulation results show that the dynamic inversion control law achieves a high level of performance in post-stall maneuver conditions, and that the whole control system has good robustness and disturbance rejection.
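The core of NDI for a scalar control-affine system can be stated in one line; a textbook sketch, not the paper's flight-control law:

```python
def ndi_control(f, g, x, v):
    """Scalar nonlinear dynamic inversion: for the system x' = f(x) + g(x) u,
    the input u = (v - f(x)) / g(x) cancels the nonlinearity so the closed
    loop obeys x' = v exactly. This assumes g(x) != 0 and a perfect model --
    precisely the robustness limitation the abstract addresses by adding
    structured singular value (mu) synthesis."""
    return (v - f(x)) / g(x)
```

The desired dynamics enter through `v`, e.g. v = -k (x - x_ref) for first-order tracking.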
Three-dimensional inverse modelling of magnetic anomaly sources based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Montesinos, Fuensanta G.; Blanco-Montenegro, Isabel; Arnoso, José
2016-04-01
We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to be different in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm which finds the geometry of the sources by seeking the optimum solution from an initial population of models in successive iterations through an evolutionary process. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data and a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints for the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data of the volcanic island of Gran Canaria (Canary Islands).
An inverse kinematics algorithm for a highly redundant variable-geometry-truss manipulator
NASA Technical Reports Server (NTRS)
Naccarato, Frank; Hughes, Peter
1989-01-01
A new class of robotic arm consists of a periodic sequence of truss substructures, each of which has several variable-length members. Such variable-geometry-truss manipulators (VGTMs) are inherently highly redundant and promise a significant increase in dexterity over conventional anthropomorphic manipulators. This dexterity may be exploited for both obstacle avoidance and controlled deployment in complex workspaces. The inverse kinematics problem for such unorthodox manipulators, however, becomes complex because of the large number of degrees of freedom, and conventional solutions to the inverse kinematics problem become inefficient because of the high degree of redundancy. A solution is presented to this problem based on a spline-like reference curve for the manipulator's shape. Such an approach has a number of advantages: (1) direct, intuitive manipulation of shape; (2) reduced calculation time; and (3) direct control over the effective degree of redundancy of the manipulator. Furthermore, although the algorithm was developed primarily for variable-geometry-truss manipulators, it is general enough for application to a number of manipulator designs.
NASA Astrophysics Data System (ADS)
Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.
2016-11-01
The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows, data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to breakthroughs in modelling shallow wet-dry front dynamics. The present study aims at deriving an accurate, positive and stable finite volume scheme in the presence of dynamic wet-dry fronts, and some corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are computed and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the algorithms' efficiency.
Evaluation of a Geothermal Prospect Using a Stochastic Joint Inversion Algorithm
NASA Astrophysics Data System (ADS)
Tompson, A. F.; Mellors, R. J.; Ramirez, A.; Dyer, K.; Yang, X.; Trainor-Guitton, W.; Wagoner, J. L.
2013-12-01
A stochastic joint inverse algorithm to analyze diverse geophysical and hydrologic data for a geothermal prospect is developed. The purpose is to improve prospect evaluation by finding an ensemble of hydrothermal flow models that are most consistent with multiple types of data sets. The staged approach combines Bayesian inference within a Markov Chain Monte Carlo (MCMC) global search algorithm. The method is highly flexible and capable of accommodating multiple and diverse datasets as a means to maximize the utility of all available data to understand system behavior. An initial application is made at a geothermal prospect located near Superstition Mountain in the western Salton Trough in California. Readily available data include three thermal gradient exploration boreholes, borehole resistivity logs, magnetotelluric and gravity geophysical surveys, surface heat flux measurements, and other nearby hydrologic and geologic information. Initial estimates of uncertainty in structural or parametric characteristics of the prospect are used to drive large numbers of simulations of hydrothermal fluid flow and related geophysical processes using random realizations of the conceptual geothermal system. Uncertainty in the results is represented within a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the perceived (prior) uncertainties. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-641792.
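The MCMC engine behind such a stochastic search can be sketched with a plain Metropolis sampler; the staged, multi-dataset algorithm in the abstract is considerably more elaborate, and all names here are illustrative:

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng=None):
    """Minimal Metropolis MCMC sampler: propose a Gaussian perturbation,
    accept with probability min(1, exp(lp_prop - lp)). log_post is the
    unnormalized log posterior combining priors and all data misfits."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    out = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        out.append(x.copy())
    return np.array(out)
```

In the joint-inversion setting, `log_post` would sum the (negative, scaled) misfits of the thermal, resistivity, MT and gravity data for a candidate hydrothermal model; the retained ensemble plays the role of the ranked subset of realizations described above.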
Identify Structural Flaw Location and Type with an Inverse Algorithm of Resonance Inspection
Xu, Wei; Lai, Canhai; Sun, Xin
2015-10-20
To evaluate the fitness-for-service of a structural component and to quantify its remaining useful life, aging and service-induced structural flaws must be quantitatively determined in service or during scheduled maintenance shutdowns. Resonance inspection (RI), a non-destructive evaluation (NDE) technique, distinguishes the anomalous parts from the good parts based on changes in the natural frequency spectra. Known for its numerous advantages, i.e., low inspection cost, high testing speed, and broad applicability to complex structures, RI has been widely used in the automobile industry for quality inspection. However, compared to other contemporary direct visualization-based NDE methods, a more widespread application of RI faces a fundamental challenge because such technology is unable to quantify the flaw details, e.g. location, dimensions, and types. In this study, the applicability of a maximum correlation-based inverse RI algorithm developed by the authors is further studied for various flaw cases. It is demonstrated that a variety of common structural flaws, i.e. stiffness degradation, voids, and cracks, can be accurately retrieved by this algorithm even when multiple different types of flaws coexist. The quantitative relations between the damage identification results and the flaw characteristics are also developed to assist the evaluation of the actual state of health of the engineering structures.
MASS SUBSTRUCTURE IN ABELL 3128
McCleary, J.; Dell’Antonio, I.; Huwe, P.
2015-05-20
We perform a detailed two-dimensional weak gravitational lensing analysis of the nearby (z = 0.058) galaxy cluster Abell 3128 using deep ugrz imaging from the Dark Energy Camera (DECam). We have designed a pipeline to remove instrumental artifacts from DECam images and stack multiple dithered observations without inducing a spurious ellipticity signal. We develop a new technique to characterize the spatial variation of the point-spread function that enables us to circularize the field to better than 0.5% and thereby extract the intrinsic galaxy ellipticities. By fitting photometric redshifts to sources in the observation, we are able to select a sample of background galaxies for weak-lensing analysis free from low-redshift contaminants. Photometric redshifts are also used to select a high-redshift galaxy subsample with which we successfully isolate the signal from an interloping z = 0.44 cluster. We estimate the total mass of Abell 3128 by fitting the tangential ellipticity of background galaxies with the weak-lensing shear profile of a Navarro–Frenk–White (NFW) halo and also perform NFW fits to substructures detected in the 2D mass maps of the cluster. This study yields one of the highest resolution mass maps of a low-z cluster to date and is the first step in a larger effort to characterize the redshift evolution of mass substructures in clusters.
NASA Astrophysics Data System (ADS)
Harker, Brian J.
The measurement of vector magnetic fields on the sun is one of the most important diagnostic tools for characterizing solar activity. The ubiquitous solar wind is guided into interplanetary space by open magnetic field lines in the upper solar atmosphere. Highly-energetic solar flares and Coronal Mass Ejections (CMEs) are triggered in lower layers of the solar atmosphere by the driving forces at the visible "surface" of the sun, the photosphere. The driving forces there tangle and interweave the vector magnetic fields, ultimately leading to an unstable field topology with large excess magnetic energy, and this excess energy is suddenly and violently released by magnetic reconnection, emitting intense broadband radiation that spans the electromagnetic spectrum, accelerating billions of metric tons of plasma away from the sun, and finally relaxing the magnetic field to lower-energy states. These eruptive flaring events can have severe impacts on the near-Earth environment and the human technology that inhabits it. This dissertation presents a novel inversion method for inferring the properties of the vector magnetic field from telescopic measurements of the polarization states (Stokes vector) of the light received from the sun, in an effort to develop a method that is fast, accurate, and reliable. One of the long-term goals of this work is to develop such a method that is capable of rapidly-producing characterizations of the magnetic field from time-sequential data, such that near real-time projections of the complexity and flare-productivity of solar active regions can be made. This will be a boon to the field of solar flare forecasting, and should help mitigate the harmful effects of space weather on mankind's space-based endeavors. To this end, I have developed an inversion method based on genetic algorithms (GA) that have the potential for achieving such high-speed analysis.
NASA Astrophysics Data System (ADS)
Fernández Martínez, Juan L.; García Gonzalo, Esperanza; Fernández Álvarez, José P.; Kuzma, Heidi A.; Menéndez Pérez, César O.
2010-05-01
PSO is an optimization technique inspired by the social behavior of individuals in nature (swarms) that has been successfully used in many different engineering fields. In addition, the PSO algorithm can be physically interpreted as a stochastic damped mass-spring system. This analogy has served to introduce the PSO continuous model and to deduce a whole family of PSO algorithms using different finite-differences schemes. These algorithms are characterized in terms of convergence by their respective first and second order stability regions. The performance of these new algorithms is first checked using synthetic functions showing a degree of ill-posedness similar to that found in many geophysical inverse problems having their global minimum located on a very narrow flat valley or surrounded by multiple local minima. Finally we present the application of these PSO algorithms to the analysis and solution of a VES inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. PSO family members are successfully compared to other well known global optimization algorithms (binary genetic algorithms and simulated annealing) in terms of their respective convergence curves and the sea water intrusion depth posterior histograms.
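The baseline member of this family is the textbook PSO update, which the damped mass-spring interpretation generalizes through different finite-difference schemes. A minimal sketch (parameter names are conventional defaults, not the paper's):

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization: inertia w damps the velocity,
    c1/c2 are the spring constants pulling each particle toward its
    personal best and the global best (the stochastic damped mass-spring
    analogy). Minimizes f over the box [bounds[0], bounds[1]]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For the VES problem of the abstract, `f` would be the misfit between observed and modeled apparent resistivities.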
NASA Astrophysics Data System (ADS)
Bao, Xingxian; Cao, Aixia; Zhang, Jing
2016-07-01
Modal parameter estimation plays an important role in structural health monitoring. Accurately estimating the modal parameters of structures is more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm of solving the partially described inverse singular value problem (PDISVP), combined with the complex exponential (CE) method, to estimate the modal parameters. The PDISVP solving method reconstructs an L2-norm optimized (filtered) data matrix from the measured (noisy) data matrix, when the prescribed data constraints are one or several sets of singular triplets of the matrix. The measured data matrix is Hankel structured, constructed from the measured impulse response function (IRF). The reconstructed matrix must maintain the Hankel structure and be lowered in rank as well. Once the filtered IRF is obtained, the CE method can be applied to extract the modal parameters. Two physical experiments, a steel cantilever beam with 10 accelerometers mounted and a steel plate with 30 accelerometers mounted, each excited by an impulsive load, are investigated to test the applicability of the proposed scheme. In addition, the consistency diagram is proposed to examine the agreement among the modal parameters estimated from the different accelerometers. Results indicate that the PDISVP-CE method can significantly remove noise from measured signals and accurately estimate the modal frequencies and damping ratios.
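The Hankel construction and rank reduction at the heart of this scheme can be sketched as follows, using SVD truncation plus anti-diagonal averaging as a basic stand-in for the PDISVP reconstruction (which additionally enforces prescribed singular triplets):

```python
import numpy as np

def hankel_from_irf(h, rows):
    """Build the Hankel data matrix whose anti-diagonals repeat the
    impulse response samples h."""
    cols = len(h) - rows + 1
    return np.array([h[i:i + cols] for i in range(rows)])

def lowrank_hankel_filter(h, rows, rank):
    """Filter a noisy IRF: SVD-truncate its Hankel matrix to the given
    rank (2 x number of modes), then average the anti-diagonals back to
    a cleaned impulse response."""
    H = hankel_from_irf(np.asarray(h, float), rows)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    n = H.shape[0] + H.shape[1] - 1
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(H.shape[0]):
        for j in range(H.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt
```

The cleaned IRF is then passed to the complex exponential method to extract frequencies and damping ratios.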
Wang, Hong; Wang, Xi-cheng
2014-02-21
Metabolism is a very important cellular process and its malfunction contributes to human disease. Therefore, building dynamic models of metabolic networks from experimental data in order to analyze biological processes rationally has attracted a lot of attention. Owing to technical limitations, some unknown parameters contained in the models need to be estimated effectively by computational methods. Generally, parameter estimation problems for nonlinear biological networks are known to be ill-conditioned and multimodal. In particular, as the number of parameters grows and their ranges widen, many optimization algorithms often fail to find a global solution. In this paper, a two-stage variable-factor Bregman regularization homotopy method is proposed. Discrete homotopy is used to identify the possible extreme region, and continuous homotopy is executed for the purpose of stable path tracing in this special region. Meanwhile, Latin hypercube sampling is introduced to obtain a good initial guess, and a perturbation strategy is developed to jump out of local optima. Three metabolic network inverse problems are investigated to demonstrate the effectiveness of the proposed method. PMID:24060619
NASA Astrophysics Data System (ADS)
Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao
2016-08-01
We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with Tikhonov regularization. We propose a new data-weighting method that minimizes the initial model dependence and enhances the convergence stability. On a laptop with an i7-5700HQ CPU at 3.5 GHz, an inversion iteration for a 200-layer input model with a single receiver takes only 0.34 s, increasing to only 0.53 s for data from four receivers at the same depth. For the case of four receivers at different depths, the inversion iteration runtime increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of the loop geometry is significant at early times and vanishes gradually with the diffusion of the TEM field. For a stratified earth, inverting data from more than one receiver is useful for noise reduction, yielding a more credible layered earth. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not significantly improve recovery of the resistive layer. Even with a down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above it. However, our modeling demonstrates remarkable improvement in detecting the resistive layer with receivers in or under this layer.
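The Tikhonov-damped Gauss-Newton iteration named in the abstract can be sketched generically; the forward operator and Jacobian below are user-supplied placeholders, not the TEM kernels:

```python
import numpy as np

def gauss_newton_tikhonov(fwd, jac, d_obs, m0, lam=1e-2, n_iter=20):
    """Damped Gauss-Newton iteration for min ||d_obs - fwd(m)||^2:
    m_{k+1} = m_k + (J^T J + lam I)^{-1} J^T (d_obs - fwd(m_k)).
    fwd maps a model vector to predicted data; jac returns the Jacobian
    of fwd at m. lam stabilizes the (generally ill-conditioned) step."""
    m = np.asarray(m0, float).copy()
    for _ in range(n_iter):
        r = d_obs - fwd(m)
        J = jac(m)
        m = m + np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
    return m
```

In the paper's setting, `fwd` is the layered-earth TEM response, `jac` comes from the adjoint-equation sensitivities, and the data are additionally weighted to reduce initial-model dependence.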
NASA Astrophysics Data System (ADS)
Hetmaniok, Edyta
2016-07-01
In this paper the procedure for solving the inverse problem for binary alloy solidification in a casting mould is presented. The proposed approach is based on a mathematical model suitable for describing the investigated solidification process, the lever-arm model describing the macrosegregation process, the finite element method for solving the direct problem, and the artificial bee colony algorithm for minimizing the functional expressing the error of the approximate solution. The goal of the discussed inverse problem is the reconstruction of the heat transfer coefficient and the distribution of temperature in the investigated region on the basis of known measurements of temperature.
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R), the normal (N-N), and the logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
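For reference, the Rosin-Rammler candidate can be written down directly; it is a Weibull-type density, and the symbol names below are ours, not the paper's:

```python
import numpy as np

def rosin_rammler_pdf(D, Dbar, sigma):
    """Rosin-Rammler (Weibull-type) size frequency distribution:
    f(D) = (sigma / D) * (D / Dbar)**sigma * exp(-(D / Dbar)**sigma),
    with Dbar the characteristic diameter and sigma the narrowness
    (spread) exponent. Normalized to unit area over D in (0, inf)."""
    x = (D / Dbar) ** sigma
    return (sigma / D) * x * np.exp(-x)
```

In the inversion, `Dbar` and `sigma` are the two parameters the ACO variants search for so that the modeled spectral extinction matches the measurements.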
A new damping factor algorithm based on line search of the local minimum point for inverse approach
NASA Astrophysics Data System (ADS)
Zhang, Yaqi; Liu, Weijie; Lu, Fang; Zhang, Xiangkui; Hu, Ping
2013-05-01
The influence of the damping factor on the convergence and computational efficiency of the inverse approach was studied through a series of practical examples. A new selection algorithm for the damping (relaxation) factor, which takes into account both robustness and calculation efficiency, is proposed; the computer program is then implemented and tested on Siemens PLM NX | One-Step. The result is compared with the traditional Armijo rule through six examples, such as a U-beam, a square box and a cylindrical cup, confirming the effectiveness of the proposed algorithm.
Song, Yang; Zhang, Bin; He, Anzhi
2006-11-01
A novel algebraic iterative algorithm based on deflection tomography is presented. This algorithm is derived from the essentials of deflection tomography with a linear expansion of the local basis functions. By use of this algorithm the tomographic problem is finally reduced to the solution of a set of linear equations. The algorithm is demonstrated by mapping a three-peak Gaussian simulated temperature field. Compared with traditional deflection algorithms, it provides a significant improvement in reconstruction accuracy, especially in cases with added noisy data. In the density diagnosis of a hypersonic wind tunnel, this algorithm is adopted to reconstruct density distributions of an axisymmetric flow field. One cross section of the reconstruction results is selected for comparison with the inverse Abel transform algorithm. Results show that the novel algorithm can achieve an accuracy equivalent to the inverse Abel transform algorithm. However, the novel algorithm is more versatile because it is applicable to arbitrary kinds of distributions.
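Once the tomographic problem is reduced to a set of linear equations, it can be solved by row-action iterations of the ART/Kaczmarz family. The sketch below is a generic Kaczmarz sweep on a tiny system, illustrative only and not the authors' exact iteration:

```python
def kaczmarz(A, b, iters=200):
    # ART/Kaczmarz: cyclically project the estimate onto each row's
    # hyperplane a_i . x = b_i; converges for consistent systems
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for ai, bi in zip(A, b):
            norm2 = sum(v * v for v in ai)
            r = (bi - sum(v * xv for v, xv in zip(ai, x))) / norm2
            x = [xv + r * v for xv, v in zip(x, ai)]
    return x

# toy 2x2 system with exact solution x = (1, 3)
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = kaczmarz(A, b)
```

In a real deflection-tomography setting each row of A would hold the basis-function coefficients along one ray, and b the measured deflections.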
Kinugawa, Tohru
2014-02-15
This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first one is to reveal the linear relation between the transit-time T and the travel-length X hidden behind the isochronicity problem that is usually discussed in terms of the nonlinear equation of motion d²X/dt² + dU/dX = 0 with U(X) being an unknown potential. Second, the isochronicity condition is extended for the possible Abel-transform approach to designing the isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for the oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27–29]. The same formula is used to treat the non-periodic motion that is driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π)∫₀^E dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit-time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we have introduced the extended isochronicity condition: typically, τ = T_A(E) + T_N(E), where τ is a constant period, T_A(E) is the transit-time in the Abel-type [A-type] region spanning X > 0 and T_N(E) is that in the non-Abel-type [N-type] region covering X < 0. As for the A-type region in X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region in X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation. In discussing
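The Abel operator in the abstract, A[T](E) = (1/√π)∫₀^E T(U) dU/√(E−U), has an integrable endpoint singularity that disappears under the substitution U = E sin²θ. A minimal numerical sketch (not the authors' code) evaluates it this way and checks the isochronous case T = τ, for which A[τ] = 2τ√E/√π, i.e. the harmonic-oscillator profile X ∝ √U:

```python
import math

def abel_operator(T, E, n=2000):
    # A[T](E) = (1/sqrt(pi)) * integral_0^E T(U) dU / sqrt(E - U)
    # substitution U = E*sin(theta)^2 turns the singular integrand into
    # 2*sqrt(E)*T(E*sin(theta)^2)*sin(theta) on [0, pi/2]
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h  # midpoint rule
        total += 2.0 * math.sqrt(E) * T(E * math.sin(theta) ** 2) * math.sin(theta)
    return total * h / math.sqrt(math.pi)

# isochronous case: constant transit time tau
tau, E = 1.0, 4.0
val = abel_operator(lambda U: tau, E)
```

The same routine applied to a prescribed non-constant T(E) yields X(U) up to the proportionality constant.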
NASA Technical Reports Server (NTRS)
Bayo, Eduardo; Ledesma, Ragnar
1993-01-01
A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.
Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.
2011-01-01
Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and hence does not have such a limitation. Waveforms of Rayleigh waves are highly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully with errors no more than 10% for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, in which case conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
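The GA model-update loop described above can be sketched generically: tournament selection, blend crossover, and Gaussian mutation driving a misfit toward its minimum. The operators and the toy two-parameter "velocity model" below are illustrative assumptions, not the authors' implementation (whose misfit is a finite-difference waveform residual):

```python
import random

def ga_minimize(misfit, bounds, pop=30, gens=60, seed=1):
    # minimal real-coded GA: tournament selection, blend crossover,
    # Gaussian mutation, with the best model carried over (elitism)
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(P, key=misfit)
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a = min(rng.sample(P, 2), key=misfit)   # tournament of 2
            b = min(rng.sample(P, 2), key=misfit)
            child = [ai + rng.random() * (bi - ai) for ai, bi in zip(a, b)]
            child = [min(max(c + rng.gauss(0.0, 0.02 * (hi - lo)), lo), hi)
                     for c, (lo, hi) in zip(child, bounds)]
            Q.append(child)
        P = Q
        best = min(P + [best], key=misfit)
    return best

# toy misfit: recover a pair of S-wave velocities (target values assumed)
target = [0.3, 0.8]
best = ga_minimize(lambda m: sum((mi - ti) ** 2 for mi, ti in zip(m, target)),
                   [(0.1, 1.0), (0.1, 1.0)])
```

In the paper's setting each misfit evaluation would run a time-domain finite-difference simulation, which is why population size and generation count dominate the cost.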
Rapid Inversion of Angular Deflection Data for Certain Axisymmetric Refractive Index Distributions
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Greenberg, P. S.
1994-01-01
Certain functions useful for representing axisymmetric refractive-index distributions are shown to have exact solutions for the Abel transformation of the resulting angular deflection data. An advantage of this procedure over direct numerical Abel inversion is that least-squares curve fitting is a smoothing process that reduces the noise sensitivity of the computation.
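The classic example of such an exact pair is the Gaussian: the forward Abel transform of exp(−r²) is √π·exp(−x²), so fitting Gaussian amplitudes to projection data yields the radial profile in closed form. The sketch below (illustrative only; the paper's basis functions may differ) verifies this pair numerically, using the substitution r² = x² + s² to remove the singularity at r = x:

```python
import math

def forward_abel(f, x, s_max=6.0, n=4000):
    # F(x) = 2 * integral_x^inf f(r) r dr / sqrt(r^2 - x^2)
    # with r^2 = x^2 + s^2 this becomes the nonsingular
    # F(x) = 2 * integral_0^inf f(sqrt(x^2 + s^2)) ds
    h = s_max / n
    return 2.0 * h * sum(f(math.sqrt(x * x + ((k + 0.5) * h) ** 2))
                         for k in range(n))

x = 1.0
F = forward_abel(lambda r: math.exp(-r * r), x)
# closed form for the Gaussian pair: sqrt(pi) * exp(-x^2)
```

Fitting measured deflection data with such analytically invertible shapes is what replaces the noise-amplifying numerical inversion step.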
NASA Astrophysics Data System (ADS)
Zhang, Wei; Zhao, Chunhui; He, Xing; Zhang, Weidong
2016-05-01
In this paper, the structural features of the inverse of a multi-input/multi-output square transfer function matrix are explored. Instead of complicated advanced mathematical tools, we use only basic results of complex analysis in the analysis. By employing the Laurent expansion, an elegant structural form of the expansion is obtained for the inverse of the transfer function matrix. This expansion form is the key to deriving an analytical solution to the inner-outer factorisation for both stable and unstable plants. Unlike other computational algorithms, the obtained inner-outer factorisation is given in analytical form. The solution is exact and without approximation. Numerical examples are provided to verify the correctness of the obtained results.
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the dynamic adaptive parameter adjustment process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
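For readers unfamiliar with the base method, a minimal cuckoo-search sketch is shown below, illustrative only: it uses Mantegna's rule for Lévy-like steps and abandons a fraction pa of the worst nests per generation, without the dynamic adaptive and crossover operations that distinguish DACS-CO.

```python
import math
import random

def cuckoo_search(f, bounds, n=15, iters=200, pa=0.25, seed=7):
    # minimal cuckoo search: Levy-flight steps toward the best nest
    # (Mantegna's rule), plus random replacement of the worst nests
    rng = random.Random(seed)
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n):
            step = [sigma * rng.gauss(0, 1) / abs(rng.gauss(0, 1)) ** (1 / beta)
                    for _ in bounds]
            cand = [min(max(xj + 0.01 * sj * (xj - bj), lo), hi)
                    for xj, sj, bj, (lo, hi) in zip(nests[i], step, best, bounds)]
            if f(cand) < f(nests[i]):
                nests[i] = cand
        nests.sort(key=f)
        for i in range(int(n * (1 - pa)), n):  # abandon the worst nests
            nests[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
        best = min(nests + [best], key=f)
    return best

# toy objective: 2-D sphere function standing in for the refractivity misfit
best = cuckoo_search(lambda m: sum(x * x for x in m), [(-5.0, 5.0), (-5.0, 5.0)])
```

In the RFC application each objective evaluation would be a forward propagation model compared against the observed clutter, which is why convergence speed matters for near-real-time use.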
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
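The HMC constrained-sampling step relies on leapfrog integration of Hamiltonian dynamics, whose near-conservation of the total energy is what keeps acceptance rates high. A minimal leapfrog sketch on a toy standard-normal log-posterior (a stand-in for the paper's ensemble-estimated gradient) is:

```python
import math

def leapfrog(grad_logpost, q, p, eps, steps):
    # kick-drift-kick leapfrog; grad_logpost returns the gradient of
    # log pi(q), so the "force" on the momentum p is +grad
    q, p = list(q), list(p)
    g = grad_logpost(q)
    for _ in range(steps):
        p = [pi + 0.5 * eps * gi for pi, gi in zip(p, g)]
        q = [qi + eps * pi for qi, pi in zip(q, p)]
        g = grad_logpost(q)
        p = [pi + 0.5 * eps * gi for pi, gi in zip(p, g)]
    return q, p

# standard normal: log pi(q) = -q^2/2, so grad log pi(q) = -q
grad = lambda q: [-qi for qi in q]
H = lambda q, p: 0.5 * sum(qi * qi for qi in q) + 0.5 * sum(pi * pi for pi in p)
q0, p0 = [1.0], [0.5]
q1, p1 = leapfrog(grad, q0, p0, eps=0.05, steps=40)
```

The Hamiltonian H(q1, p1) stays within O(eps²) of H(q0, p0), which is the property the Metropolis correction in HMC exploits.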
NASA Astrophysics Data System (ADS)
Boukabara, S. A.; Garrett, K.
2014-12-01
A one-dimensional variational retrieval system has been developed, capable of producing temperature and water vapor profiles in clear, cloudy, and precipitating conditions. The algorithm, known as the Microwave Integrated Retrieval System (MiRS), is currently running operationally at the National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite Data and Information Service (NESDIS), and is applied to a variety of data from the AMSU-A/MHS sensors on board the NOAA-18, NOAA-19, and MetOp-A/B polar satellite platforms, as well as SSMI/S on board both DMSP F-16 and F-18, and from the NPP ATMS sensor. MiRS inverts microwave brightness temperatures into atmospheric temperature and water vapor profiles, along with hydrometeors and surface parameters, simultaneously. This coupled atmosphere/surface inversion allows more accurate retrievals in the lower tropospheric layers by accounting for the impact of surface emissivity on the measurements. It also allows the inversion of soundings in all-weather conditions, thanks to the incorporation of both the hydrometeor parameters and the emissivity in the inverted state vector; the emissivity is treated dynamically to handle the highly variable surface conditions found under precipitating atmospheres. The inversion is constrained in precipitating conditions by the inclusion of covariances for hydrometeors, to take advantage of the natural correlations that exist between temperature and water vapor and liquid cloud, ice cloud, and rain water. In this study, we present a full assessment of temperature and water vapor retrieval performance in all-weather conditions and over all surface types (ocean, sea ice, land, and snow) using matchups with radiosondes as well as Numerical Weather Prediction and other satellite retrieval algorithms as references. An emphasis is placed on retrievals in cloudy and precipitating atmospheres, including extreme weather events
NASA Astrophysics Data System (ADS)
Padhi, Amit; Mallick, Subhashis
2014-03-01
Inversion of band- and offset-limited single-component (P-wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but adds to the complexity of the inversion algorithm because it requires simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of Pareto-optimal solutions, which in turn may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and when it is transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical
Abel's Theorem Simplifies Reduction of Order
ERIC Educational Resources Information Center
Green, William R.
2011-01-01
We give an alternative to the standard method of reduction of order, in which one uses one solution of a homogeneous, linear, second order differential equation to find a second, linearly independent solution. Our method, based on Abel's Theorem, is shorter, less complex, and extends to higher order equations.
Photometric Observations of the Binary Nuclei of Three Abell Planetary Nebulae
NASA Astrophysics Data System (ADS)
Afşar, M.; Ibanoǧlu, C.
2004-07-01
CCD photometric observations of the nuclei of three Abell planetary nebulae (Abell 63, Abell 46, and Abell 41) are presented. These are binary systems, which allows us to derive model-independent parameters. The results of the light curve solution of UU Sge (the binary nucleus of Abell 63) are also discussed.
Parallel algorithm for Bayesian inversion of surface wave dispersions and its applications
NASA Astrophysics Data System (ADS)
Kim, S.; Rhie, J.
2013-12-01
We present a procedure for Bayesian inversion to estimate shear wave velocity profiles and their uncertainties from surface wave dispersion (SWD) data. The presented method is intended to efficiently obtain the posterior probability density (PPD) in parallelized computations using the Metropolis-coupled Markov chain Monte Carlo (MC3) technique and random-scale parameterization. Inversions of SWD data using the standard Markov chain Monte Carlo procedure often fail to converge within a limited number of iterations because chains can become temporarily trapped at local minima. The MC3 technique enhances search capabilities in the parameter space by using multiple 'heated' and standard (cold) chains, and by exchanging heating states between the chains. For the model parameterization, a random-scale scheme is proposed in which layer thicknesses are randomly perturbed from predefined thicknesses. By doing this, all chains have the same, fixed model dimension, and possible artifacts of uniform parameterization can be minimized. We illustrate the performance of the presented method by synthetic tests using fundamental-mode SWD data. In the tests, the PPDs and their averaged models with standard deviations are compared. Results of the synthetic experiments clearly show that MC3 and random-scale parameterization are effective. In the presented framework, joint inversions with other geophysical datasets are easily implemented, so we further explore joint inversion with receiver functions here. Similar to previous studies of joint inversion using receiver functions, our synthetic tests show that the resolution of sharp boundaries at depth is improved. After the synthetic experiments, our method is applied to real SWD data of group and phase velocities, and also jointly with receiver functions, observed in the region of the Mount Baekdu (Changbai) volcano in northeastern China.
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Beller, Stephen; Nolet, Guust; Operto, Stephane; Brossier, Romain; Métivier, Ludovic; Paul, Anne; Virieux, Jean
2014-05-01
The current development of dense seismic arrays and high performance computing makes application of full-waveform inversion (FWI) to teleseismic data feasible today for high-resolution lithospheric imaging. In the teleseismic configuration, the source is to first order a plane wave that impinges on the base of the lithospheric target located below the receiver array. In this setting, FWI aims to exploit not only the forward-scattered waves propagating up to the receivers but also second-order arrivals that are back-scattered from the free surface and the reflectors before being recorded at the surface. FWI requires full-wave modeling methods such as finite-difference or finite-element methods. In this framework, careful design of FWI algorithms is essential to mitigate as much as possible the computational burden of multi-source full-waveform modeling. In this presentation, we review some key specifications that might be considered for a versatile FWI implementation. One is an abstraction level between the forward and inverse problems that allows different modeling engines to be interfaced with the inversion; this requires the subsurface meshes used to perform seismic modeling and to update the subsurface models during inversion to be fully independent, through back-and-forth projection processes. The subsurface parameterization should be carefully chosen during multi-parameter FWI, as it controls the trade-off between parameters of different nature; a versatile FWI algorithm should be designed such that different subsurface parameterizations for the model update can be easily implemented. The gradient of the misfit function should be computed as easily as possible with the adjoint-state method in a parallel environment. This first requires the gradient to be independent of the discretization method used to perform seismic modeling. Second, the incident and adjoint wavefields should be computed with the same numerical scheme, even if the forward problem
NASA Astrophysics Data System (ADS)
Tompson, A. F. B.; Mellors, R. J.; Dyer, K.; Yang, X.; Chen, M.; Trainor Guitton, W.; Wagoner, J. L.; Ramirez, A. L.
2014-12-01
A stochastic joint inverse algorithm is used to analyze diverse geophysical and hydrologic data associated with a geothermal prospect. The approach uses a Markov Chain Monte Carlo (MCMC) global search algorithm to develop an ensemble of hydrothermal groundwater flow models that are most consistent with the observations. The algorithm utilizes an initial conceptual model descriptive of structural (geology), parametric (permeability), and hydrothermal (saturation, temperature) characteristics of the geologic system. Initial (a priori) estimates of uncertainty in these characteristics are used to drive simulations of hydrothermal fluid flow and related geophysical processes in a large number of random realizations of the conceptual geothermal system spanning these uncertainties. The process seeks to improve the conceptual model by developing a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the a priori uncertainties. The algorithm has been tested on a geothermal prospect located at Superstition Mountain, California, and has been successful in creating a suite of models compatible with available temperature, surface resistivity, and magnetotelluric (MT) data. Although the MCMC method is highly flexible and capable of accommodating multiple and diverse datasets, a typical inversion may require the evaluation of thousands of possible model runs whose sophistication and complexity may evolve with the magnitude of data considered. As a result, we are testing the use of sensitivity analyses to better identify critical uncertain variables, lower order surrogate models to streamline computational costs, and value-of-information analyses to better assess optimal use of related data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL
Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.
2015-10-15
We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.
NASA Astrophysics Data System (ADS)
Jesús Moral García, Francisco; Rebollo Castillo, Francisco Javier; Monteiro Santos, Fernando
2016-04-01
Maps of apparent electrical conductivity of the soil are commonly used in precision agriculture to indirectly characterize some important properties like salinity, water content, and clay content. Traditionally, these studies rely on an empirical relationship between apparent electrical conductivity and properties measured in soil samples collected at a few locations in the experimental area and at a few selected depths. Recently, some authors have used not the apparent conductivity values but the soil bulk conductivity (in 2D or 3D) calculated from the measured apparent electrical conductivity through the application of an inversion method. All the published works used data collected with electromagnetic (EM) instruments. We present new software to invert the apparent electrical conductivity data collected with VERIS 3100 and 3150 (or the more recent version with three pairs of electrodes) using the 1D spatially constrained inversion method (1D SCI). The software allows the calculation of the distribution of the bulk electrical conductivity in the survey area down to a depth of 1 m. The algorithm is applied to experimental data, and correlations with clay and water content have been established using soil samples collected at several boreholes. Keywords: Digital soil mapping; inversion modelling; VERIS; soil apparent electrical conductivity.
NASA Technical Reports Server (NTRS)
Dubovik, O.; Herman, M.; Holdak, A.; Lapyonok, T.; Tanre, D.; Deuze, J. L.; Ducos, F.; Sinyuk, A.
2011-01-01
The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as a statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds one hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation.
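At its core, statistically optimized fitting weights each observation by the inverse of its error variance, so well-measured angles and channels dominate the solution. A minimal weighted least-squares sketch for a straight-line model (a generic illustration, not the POLDER retrieval itself) is:

```python
def weighted_lsq(xs, ys, sigmas):
    # fit y = a + b*x, weighting each observation by w = 1/sigma^2
    # (the maximum-likelihood estimate under Gaussian errors)
    w = [1.0 / (s * s) for s in sigmas]
    S = sum(w)
    Sx = sum(wi * x for wi, x in zip(w, xs))
    Sy = sum(wi * y for wi, y in zip(w, ys))
    Sxx = sum(wi * x * x for wi, x in zip(w, xs))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    d = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / d  # intercept
    b = (S * Sxy - Sx * Sy) / d    # slope
    return a, b

# noiseless data on the line y = 1 + 2x
a, b = weighted_lsq([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0],
                    [0.1, 0.1, 0.1, 0.1])
```

The satellite retrieval generalizes the same normal-equation machinery to many unknowns and a nonlinear forward model, which is where the data redundancy discussed above becomes essential.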
Qiu, Xiao-han; Zhang, Yu-jun; Yin, Gao-fang; Shi, Chao-yi; Yu, Xiao-ya; Zhao, Nan-jing; Liu, Wen-qing
2015-08-01
The fast chlorophyll fluorescence induction curve contains rich information about photosynthesis. It reflects various aspects of vegetation, such as survival status, pathological condition, and physiological trends under stress. Through the acquisition of algal fluorescence and induced optical signals, the fast phase of the chlorophyll fluorescence kinetics curve was fitted. Based on the least-squares fitting method, we introduced an adaptive minimum-error approaching method for fast multivariate nonlinear regression fitting of the chlorophyll fluorescence kinetics curve. We realized inversion of the detailed parameters Fo (fixed fluorescence), Fm (maximum fluorescence yield), and σPSII (PSII functional absorption cross section), as well as the photosynthetic parameters of Chlorella pyrenoidosa, and we also studied the physiological variation of Chlorella pyrenoidosa under Cu(2+) stress.
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software.
Fabregat-Traver, Diego; Sharapov, Sodbo Zh; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that different computational algorithms are optimal for the analysis of single-trait and multiple-trait scenarios. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
NASA Astrophysics Data System (ADS)
Eladj, Said; Bansir, Fateh; Ouadfeul, Sid Ali
2016-04-01
The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness relative to the total population, so a good chromosome has a greater opportunity to produce offspring than other chromosomes in the population. The advantage of the HGA/SAA combination is the use of a global search approach over a large population of local maxima to significantly improve the performance of the method. To define the parameters of the Hybrid Genetic Algorithm Steepest Ascent Auto Statics (HGA/SAA) job, we first evaluated, by testing the "Steepest Ascent" stage, the optimal parameters related to the data used. 1- The number of hill-climbing iterations is equal to 40. This parameter defines the participation of the "SA" algorithm in this hybrid approach. 2- The minimum eigenvalue for SA = 0.8. This is linked to the quality of the data and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations of HGA/SAA. Using the values of residual static corrections already calculated by the "SAA and CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimal number of generations of the hybrid genetic algorithm "HGA" required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons. This will allow
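The fitness-proportional reproduction described above can be sketched in a few lines. This is a toy one-max illustration, not the authors' HGA/SAA code; the population size, mutation rate and objective function are all hypothetical choices.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=40, generations=20, seed=0):
    """Minimal GA sketch: fitness-proportional selection, one-point
    crossover and bit-flip mutation on binary chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in pop]
        total = sum(scores)
        # Fitter chromosomes get a greater chance to produce offspring.
        weights = [s / total for s in scores]
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = rng.choices(pop, weights=weights, k=2)
            cut = rng.randrange(1, n_genes)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.05:              # occasional bit-flip mutation
                child[rng.randrange(n_genes)] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy objective: maximize the number of ones ("one-max"); +1 avoids zero fitness.
best = genetic_algorithm(lambda c: sum(c) + 1, n_genes=16)
```

The same selection/crossover/mutation loop underlies the HGA stage; the hybrid method additionally refines candidates with steepest-ascent hill climbing.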
NASA Astrophysics Data System (ADS)
Hunziker, J.; Thorbecke, J.; Slob, E. C.
2014-12-01
Commonly, electromagnetic measurements for exploring and monitoring hydrocarbon reservoirs are inverted for the subsurface conductivity distribution by minimizing the difference between the actual data and a forward modeled dataset. The convergence of the inversion process to the correct solution strongly depends on the shape of the solution space. Since this is a non-linear problem, there exists a multitude of minima, of which only the global one provides the correct conductivity values. To easily find the global minimum we desire it to have a broad cone of attraction, while it should also feature a very narrow bottom in order to obtain the subsurface conductivity with high resolution. In this study, we aim to determine which combination of input data corresponds to a favorable shape of the solution space. Since the solution space is N-dimensional, with N being the number of unknown subsurface parameters, plotting it is out of the question. In our approach, we use a genetic algorithm (Goldberg, 1989) to probe the solution space. Such algorithms have the advantage that every run of the same problem will end up at a different solution. Most of these solutions are expected to lie close to the global minimum. A situation where only few runs end up in the global minimum indicates that the solution space consists of many local minima or that the cone of attraction of the global minimum is small. If many runs end up with a similar data misfit but with a large spread of the subsurface medium parameters in one or more directions, it can be concluded that the chosen data input is not sensitive with respect to those directions. Compared to the study of Hunziker et al. 2014, we also allow inversion for subsurface boundaries and include more combinations of input datasets. The results so far suggest that it is essential to include the magnetic field in the inversion process in order to find the anisotropic conductivity values. References: Goldberg, D. E., 1989. Genetic
Comparison of algorithms for non-linear inverse 3D electrical tomography reconstruction.
Molinari, Marc; Cox, Simon J; Blott, Barry H; Daniell, Geoffrey J
2002-02-01
Non-linear electrical impedance tomography reconstruction algorithms usually employ the Newton-Raphson iteration scheme to image the conductivity distribution inside the body. For complex 3D problems, the application of this method is no longer feasible due to the large matrices involved and their high storage requirements. In this paper we demonstrate the suitability of an alternative conjugate gradient reconstruction algorithm for 3D tomographic imaging, incorporating adaptive mesh refinement and requiring less storage space than the Newton-Raphson scheme. We compare the reconstruction efficiency of both algorithms for a simple 3D head model. The results show that an increase in speed of about 30% is achievable with the conjugate gradient-based method without loss of accuracy.
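The storage argument can be made concrete: a conjugate-gradient solver needs only matrix-vector products, never an assembled and factorized system matrix. Below is a generic textbook CG iteration (a sketch, not the paper's adaptive-mesh implementation).

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Textbook conjugate-gradient solve of A x = b for a symmetric
    positive-definite A. Only matrix-vector products A @ p are needed,
    so A may be sparse or matrix-free; no factorization is stored,
    unlike a Newton-Raphson update with a dense system matrix."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# Small SPD demo system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n iterations; in practice it is stopped at a residual tolerance, which is what makes it attractive for large 3D meshes.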
NASA Astrophysics Data System (ADS)
Göktürkler, G.; Balkaya, Ç.
2012-10-01
Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies generated by polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, some SP anomalies observed over a copper belt (India), graphite deposits (Germany) and metallic sulfide (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies the solutions by GA, PSO and SA were found to be consistent with each other; a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms.
Li, Mao; Wittek, Adam; Miller, Karol
2014-01-01
Biomechanical modeling methods can be used to predict deformations for medical image registration and particularly, they are very effective for whole-body computed tomography (CT) image registration because differences between the source and target images caused by complex articulated motions and soft tissues deformations are very large. The biomechanics-based image registration method needs to deform the source images using the deformation field predicted by finite element models (FEMs). In practice, the global and local coordinate systems are used in finite element analysis. This involves the transformation of coordinates from the global coordinate system to the local coordinate system when calculating the global coordinates of image voxels for warping images. In this paper, we present an efficient numerical inverse isoparametric mapping algorithm to calculate the local coordinates of arbitrary points within the eight-noded hexahedral finite element. Verification of the algorithm for a nonparallelepiped hexahedral element confirms its accuracy, fast convergence, and efficiency. The algorithm's application in warping of the whole-body CT using the deformation field predicted by means of a biomechanical FEM confirms its reliability in the context of whole-body CT registration. PMID:24828796
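The inverse isoparametric mapping for an eight-noded hexahedron can be sketched as a Newton iteration on the trilinear shape functions. This is a generic version for illustration; the authors' algorithm may differ in damping and stopping criteria.

```python
import numpy as np

# Local corner coordinates of the eight-noded hexahedral element.
CORNERS = np.array([[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
                    [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]], dtype=float)

def shape_functions(xi):
    """Trilinear shape functions N_i(xi) and their gradients dN_i/dxi."""
    N = np.prod(1.0 + CORNERS * xi, axis=1) / 8.0
    dN = np.empty((8, 3))
    for k in range(3):
        terms = 1.0 + CORNERS * xi
        terms[:, k] = CORNERS[:, k]      # d/dxi_k of the k-th factor
        dN[:, k] = np.prod(terms, axis=1) / 8.0
    return N, dN

def inverse_map(nodes, x_target, tol=1e-12, max_iter=25):
    """Newton iteration for the local coordinates xi of a global point
    x_target within an element with global node coordinates `nodes` (8x3)."""
    xi = np.zeros(3)                     # start at the element centre
    for _ in range(max_iter):
        N, dN = shape_functions(xi)
        residual = N @ nodes - x_target  # x(xi) - x_target
        if np.linalg.norm(residual) < tol:
            break
        J = nodes.T @ dN                 # Jacobian dx/dxi (3x3)
        xi -= np.linalg.solve(J, residual)
    return xi

# Verification on a non-parallelepiped hexahedron (one distorted corner).
nodes = (CORNERS + 1.0) / 2.0            # unit cube
nodes[6] = [1.2, 1.1, 1.3]               # distort a corner
xi_true = np.array([0.3, -0.2, 0.5])
N, _ = shape_functions(xi_true)
x = N @ nodes                            # forward map of the known point
xi = inverse_map(nodes, x)
```

For mildly distorted elements the iteration converges quadratically from the element centre, mirroring the fast convergence reported in the abstract.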
Bracarda, Sergio; Sisani, Michele; Marrocolo, Francesca; Hamzaj, Alketa; del Buono, Sabrina; De Simone, Valeria
2014-03-01
Metastatic renal cell carcinoma (mRCC), considered almost an orphan disease only six years ago, appears today a very dynamic pathology. The recent switch to the current overcrowded scenario, defined by seven active drugs, has driven physicians to a state of uncertainty, due to difficulties in defining the best possible treatment strategy. This situation is mainly related to the absence of predictive biomarkers for any available or new therapy. This issue, combined with the near absence of published face-to-face studies, paints a complex picture. In order to solve this dilemma, decisional algorithms tailored on drug efficacy data and patient profile are recognized as very useful tools. These approaches try to select the best therapy suitable for every patient profile. On the contrary, the present review has the "goal" of suggesting a reverse approach: based on the pivotal studies, post-marketing surveillance reports and our experience, we defined the polarizing toxicity (the most frequent toxicity in the light of clinical experience) for every single therapy, creating a new algorithm able to identify the patient profile, mainly comorbidities, unquestionably unsuitable for each single agent presently available for either first- or second-line therapy. The GOAL inverse decision-making algorithm, proposed at the end of this review, allows selection of the best therapy for mRCC by reducing the risk of limiting toxicities. PMID:24309065
NASA Technical Reports Server (NTRS)
Kurtz, M. J.; Huchra, J. P.; Beers, T. C.; Geller, M. J.; Gioia, I. M.
1985-01-01
X-ray and optical observations of the cluster of galaxies Abell 744 are presented. The X-ray flux (assuming H(0) = 100 km/s per Mpc) is about 9 x 10 to the 42nd erg/s. The X-ray source is extended, but shows no other structure. Photographic photometry (in Kron-Cousins R), calibrated by deep CCD frames, is presented for all galaxies brighter than 19th magnitude within 0.75 Mpc of the cluster center. The luminosity function is normal, and the isopleths show little evidence of substructure near the cluster center. The cluster has a dominant central galaxy, which is classified as a normal brightest-cluster elliptical on the basis of its luminosity profile. New redshifts were obtained for 26 galaxies in the vicinity of the cluster center; 20 appear to be cluster members. The spatial distribution of redshifts is peculiar; the dispersion within the 150 kpc core radius is much greater than outside. Abell 744 is similar to the nearby cluster Abell 1060.
NASA Astrophysics Data System (ADS)
Yoon, Kyung-Beom; Park, Won-Hee
2015-04-01
The convective heat transfer coefficient and surface emissivity before and after flame occurrence on a wood specimen surface and the flame heat flux were estimated using the repulsive particle swarm optimization algorithm and cone heater test results. The cone heater specified in the ISO 5660 standards was used, and six cone heater heat fluxes were tested. Preservative-treated Douglas fir 21 mm in thickness was used as the wood specimen in the tests. This study confirmed that the surface temperature of the specimen, which was calculated using the convective heat transfer coefficient, surface emissivity and flame heat flux on the wood specimen by a repulsive particle swarm optimization algorithm, was consistent with the measured temperature. Considering the measurement errors in the surface temperature of the specimen, the applicability of the optimization method considered in this study was evaluated.
An algorithm for inverse synthetic aperture imaging lidar based on sparse signal representation
NASA Astrophysics Data System (ADS)
Ren, X. Z.; Sun, X. M.
2014-12-01
In actual applications of inverse synthetic aperture imaging lidar, the issue of sparse aperture data arises when continuous measurements are impossible or the collected data during some periods are not valid. Hence, the imaging results obtained by traditional methods are limited by high sidelobes. Considering the sparse structure of actual target space in high frequency radar application, a novel imaging method based on sparse signal representation is proposed in this paper. Firstly, the range image is acquired by traditional pulse compression of the optical heterodyne process. Then, the redundant dictionary is constructed through the sparse azimuth sampling positions and the signal form after the range compression. Finally, the imaging results are obtained by solving an ill-posed problem based on sparse regularization. Simulation results confirm the effectiveness of the proposed method.
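Solving the sparse-regularized ill-posed problem is commonly done with iterative shrinkage-thresholding; the abstract does not name the solver, so the ISTA sketch below is an assumption, shown for the generic objective 0.5*||Ax - b||^2 + lam*||x||_1.

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    """Iterative shrinkage-thresholding (ISTA) for the sparse-regularization
    problem min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)            # gradient of the data-fidelity term
        z = x - g / L                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy usage: with A = I the result is simply the soft-thresholded data,
# so small "noisy" entries are zeroed out while large ones shrink by lam.
x = ista(np.eye(5), np.array([3.0, 0.05, -2.0, 0.0, 0.01]), lam=0.1)
```

In the lidar setting A would be a redundant dictionary built from the sparse azimuth sampling positions; here it is a placeholder identity for illustration.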
Arnold, Alexander; Bruhns, Otto T; Mosler, Jörn
2011-07-21
A novel finite element formulation suitable for computing efficiently the stiffness distribution in soft biological tissue is presented in this paper. For that purpose, the inverse problem of finite strain hyperelasticity is considered and solved iteratively. In line with Arnold et al (2010 Phys. Med. Biol. 55 2035), the computing time is effectively reduced by using adaptive finite element methods. In sharp contrast to previous approaches, the novel mesh adaption relies on an r-adaption (re-allocation of the nodes within the finite element triangulation). This method allows the detection of material interfaces between healthy and diseased tissue in a very effective manner. The evolution of the nodal positions is canonically driven by the same minimization principle characterizing the inverse problem of hyperelasticity. Consequently, the proposed mesh adaption is variationally consistent. Furthermore, it guarantees that the quality of the numerical solution is improved. Since the proposed r-adaption requires only a relatively coarse triangulation for detecting material interfaces, the underlying finite element spaces are usually not rich enough for predicting the deformation field sufficiently accurately (the forward problem). For this reason, the novel variational r-refinement is combined with the variational h-adaption (Arnold et al 2010) to obtain a variational hr-refinement algorithm. The resulting approach captures material interfaces well (by using r-adaption) and predicts a deformation field in good agreement with that observed experimentally (by using h-adaption).
NASA Astrophysics Data System (ADS)
Palacios, S. L.; Schafer, C. B.; Broughton, J.; Guild, L. S.; Kudela, R. M.
2013-12-01
There is a need in the Biological Oceanography community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand energy flow through ecosystems, to track the fate of carbon in the ocean, and to detect and monitor for harmful algal blooms (HABs). The ocean color community has responded to this demand with the development of phytoplankton functional type (PFT) discrimination algorithms. These PFT algorithms fall into one of three categories depending on the science application: size-based, biogeochemical function, and taxonomy. The new PFT algorithm Phytoplankton Detection with Optics (PHYDOTax) is an inversion algorithm that discriminates taxon-specific biomass to differentiate among six taxa found in the California Current System: diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes, and cyanophytes. PHYDOTax was developed and validated in Monterey Bay, CA for the high resolution imaging spectrometer, Spectroscopic Aerial Mapping System with On-board Navigation (SAMSON - 3.5 nm resolution). PHYDOTax exploits the high spectral resolution of an imaging spectrometer and the improved spatial resolution that airborne data provides for coastal areas. The objective of this study was to apply PHYDOTax to a relatively lower resolution imaging spectrometer to test the algorithm's sensitivity to atmospheric correction, to evaluate capability with other sensors, and to determine if down-sampling spectral resolution would degrade its ability to discriminate among phytoplankton taxa. This study is a part of the larger Hyperspectral Infrared Imager (HyspIRI) airborne simulation campaign which is collecting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery aboard NASA's ER-2 aircraft during three seasons in each of two years over terrestrial and marine targets in California. Our aquatic component seeks to develop and test algorithms to retrieve water quality properties (e.g. HABs and river plumes) in both marine and in
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-12-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ≫1 and |m-1|≪1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as versatile distribution functions to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available.
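For reference, the anomalous diffraction approximation for a non-absorbing sphere has the closed form Q_ext(ρ) = 2 - (4/ρ)sin ρ + (4/ρ²)(1 - cos ρ), with phase-shift parameter ρ = 2χ(m - 1); combined with the Beer-Lambert law it yields a simple spectral extinction forward model. The sketch below is illustrative only: the discretized size grid, toy size distribution and refractive index m = 1.5 are assumptions, not the paper's setup.

```python
import numpy as np

def q_ext_ada(x, m):
    """Extinction efficiency of a non-absorbing sphere in the anomalous
    diffraction approximation (van de Hulst), valid for size parameter
    x >> 1 and |m - 1| << 1; rho = 2 x (m - 1) is the phase shift."""
    rho = 2.0 * x * (m - 1.0)
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

def extinction(wavelengths, diameters, number_density, m=1.5):
    """Beer-Lambert spectral extinction for a discretized size distribution:
    tau(lambda) = sum_i Q_ext(x_i) * area_i * N_i."""
    tau = []
    for lam in wavelengths:
        x = np.pi * diameters / lam              # size parameter chi
        areas = np.pi * (diameters / 2.0) ** 2   # geometric cross sections
        tau.append(np.sum(q_ext_ada(x, m) * areas * number_density))
    return np.array(tau)

# Toy demo: three visible wavelengths (um) and a lognormal-like distribution.
wavelengths = np.array([0.40, 0.55, 0.80])
diameters = np.linspace(0.1, 2.0, 50)
number_density = np.exp(-np.log(diameters / 0.5) ** 2)
tau = extinction(wavelengths, diameters, number_density)
```

The inverse problem tackled by PDF-ACO is to recover `number_density` (or its parametric form) from measured `tau` at several wavelengths.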
NASA Astrophysics Data System (ADS)
Auvinen, Harri; Oikarinen, Liisa; Kyrölä, Erkki
2002-07-01
Stratospheric ozone can be measured with good global coverage and good vertical resolution by continuous scanning of the limb of the sunlit atmosphere. In the near future there will be several satellite instruments exploiting this limb-scanning method using the UV-visible region of the spectrum, e.g., the Optical Spectrograph and Infrared Imaging System (OSIRIS) launched on the Odin satellite in February 2001, and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography launched on Envisat in March 2002. Envisat also carries the Global Ozone Monitoring by Occultation of Stars instrument, which will measure limb-scattered sunlight under bright limb occultation conditions. In this paper we present an inversion method to retrieve vertical ozone profiles from limb scatter measurements. The method uses a modified onion-peeling approach. Multiple scattering is taken into account by precalculated total-to-single-scattering radiance ratios tabulated as a function of wavelength, tangent altitude, and several other relevant parameters. The sensitivity of the retrieval method is studied using the OSIRIS instrument as an example. Constituent retrieval errors are estimated by applying the method to simulated OSIRIS measurements.
NASA Astrophysics Data System (ADS)
Zhao, Jingtao; Peng, Suping; Du, Wenfeng
2016-02-01
We consider a sparsity-constrained inversion method for detecting seismic small-scale discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these seismic small-scale discontinuities are hard to identify using currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed, and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smooth features, we propose an effective L2-L0 norm model to improve their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build an L2-L0 norm model. In searching for its solution, an efficient and fast convergent penalty decomposition method is employed. The proposed method can achieve a significant improvement in enhancing seismic small-scale discontinuities. A numerical experiment and a field data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.
NASA Astrophysics Data System (ADS)
Gurarslan, Gurhan; Karahan, Halil
2015-09-01
In this study, an accurate model was developed for solving problems of groundwater-pollution-source identification. In the developed model, the numerical simulations of flow and pollutant transport in groundwater were carried out using MODFLOW and MT3DMS software. The optimization processes were carried out using a differential evolution algorithm. The performance of the developed model was tested on two hypothetical aquifer models using real and noisy observation data. In the first model, the release histories of the pollution sources were determined assuming that the numbers, locations and active stress periods of the sources are known. In the second model, the release histories of the pollution sources were determined assuming that there is no information on the sources. The results obtained by the developed model were found to be better than those reported in literature.
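The optimization step can be illustrated with SciPy's differential evolution on a toy source-identification problem. The convolution kernel below is a stand-in for the MODFLOW/MT3DMS forward model and is purely an assumption for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the transport forward model: observed concentrations are
# a convolution of the release history with a fixed (hypothetical) kernel.
kernel = np.array([0.5, 0.3, 0.15, 0.05])

def forward(release):
    return np.convolve(release, kernel)[: len(release)]

# Synthetic "true" release history and the resulting observations.
true_release = np.array([2.0, 5.0, 1.0, 0.0, 3.0])
observed = forward(true_release)

def misfit(release):
    """Sum-of-squares difference between simulated and observed data."""
    return np.sum((forward(release) - observed) ** 2)

# Differential evolution searches the bounded release-history space globally.
result = differential_evolution(misfit, bounds=[(0.0, 10.0)] * 5, seed=1, tol=1e-10)
```

Because this toy problem is convex with a unique minimizer, the algorithm (with its default local polishing step) recovers the release history essentially exactly; the real problem adds noise and an expensive numerical forward model.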
SelInv - An Algorithm for Selected Inversion of a Sparse Symmetric Matrix
Lin, Lin; Yang, Chao; Meza, Juan C.; Lu, Jianfeng; Ying, Lexing; E, Weinan
2009-10-16
We describe an efficient implementation of an algorithm for computing selected elements of a general sparse symmetric matrix A that can be decomposed as A = LDL^T, where L is lower triangular and D is diagonal. Our implementation, which is called SelInv, is built on top of an efficient supernodal left-looking LDL^T factorization of A. We discuss how computational efficiency can be gained by making use of a relative index array to handle indirect addressing. We report the performance of SelInv on a collection of sparse matrices of various sizes and nonzero structures. We also demonstrate how SelInv can be used in electronic structure calculations.
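The recurrence at the heart of selected inversion (the Takahashi equations) can be shown in a dense toy version: given A = LDL^T, the entries of Z = A^{-1} are filled backwards from the last column. SelInv applies the same recurrence to sparse supernodal factors and computes only entries in the sparsity pattern of L; the dense sketch below is for checking values only.

```python
import numpy as np

def takahashi_inverse(L, d):
    """Dense version of the Takahashi recurrence behind selected inversion:
    given A = L D L^T (L unit lower triangular, d the diagonal of D),
    fill Z = A^{-1} backwards from the last column via
        Z[i, j] = delta_ij / d[j] - sum_{k > j} Z[i, k] * L[k, j],
    which follows from Z L D = L^{-T} on the lower triangle."""
    n = len(d)
    Z = np.zeros((n, n))
    for j in range(n - 1, -1, -1):
        for i in range(n - 1, j - 1, -1):
            s = 1.0 / d[j] if i == j else 0.0
            for k in range(j + 1, n):
                z_ik = Z[i, k] if i >= k else Z[k, i]  # use symmetry of Z
                s -= z_ik * L[k, j]
            Z[i, j] = Z[j, i] = s
    return Z

# Check against a dense inverse on a small SPD matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4.0 * np.eye(4)
R = np.linalg.cholesky(A)   # A = R R^T
d = np.diag(R) ** 2         # D = diag(R)^2
Lf = R / np.diag(R)         # unit lower-triangular factor, A = Lf D Lf^T
Z = takahashi_inverse(Lf, d)
```

In the sparse setting the inner sums run only over the nonzeros of column j of L, which is what makes selected inversion far cheaper than forming the full inverse.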
Tauberian theorems for Abel summability of sequences of fuzzy numbers
NASA Astrophysics Data System (ADS)
Yavuz, Enes; Çoşkun, Hüsamettin
2015-09-01
We give some conditions under which Abel summable sequences of fuzzy numbers are convergent. As corollaries we obtain the results given in [E. Yavuz, Ö. Talo, Abel summability of sequences of fuzzy numbers, Soft Computing 2014, doi: 10.1007/s00500-014-1563-7].
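For orientation, the classical real-valued definitions behind the abstract are as follows; the fuzzy-number versions in the cited work replace the limit with convergence in the metric on fuzzy numbers (an assumption here, stated only as backdrop).

```latex
% A series \sum a_k is Abel summable to s if its Abel means converge:
\lim_{x \to 1^-} \sum_{k=0}^{\infty} a_k x^k = s .
% A sequence (u_k) is Abel summable to \mu if
\lim_{x \to 1^-} (1 - x) \sum_{k=0}^{\infty} u_k x^k = \mu .
% Tauber's classical theorem is the prototype of such results:
% if \sum a_k is Abel summable to s and k\,a_k \to 0, then \sum a_k = s,
% i.e. Abel summability plus a growth condition implies ordinary convergence.
```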
NASA Astrophysics Data System (ADS)
Liu, Yi; Yin, Zengshan; Yang, Zhongdong; Zheng, Yuquan; Yan, Changxiang; Tian, Xiangjun; Yang, Dongxu
2016-04-01
After 5 years of development, the Chinese carbon dioxide observation satellite (TanSat), the first scientific experimental CO2 satellite of China, has stepped into the pre-launch phase. The characteristics of the pre-launch carbon dioxide spectrometer have been optimized during laboratory testing and calibration. Radiometric calibration shows an SNR of 440 (O2A 0.76 um band), 300 (CO2 1.61 um band) and 180 (CO2 2.06 um band) on average under typical radiance conditions. The instrument line shape was calibrated automatically using a well-designed testing system with laser control and recording. After a series of tests and calibrations in the laboratory, the instrumental performances meet the design requirements. TanSat will be launched in August 2016. Optimal estimation theory is employed in the TanSat XCO2 retrieval algorithm in a full-physics way, with simulation of the radiance transfer in the atmosphere. Gas absorption, aerosol and cirrus scattering, and surface reflectance associated with wavelength dispersion have been considered in the inversion to better correct the interference errors to XCO2. In order to simulate the radiance transfer precisely and efficiently, we developed a fast vector radiative transfer simulation method. Application of the TanSat algorithm on GOSAT observations (ATANGO) is appropriate to evaluate the performance of the algorithm. Validated against TCCON measurements, the ATANGO product achieves a 1.5 ppm precision. A Chinese carbon cycle data-assimilation system, Tan-Tracker, is developed based on the atmospheric chemical transport model GEOS-Chem. Tan-Tracker is a dual-pass data-assimilation system in which both CO2 concentrations and CO2 fluxes are simultaneously assimilated from atmospheric observations. A validation network has been established around China to support a series of CO2 satellites of China, which includes 3 IFS-125HR and 4 Optical Spectrum Analyzers etc.
NASA Astrophysics Data System (ADS)
Kanao, M.; Shibutani, T.
2005-12-01
Seismic shear velocity models of the crust and the uppermost mantle were studied by teleseismic receiver function analyses beneath the permanent stations of the Federation of Digital Seismographic Networks (FDSN) at Antarctic continental margins. In order to eliminate the starting model dependency, a non-linear Genetic Algorithm (GA) was introduced in the time domain inversion of the receiver functions. A large number of velocity models with an acceptable fit to the receiver function waveforms were generated during the inversion, and a stable model was produced by employing a weighted average of the best 1,000 models encountered in the development of the GA. The shear velocity model beneath MAW (67.6S, 62.9E) has a sharp Moho boundary at 44 km depth that might have been involved in a reworked metamorphic event of the adjacent Archaean Napier Complex. A fairly sharp Moho was identified at about 28 km depth beneath DRV (66.7S, 140.0E), with a moderate variation of the crustal velocities that might have been caused by the Early Proterozoic metamorphism. A similarly sharp Moho has been found at 40 km beneath SYO (69.0S, 39.6E). This Moho depth is consistent with that from refraction / wide-angle reflection surveys around the station. Fairly complicated velocity variations within the crust may have a relationship with the lithology of granulite facies metamorphic rocks in the shallow crust associated with Pan-African events. Broad low velocity zones at about 30 km depth, with a transitional crust-mantle boundary, at VNDA (77.5S, 161.9E) might be caused by the rift system beside the Trans Antarctic Mountains. As for the Antarctic Peninsula, a very broad Moho was found at around 36 km depth around PMSA (64.8S, 64.0W). The evidence of velocity variations within the crust reflects the tectonic histories of each terrane where these permanent stations are located.
Semenov, Alexander; Zaikin, Oleg
2016-01-01
In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used for solving corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of them is a set of simplified versions of the original SAT instance. The effectiveness of an arbitrary partitioning is determined by the total time of solving of all SAT instances from it. We suggest the approach, based on the Monte Carlo method, for estimating time of processing of an arbitrary partitioning. With each partitioning we associate a point in the special finite search space. The estimation of effectiveness of the particular partitioning is the value of predictive function in the corresponding point of this space. The problem of search for an effective partitioning can be formulated as a problem of optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving time were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving time agrees well with estimations obtained by the proposed method. PMID:27190753
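The Monte Carlo estimate of a partitioning's total processing time can be sketched as sample-mean extrapolation. This is a toy version: `solve_time` stands in for actually running a SAT solver on a simplified instance, and all names are hypothetical.

```python
import random

def estimate_partitioning_time(solve_time, partitioning, n_samples=100, seed=0):
    """Monte Carlo sketch of the predictive function: the total time to
    process a partitioning is estimated as (number of simplified instances)
    times the mean solving time over a random sample of them."""
    rng = random.Random(seed)
    sample = [partitioning[rng.randrange(len(partitioning))]
              for _ in range(n_samples)]
    mean_time = sum(solve_time(inst) for inst in sample) / n_samples
    return mean_time * len(partitioning)

# Toy check: if every instance takes 2 s, a 50-instance partitioning
# is estimated at 100 s of total solving time.
estimate = estimate_partitioning_time(lambda inst: 2.0, list(range(50)))
```

In the paper's setting this estimate is the objective that simulated annealing and tabu search minimize while moving between points of the partitioning search space.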
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Wang, Hua; Fan, Yiren; Cao, Yingchang; Chen, Hua; Huang, Rui
2016-01-01
With more information than the conventional one-dimensional (1D) longitudinal relaxation time (T1) and transverse relaxation time (T2) spectra, a two-dimensional (2D) T1-T2 spectrum in low-field nuclear magnetic resonance (NMR) can discriminate the relaxation components of fluids such as water, oil and gas in porous rock. However, the accuracy and efficiency of the T1-T2 spectrum are limited by existing inversion algorithms and data acquisition schemes. We introduce a joint method to invert the T1-T2 spectrum, which combines iterative truncated singular value decomposition (TSVD) and a parallel particle swarm optimization (PSO) algorithm to obtain fast computational speed and stable solutions. We recast the first-kind Fredholm integral equation with two kernels as a nonlinear optimization problem with non-negative constraints, and then solve the ill-conditioned problem by iterative TSVD. Truncation positions of the two diagonal matrices are obtained by the Akaike information criterion (AIC). With initial values obtained by TSVD, we use a PSO with a parallel structure to find the global optimal solution at high computational speed. We use synthetic data with different signal-to-noise ratios (SNR) to test the performance of the proposed method. The results show that the new inversion algorithm achieves favorable solutions for signals with SNR larger than 10, and that the inversion precision increases as the number of components in the porous rock decreases.
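As a rough illustration of the TSVD building block, a truncated-SVD solve of a discretized first-kind integral equation might look like the following. The kernel matrix `K`, data vector `d`, and the post-hoc clipping used to mimic the non-negativity constraint are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def tsvd_solve(K, d, k):
    """Truncated SVD solution of K @ f = d.

    Keeps only the k largest singular values, which regularizes the
    ill-conditioned system; clipping at zero crudely enforces the
    non-negativity expected of a relaxation spectrum.
    """
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    f = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])
    return np.clip(f, 0.0, None)
```

In the paper's scheme the truncation level would be chosen by AIC and the TSVD result would seed the PSO search rather than serve as the final answer.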
NASA Astrophysics Data System (ADS)
Auken, Esben; Christiansen, Anders Vest; Kirkegaard, Casper; Fiandaca, Gianluca; Schamper, Cyril; Behroozmand, Ahmad Ali; Binley, Andrew; Nielsen, Emil; Effersø, Flemming; Christensen, Niels Bøie; Sørensen, Kurt; Foged, Nikolaj; Vignoli, Giulio
2015-07-01
We present an overview of a mature, robust and general algorithm providing a single framework for the inversion of most electromagnetic and electrical data types and instrument geometries. The implementation mainly uses a 1D earth formulation for electromagnetics and magnetic resonance sounding (MRS) responses, while the geoelectric responses are both 1D and 2D and the sheet's response models a 3D conductive sheet in a conductive host with an overburden of varying thickness and resistivity. In all cases, the focus is placed on delivering full system forward modelling across all supported types of data. Our implementation is modular, meaning that the bulk of the algorithm is independent of data type, making it easy to add support for new types. Having implemented forward response routines and file I/O for a given data type provides access to a robust and general inversion engine. This engine includes support for mixed data types, arbitrary model parameter constraints, integration of prior information and calculation of both model parameter sensitivity analysis and depth of investigation. We present a review of our implementation and methodology and show four different examples illustrating the versatility of the algorithm. The first example is a laterally constrained joint inversion (LCI) of surface time domain induced polarisation (TDIP) data and borehole TDIP data. The second example shows a spatially constrained inversion (SCI) of airborne transient electromagnetic (AEM) data. The third example is an inversion and sensitivity analysis of MRS data, where the electrical structure is constrained with AEM data. The fourth example is an inversion of AEM data, where the model is described by a 3D sheet in a layered conductive host.
Are Abell Clusters Correlated with Gamma-Ray Bursts?
NASA Technical Reports Server (NTRS)
Hurley, K.; Hartmann, D.; Kouveliotou, C.; Fishman, G.; Laros, J.; Cline, T.; Boer, M.
1997-01-01
A recent study has presented marginal statistical evidence that gamma-ray burst (GRB) sources are correlated with Abell clusters, based on analyses of bursts in the BATSE 3B catalog. Using precise localization information from the Third Interplanetary Network, we have reanalyzed this possible correlation. We find that most of the Abell clusters that are in the relatively large 3B error circles are not in the much smaller IPN/BATSE error regions. We believe that this argues strongly against an Abell cluster-GRB correlation.
The Abell 85 BCG: A Nucleated, Coreless Galaxy
NASA Astrophysics Data System (ADS)
Madrid, Juan P.; Donzelli, Carlos J.
2016-03-01
New high-resolution r-band imaging of the brightest cluster galaxy (BCG) in Abell 85 (Holm 15A) was obtained using the Gemini Multi Object Spectrograph. These data were taken with the aim of deriving an accurate surface brightness profile of the BCG of Abell 85, in particular, its central region. The new Gemini data show clear evidence of a previously unreported nuclear emission that is evident as a distinct light excess in the central kiloparsec of the surface brightness profile. We find that the light profile is never flat nor does it present a downward trend toward the center of the galaxy. That is, the new Gemini data show a different physical reality from the featureless, “evacuated core” recently claimed for the Abell 85 BCG. After trying different models, we find that the surface brightness profile of the BCG of Abell 85 is best fit by a double Sérsic model.
The genus curve of the Abell clusters
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Gott, J. Richard, III; Postman, Marc
1994-01-01
We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21(sub -0.47 sup +0.43) on scales of 48/h Mpc or to a cold dark matter power spectrum with omega h = 0.36(sub -0.17 sup +0.46).
NASA Astrophysics Data System (ADS)
Chen, Ying; Lo, Joseph Y.; Baker, Jay A.; Dobbins, James T., III
2006-03-01
Breast cancer is a major health problem and the most common cancer among women. The nature of conventional mammography makes it very difficult to distinguish a cancer from overlying breast tissues. Digital tomosynthesis refers to a three-dimensional imaging technique that allows reconstruction of an arbitrary set of planes in the breast from a limited-angle series of projection images acquired as the x-ray source moves. Several tomosynthesis algorithms have been proposed, including Matrix Inversion Tomosynthesis (MITS) and Filtered Back Projection (FBP), both of which have been investigated in our lab. MITS shows better high-frequency response in removing out-of-plane blur, while FBP shows better low-frequency noise properties. This paper presents an effort to combine MITS and FBP for better breast tomosynthesis reconstruction. A high-pass Gaussian filter was designed and applied to three-slice "slabbing" MITS reconstructions. A low-pass Gaussian filter was designed and applied to the FBP reconstructions. A frequency weighting parameter was studied to blend the high-passed MITS with the low-passed FBP frequency components. Four different reconstruction methods were investigated and compared with human subject images: 1) MITS blended with Shift-And-Add (SAA), 2) FBP alone, 3) FBP with applied Hamming and Gaussian filters, and 4) Gaussian Frequency Blending (GFB) of MITS and FBP. Results showed that, compared with FBP, GFB has better performance for high-frequency content, such as better reconstruction of micro-calcifications and removal of high-frequency noise. Compared with MITS, GFB showed more low-frequency breast tissue content.
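The frequency-blending idea above can be sketched in a few lines: weight the MITS slice by a Gaussian high-pass and the FBP slice by the complementary low-pass in the 2-D Fourier domain. The isotropic filter shape and the width parameter `sigma` are assumptions for illustration, not the filters designed in the paper.

```python
import numpy as np

def gaussian_blend(mits_img, fbp_img, sigma=0.15):
    """Blend MITS and FBP reconstructions in the Fourier domain.

    The low-pass keeps FBP's low-frequency tissue content; the
    complementary high-pass keeps MITS's high-frequency detail, so the
    two weights sum to one at every spatial frequency.
    """
    ny, nx = mits_img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    r2 = fx**2 + fy**2
    lowpass = np.exp(-r2 / (2.0 * sigma**2))   # FBP contribution
    highpass = 1.0 - lowpass                    # MITS contribution
    blended = np.fft.ifft2(highpass * np.fft.fft2(mits_img)
                           + lowpass * np.fft.fft2(fbp_img))
    return blended.real
```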
NASA Astrophysics Data System (ADS)
Fang, Hongjian; Zhang, Haijiang; Yao, Huajian; Allam, Amir; Zigone, Dimitri; Ben-Zion, Yehuda; Thurber, Clifford; van der Hilst, Robert D.
2016-05-01
We introduce a new algorithm for joint inversion of body wave and surface wave data to get better 3-D P wave (Vp) and S wave (Vs) velocity models by taking advantage of the complementary strengths of each data set. Our joint inversion algorithm uses a one-step inversion of surface wave traveltime measurements at different periods for 3-D Vs and Vp models without constructing the intermediate phase or group velocity maps. This allows a more straightforward modeling of surface wave traveltime data with the body wave arrival times. We take into consideration the sensitivity of surface wave data with respect to Vp in addition to its large sensitivity to Vs, which means both models are constrained by two different data types. The method is applied to determine 3-D crustal Vp and Vs models using body wave and Rayleigh wave data in the Southern California plate boundary region, which has previously been studied with both double-difference tomography method using body wave arrival times and ambient noise tomography method with Rayleigh and Love wave group velocity dispersion measurements. Our approach creates self-consistent and unique models with no prominent gaps, with Rayleigh wave data resolving shallow and large-scale features and body wave data constraining relatively deeper structures where their ray coverage is good. The velocity model from the joint inversion is consistent with local geological structures and produces better fits to observed seismic waveforms than the current Southern California Earthquake Center (SCEC) model.
The magnitude-redshift relation for 561 Abell clusters
NASA Technical Reports Server (NTRS)
Postman, M.; Huchra, J. P.; Geller, M. J.; Henry, J. P.
1985-01-01
The Hubble diagram for the 561 Abell clusters with measured redshifts has been examined using Abell's (1958) corrected photo-red magnitudes for the tenth-ranked cluster member (m10). After correction for the Scott effect and K dimming, the data are in good agreement with a linear magnitude-redshift relation with a slope of 0.2 out to z = 0.1. New redshift data are also presented for 20 Abell clusters. Abell's m10 is suitable for redshift estimation for clusters with m10 of no more than 16.5. At fainter m10, the number of foreground galaxies expected within an Abell radius is large enough to make identification of the tenth-ranked galaxy difficult. Interlopers bias the estimated redshift toward low values at high redshift. Leir and van den Bergh's (1977) redshift estimates suffer from this same bias but to a smaller degree because of the use of multiple cluster parameters. Constraints on deviations of cluster velocities from the mean cosmological flow require greater photometric accuracy than is provided by Abell's m10 magnitudes.
Chand, S.; Minshull, T.A.; Priest, J.A.; Best, A.I.; Clayton, C.R.I.; Waite, W.F.
2006-01-01
The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L–38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.
NASA Astrophysics Data System (ADS)
Li, Tao; Mallick, Subhashis
2015-02-01
Consideration of azimuthal anisotropy, at least to an orthorhombic symmetry is important in exploring the naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than the inversion of single component (P wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying the multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. These multicomponent and multi-azimuthal seismic inversions are non-linear with non-unique solutions; it is therefore appropriate to treat the objectives as a vector and simultaneously optimize each of its components such that the optimal set of solutions could be obtained. The fast non-dominated sorting genetic algorithm (NSGA II) is a robust stochastic global search method capable of handling multiple objectives, but its computational expense increases with increasing number of objectives and the number of model parameters to be inverted for. In addition, an accurate extraction of subsurface azimuthal anisotropy requires multicomponent seismic data acquired at a fine spatial resolution along many source-to-receiver azimuths. Because routine acquisition of such data is prohibitively expensive, they are typically available along two or at most three azimuthal orientations at a spatial resolution where such an inversion could be applied. This paper proposes a novel multi-objective methodology using a parallelized version of NSGA II for waveform inversion of multicomponent seismic displacement data along two azimuths. By scaling the objectives prior to ranking, redefining the crowding distance as functions of the scaled objective and the model spaces, and varying
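The non-dominated ranking at the core of NSGA-II mentioned above can be sketched minimally as follows (minimization is assumed; crowding distance and the genetic operators are omitted, and the function names are illustrative only):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (Pareto dominance, minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Return the first (Pareto) front: the points dominated by no other.
    NSGA-II ranks the population by repeatedly peeling off such fronts."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

In the multi-azimuth inversion, each point would be a vector of data misfits, one per component per azimuth.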
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnosis methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion peeling method, and the two-dimensional Fourier transform method and its derivatives such as the filtered back projection methods. This paper proposes a new approach for performing the Abel transform, which possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is shown in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. A detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among the various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From this equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low wave number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high wave number noise. The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal
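A minimal numerical Abel inversion in the spirit of the onion-peeling method mentioned above can be sketched as follows; the uniform radial grid and the piecewise-constant ring model are assumptions of this sketch, not of the paper.

```python
import numpy as np

def onion_peel(projection, dr=1.0):
    """Onion-peeling inversion of an Abel-projected radial profile.

    Assumes the axisymmetric distribution is constant within concentric
    rings of width dr; the forward projection is then an upper-triangular
    linear system (chord lengths through each ring), solved directly.
    """
    n = len(projection)
    r = np.arange(n + 1) * dr            # ring boundaries
    W = np.zeros((n, n))
    for i in range(n):                   # line of sight at y = r[i]
        for j in range(i, n):            # rings the line passes through
            W[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                             - np.sqrt(r[j]**2 - r[i]**2))
    return np.linalg.solve(W, projection)
```

As the abstract notes for all exact methods, this inversion amplifies high-wave-number noise, which is why a filtered variant is attractive.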
Chandra View of Galaxy Cluster Abell 2554
NASA Astrophysics Data System (ADS)
kıyami Erdim, Muhammed; Hudaverdi, Murat
2016-07-01
We study the structure of the galaxy cluster Abell 2554 at z = 0.11, a member of the Aquarius Supercluster, using Chandra archival data. The X-ray peak coincides with a bright elliptical cD galaxy. The slightly elongated X-ray plasma has average temperature and metal abundance values of ˜6 keV and 0.28 solar, respectively. We observe small-scale temperature variations in the ICM. There is a significantly hotter wall-like structure at 9 keV in the SE, and a radio lobe is located at the tip of this hot region. A2554 is also part of a cluster trio: its close neighbors A2550 (to the SW) and A2556 (to the SE) are separated from A2554 by only 2 Mpc and 1.5 Mpc, respectively. Considering the temperature fluctuations and the dynamical environment of the supercluster, we examine possible ongoing merger scenarios within A2554.
NASA Astrophysics Data System (ADS)
Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.
2015-12-01
The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ Application Protocol Interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.
NASA Astrophysics Data System (ADS)
Ansari, R.; Campagne, J. E.; Colom, P.; Ferrari, C.; Magneville, Ch.; Martin, J. M.; Moniez, M.; Torrentó, A. S.
2016-02-01
We have observed regions of three galaxy clusters at z˜[0.06÷0.09] (Abell85, Abell1205, Abell2440) with the Nançay radio telescope (NRT) to search for 21 cm emission and to fully characterize the FPGA-based BAORadio digital backend. We have tested the new BAORadio data acquisition system by observing sources in parallel with the NRT standard correlator (ACRT) back-end over several months. BAORadio enables wide-band instantaneous observation of the [1250, 1500] MHz frequency range, as well as the use of powerful RFI mitigation methods thanks to its fine time sampling. A number of questions related to instrument stability, data processing and calibration are discussed. We have obtained the radiometer curves over the integration time range [0.01, 10 000] seconds and we show that sensitivities of a few mJy over most of the wide frequency band can be reached with the NRT. It is clearly shown that in blind line searches, which is the context of H I intensity mapping for Baryon Acoustic Oscillations, the new acquisition system and processing pipeline outperform the standard one. We report a positive detection of 21 cm emission at the 3σ level from galaxies in the outer region of Abell85 at ≃1352 MHz (14 400 km/s), corresponding to a line strength of ≃0.8 Jy km/s. We also observe an excess of power around ≃1318 MHz (21 600 km/s), although at lower statistical significance, compatible with emission from Abell1205 galaxies. Detected radio line emissions have been cross-matched with optical catalogs and we have derived hydrogen mass estimates.
The Dark Matter filament between Abell 222/223
NASA Astrophysics Data System (ADS)
Dietrich, Jörg P.; Werner, Norbert; Clowe, Douglas; Finoguenov, Alexis; Kitching, Tom; Miller, Lance; Simionescu, Aurora
2016-10-01
Weak lensing detections and measurements of filaments have been elusive for a long time. The reason is that the low density contrast of filaments generally pushes the weak lensing signal to unobservably low scales. To nevertheless map the dark matter in filaments, exquisite data and unusual systems are necessary. SuprimeCam observations of the supercluster system Abell 222/223 provided the required combination of excellent-seeing images and a fortuitous alignment of the filament with the line of sight. This boosted the lensing signal to a detectable level and led to the first weak lensing mass measurement of a large-scale structure filament. The filament connecting Abell 222 and Abell 223 is now the only one traced by the galaxy distribution, dark matter, and X-ray emission from the hottest phase of the warm-hot intergalactic medium. The combination of these data allows us to put the first constraints on the hot gas fraction in filaments.
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high-spectral-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using the first-guess information is developed to retrieve temperature, water vapor and ozone atmospheric profiles simultaneously. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres, including rare events.
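The PCA compression and de-noising step described above amounts to projecting each spectrum onto the leading principal components and reconstructing; the sketch below shows the idea with a plain SVD (names and the single-matrix interface are illustrative, not the operational IASI code).

```python
import numpy as np

def pca_compress(X, n_components):
    """Compress and de-noise rows of X (one spectrum per row).

    Projects mean-centered data onto the n_components leading principal
    directions and reconstructs; components beyond the cut, which carry
    mostly noise, are discarded.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T         # compressed representation
    return scores @ Vt[:n_components] + mean  # de-noised reconstruction
```

The compressed scores, rather than the raw channels, would then feed the pattern-recognition first guess and the neural network.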
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
The Merger Dynamics of Abell 2061
NASA Astrophysics Data System (ADS)
Bailey, Avery; Sarazin, Craig L.; Clarke, Tracy E.; Chatzikos, Marios; Hogge, Taylor; Wik, Daniel R.; Rudnick, Lawrence; Farnsworth, Damon; Van Weeren, Reinout J.; Brown, Shea
2016-04-01
Abell 2061, a galaxy cluster at a redshift of z = 0.0784 in the Corona Borealis Supercluster, displays features in both the X-ray and radio indicative of merger activity. Observations by the GBT and the Westerbork Northern Sky Survey (WENSS) have indicated the presence of an extended, central radio halo/relic coincident with the cluster's main X-ray emission and a bright radio relic to the SW of the center of the cluster. Previous observations by ROSAT, Beppo-SAX, and Chandra show an elongated structure (referred to as the 'Plume'), emitting in the soft X-ray and stretching to the NE of the cluster's center. The Beppo-SAX and Chandra observations also suggest the presence of a hard X-ray shock slightly NE of the cluster's center. Here we present the details of an August 2013 XMM-Newton observation of A2061, which has a greater field of view and longer exposure (48.6 ks) than the previous Chandra observation. We present images displaying the cluster's soft and hard X-ray emission and also a temperature map of the cluster. This temperature map highlights the presence of a previously unseen cool region of the cluster, which we hypothesize to be the cool core of one of the subclusters involved in this merger. We also discuss the structural similarity of this cluster with a simulated high mass-ratio offset cluster merger taken from the Simulation Library of Astrophysical cluster Mergers (SLAM). This simulation suggests that the Plume is gas from the cool core of a subcluster which is now falling back into the center of the cluster after initial core passage.
LensPerfect Analysis of Abell 1689
NASA Astrophysics Data System (ADS)
Coe, Dan A.
2007-12-01
I present the first mass map to perfectly reproduce the position of every gravitationally lensed, multiply imaged galaxy detected to date in ACS images of Abell 1689. This mass map was obtained using a powerful new technique made possible by a recent advance in the field of mathematics. It is the highest-resolution assumption-free dark matter mass map to date, with the resolution limited only by the number of multiple images detected. We detect 8 new multiple-image systems and identify multiple knots in individual galaxies to constrain a grand total of 168 knots within 135 multiple images of 42 galaxies. No assumptions are made about mass tracing light, and yet the brightest visible structures in A1689 are reproduced in our mass map, a few with intriguing positional offsets. Our mass map probes radii smaller than those resolvable in current dark matter simulations of galaxy clusters. And at these radii, we observe slight deviations from the NFW and Sersic profiles which describe simulated dark matter halos so well. While we have demonstrated that our method is able to recover a known input mass map (to limited resolution), further tests are necessary to determine the uncertainties of our mass profile and the positions of massive subclumps. I compile the latest weak lensing data from ACS, Subaru, and CFHT, and attempt to fit a single profile, either NFW or Sersic, to both the observed weak and strong lensing. I confirm the finding of most previous authors that no single profile fits extremely well to both simultaneously. Slight deviations are revealed, with the best fits slightly over-predicting the mass profile at both large and small radius. Our easy-to-use software, called LensPerfect, will be made available soon. This research was supported by the European Commission Marie Curie International Reintegration Grant 017288-BPZ and the PNAYA grant AYA2005-09413-C02.
NASA Astrophysics Data System (ADS)
Sellitto, P.; Del Frate, F.
2014-07-01
Atmospheric temperature profiles are inferred from passive satellite instruments using thermal infrared or microwave observations. Here we investigate the feasibility of retrieving height-resolved temperature information in the ultraviolet spectral region. The temperature dependence of the absorption cross sections of ozone in the Huggins band, in particular in the interval 320-325 nm, is exploited. We carried out a sensitivity analysis and demonstrated that non-negligible information on the temperature profile can be extracted from this small band. Starting from these results, we developed a neural network inversion algorithm, trained and tested with simulated nadir EnviSat-SCIAMACHY ultraviolet observations. The algorithm is able to retrieve the temperature profile with root mean square errors and biases comparable to existing retrieval schemes that use thermal infrared or microwave observations. This demonstrates, for the first time, the feasibility of temperature profile retrieval from space-borne instruments operating in the ultraviolet.
Mass Profile of Abell 2204 An X-Ray Analysis of Abell 2204 using XMM-Newton Data
Lau, Travis
2003-09-05
The vast majority of the matter in the universe is of an unknown type, called dark matter by astronomers. Dark matter manifests itself only through gravitational interaction and is otherwise undetectable. The distribution of this matter can be better understood by studying the mass profiles of galaxy clusters. The X-ray emission of the galaxy cluster Abell 2204 was analyzed using archived data from the XMM-Newton space telescope. We analyze a 40 ks observation of Abell 2204 and present a radial temperature profile and a radial mass profile based on hydrostatic equilibrium calculations.
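A hydrostatic-equilibrium mass profile of the kind described above follows from M(<r) = -(k_B T r / (G mu m_p)) * (dln rho/dln r + dln T/dln r); a minimal sketch, with the mean molecular weight mu = 0.6 an assumed value and the profile interface hypothetical:

```python
import numpy as np

# Physical constants (SI) and an assumed mean molecular weight
G = 6.674e-11     # m^3 kg^-1 s^-2
k_B = 1.381e-23   # J/K
m_p = 1.673e-27   # kg
mu = 0.6          # mean molecular weight (assumption)

def hydrostatic_mass(r, rho, T):
    """Enclosed mass M(<r) from gas density and temperature profiles,
    assuming spherical symmetry and hydrostatic equilibrium.
    r in m, rho in kg/m^3 (any normalization; only the slope enters), T in K."""
    dlnrho = np.gradient(np.log(rho), np.log(r))  # logarithmic density slope
    dlnT = np.gradient(np.log(T), np.log(r))      # logarithmic temperature slope
    return -(k_B * T * r) / (G * mu * m_p) * (dlnrho + dlnT)
```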
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Sidky, Emil Y.
2015-03-01
Photon-counting detectors with pulse-height analysis have shown promise for improved spectral CT imaging. This study investigated a novel spectral CT reconstruction method that directly estimates basis-material images from the measured energy-bin data (i.e., 'one-step' reconstruction). The proposed algorithm can incorporate constraints to stabilize the reconstruction and potentially reduce noise. The algorithm minimizes the error between the measured energy-bin data and the data estimated from the reconstructed basis images. A total variation (TV) constraint was also investigated for additional noise reduction. The proposed one-step algorithm was applied to simulated data of an anthropomorphic phantom with heterogeneous tissue composition. Reconstructed water, bone, and gadolinium basis images were compared for the proposed one-step algorithm and the conventional 'two-step' method of decomposition followed by reconstruction. The unconstrained algorithm provided a 30% to 60% reduction in noise standard deviation compared to the two-step algorithm. The fTV = 0.8 constraint provided a small reduction in noise (˜1%) compared to the unconstrained reconstruction. Images reconstructed with the fTV = 0.5 constraint demonstrated 77% to 94% standard deviation reduction compared to the two-step reconstruction, however with increased blurring. There were no significant differences in the mean values reconstructed by the investigated algorithms. Overall, the proposed one-step spectral CT reconstruction algorithm provided three-material-decomposition basis images with reduced noise compared to the conventional two-step approach. When using a moderate TV constraint factor (fTV = 0.8), a 30%-60% reduction in noise standard deviation was achieved while preserving the edge profile for this simulated phantom.
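The TV constraint above bounds image roughness relative to an unconstrained reference; a minimal sketch of the quantity being constrained (the isotropic discrete TV and the fTV check are illustrative, not the paper's solver):

```python
import numpy as np

def total_variation(img):
    """Isotropic discrete total variation of a 2-D image: the sum of
    gradient magnitudes over interior pixels."""
    gx = np.diff(img, axis=1)  # horizontal finite differences
    gy = np.diff(img, axis=0)  # vertical finite differences
    return np.sqrt(gx[:-1, :]**2 + gy[:, :-1]**2).sum()

def tv_constraint_ok(img, reference, f_tv):
    """True if img satisfies TV(img) <= f_tv * TV(reference),
    mirroring the role of the fTV factor in the abstract."""
    return total_variation(img) <= f_tv * total_variation(reference)
```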
Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping
2005-08-05
A fast simulation scheme for 3D curved binder flanging and blank shape prediction of sheet metal based on a one-step inverse finite element method is proposed, in which the total plasticity theory and the proportional loading assumption are used. The scheme can be used to simulate 3D flanging with a complex curved binder shape and is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods, such as the analytic algorithm and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines while simulating the flanging process, so the time needed to predict the flanging lines is greatly reduced. Two typical 3D curved binder flanging cases, involving stretch and shrink characteristics, are simulated simultaneously using the present scheme and an incremental non-inverse FE algorithm based on incremental plasticity theory, which shows the validity and high efficiency of the present scheme.
NASA Astrophysics Data System (ADS)
Voznyuk, I.; Litman, A.; Tortel, H.
2015-08-01
A quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted for handling large-scale electromagnetic problems while keeping the memory requirement and the time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the same spirit as domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping sub-domains in order to simultaneously solve local sub-problems. Various strategies are proposed to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem, and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is confronted with measurements extracted from the 3D Fresnel database.
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in fortran90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies by proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into 3 main steps: first, a symbolic analysis step performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization and estimates the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute in parallel the gradient of the cost function. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor
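The factor-once/solve-for-many-sources pattern described for MUMPS can be illustrated with SciPy's sparse LU on a toy 1D frequency-domain Helmholtz problem. The grid, frequency and damping values below are assumptions for illustration, not the paper's setup, and SciPy's serial SuperLU stands in for the parallel MUMPS factorization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D Helmholtz operator (d^2/dx^2 + omega^2/c^2) with second-order FD
n, dx = 200, 10.0                  # grid points, spacing (m)
c = np.full(n, 1500.0)             # velocity model (m/s), homogeneous here
omega = 2 * np.pi * 5.0            # 5 Hz
k2 = (omega / c) ** 2

main = -2.0 / dx**2 + k2 + 1j * 0.05 * k2   # small damping for stability
off = np.ones(n - 1) / dx**2
H = sp.diags([off, main, off], [-1, 0, 1], format="csc")

# Factor once, then reuse the LU factors for every source,
# as done with MUMPS for the multi-source resolutions.
lu = spla.splu(H)
sources = [20, 100, 180]           # source grid indices
wavefields = []
for isrc in sources:
    rhs = np.zeros(n, dtype=complex)
    rhs[isrc] = 1.0
    wavefields.append(lu.solve(rhs))
print(f"{len(wavefields)} wavefields from one factorization")
```

The expensive step (the factorization) is amortized over all shots, which is exactly why a direct solver pays off for multi-source full-waveform modeling.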
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of a tsunami and/or earthquake and can solve both the direct and the inverse problem. It becomes possible to draw on advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimal computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
Retrieval Performance and Indexing Differences in ABELL and MLAIB
ERIC Educational Resources Information Center
Graziano, Vince
2012-01-01
Searches for 117 British authors are compared in the Annual Bibliography of English Language and Literature (ABELL) and the Modern Language Association International Bibliography (MLAIB). Authors are organized by period and genre within the early modern era. The number of records for each author was subdivided by format, language of publication,…
Wang, Jing; Guan, Huaiqun; Solberg, Timothy
2011-07-15
Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
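The PWLS objective itself is easy to illustrate in 1D. The sketch below is a generic penalized weighted least-squares smoother with a first-difference roughness penalty, solved in closed form via the normal equations; the toy signal, uniform weights and penalty value are assumptions (the paper works with CBCT projections and derives the penalty parameter from a high-mAs scan).

```python
import numpy as np

def pwls_smooth(y, weights, beta):
    """Penalized weighted least squares:
    minimise sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_{i+1} - x_i)^2,
    solved directly via the normal equations (W + beta D^T D) x = W y."""
    n = len(y)
    D = np.zeros((n - 1, n))          # first-difference operator
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = np.diag(weights) + beta * D.T @ D
    return np.linalg.solve(A, weights * y)

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, np.pi, 100))
noisy = truth + 0.1 * rng.standard_normal(100)
w = np.ones(100)       # in CT the weights would follow projection variance
smoothed = pwls_smooth(noisy, w, beta=20.0)
print(f"error std before: {np.std(noisy - truth):.3f}, "
      f"after: {np.std(smoothed - truth):.3f}")
```

The penalty parameter beta controls the noise/resolution trade-off; the paper's contribution is choosing it objectively rather than by trial and error.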
Fox, Andrew; Williams, Mathew; Richardson, Andrew D.; Cameron, David; Gove, Jeffrey H.; Quaife, Tristan; Ricciuto, Daniel M; Reichstein, Markus; Tomelleri, Enrico; Trudinger, Cathy; Van Wijk, Mark T.
2009-10-01
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and to generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover or the temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 g C m-2 year-1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence
Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D
2016-05-01
An efficient inverse problem approach for parameter estimation, state and structure identification from dynamic data by embedding training functions in a genetic algorithm methodology (ETFGA) is proposed for nonlinear dynamical biosystems using S-system canonical models. The use of multiple shooting and a decomposition approach as training functions is shown to handle noisy datasets and to improve computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying it for three biochemical model systems of interest. By studying a small-scale gene regulatory system described by an S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA in comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells, now assuming limited availability of noisy data. Here, the flexibility of the approach to incorporate partial system information in the identification process is shown, and its effect on the accuracy and predictive ability of the estimated model is studied. The third example studies the phenomenological toy model of the regulation of circadian oscillations in Drosophila, which follows rate laws different from the S-system power law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities. PMID:26968929
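The S-system canonical form referred to above is a pair of power-law products per state variable and can be simulated directly. The two-gene network below is hypothetical, chosen only to show the structure dx_i/dt = alpha_i * prod_j x_j^g_ij - beta_i * prod_j x_j^h_ij; all parameter values are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

def s_system(t, x, alpha, g, beta, h):
    """S-system canonical form:
    dx_i/dt = alpha_i * prod_j x_j^g_ij - beta_i * prod_j x_j^h_ij."""
    x = np.maximum(x, 1e-9)              # keep the power laws defined
    prod_g = np.prod(x ** g, axis=1)
    prod_h = np.prod(x ** h, axis=1)
    return alpha * prod_g - beta * prod_h

# Hypothetical 2-gene loop: x2 represses x1, x1 activates x2
alpha = np.array([2.0, 1.5]); beta = np.array([1.0, 1.0])
g = np.array([[0.0, -0.5],               # x1 production repressed by x2
              [0.8,  0.0]])              # x2 production activated by x1
h = np.array([[1.0, 0.0],                # first-order degradation
              [0.0, 1.0]])
sol = solve_ivp(s_system, (0, 20), [0.1, 0.1],
                args=(alpha, g, beta, h), dense_output=True)
print("steady state ~", np.round(sol.y[:, -1], 2))
```

Fitting the exponents g and h and the rate constants to noisy time series is the inverse problem the ETFGA methodology addresses.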
AVO inversion based on inverse operator estimation in trust region
NASA Astrophysics Data System (ADS)
Yin, Xing-Yao; Deng, Wei; Zong, Zhao-Yun
2016-04-01
Amplitude variation with offset (AVO) inversion is widely utilized in exploration geophysics, especially for reservoir prediction and fluid identification. Inverse operator estimation in the trust region algorithm is applied to solve AVO inversion problems, in which optimization and inversion are directly integrated. An L1 norm constraint is considered on the basis of a reasonable initial model in order to improve efficiency and stability during the AVO inversion process. In this study, a high-order Zoeppritz approximation is utilized to establish the inversion objective function, in which the variation of v_p/v_s with time is taken into consideration. A model test indicates that the algorithm has relatively higher stability and accuracy than the damped least-squares algorithm. Seismic data inversion is feasible, and the inversion values of the three parameters (v_p, v_s, ρ) maintain good consistency with logging curves.
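As a simpler stand-in for the high-order Zoeppritz approximation used in the paper, the classical three-term Aki-Richards linearisation shows the structure of the AVO forward model that such inversions fit. The interface properties below are invented for illustration.

```python
import numpy as np

def aki_richards(theta, vp, vs, rho, dvp, dvs, drho):
    """Three-term Aki-Richards linearised P-wave reflectivity:
    R(t) = 0.5*(1 - 4 k sin^2 t) * drho/rho
         + dvp/vp / (2 cos^2 t)
         - 4 k sin^2 t * dvs/vs,   with k = (vs/vp)^2."""
    s2 = np.sin(theta) ** 2
    k = (vs / vp) ** 2
    return (0.5 * (1 - 4 * k * s2) * drho / rho
            + dvp / vp / (2 * np.cos(theta) ** 2)
            - 4 * k * s2 * dvs / vs)

# Hypothetical interface: background vp/vs/rho and small contrasts
theta = np.radians(np.arange(0.0, 41.0, 5.0))
refl = aki_richards(theta, vp=3000.0, vs=1500.0, rho=2400.0,
                    dvp=150.0, dvs=100.0, drho=60.0)
print("R(0) =", round(float(refl[0]), 4))
```

AVO inversion runs this relation in reverse: given R(theta) picked from prestack gathers, estimate the contrasts in v_p, v_s and ρ at each interface.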
NASA Technical Reports Server (NTRS)
Smith, G. L.; Green, R. N.; Avis, L. M.; Suttles, J. T.; Wielicki, B. A.; Raschke, E.; Davies, R.
1986-01-01
The Earth Radiation Budget Experiment carries a three-channel scanning radiometer and a set of nadir-looking wide and medium field-of-view instruments for measuring the radiation emitted from earth and the solar radiation reflected from earth. This paper describes the algorithms which are used to compute the radiant exitances at a reference level ('top of the atmosphere') from these measurements. Methods used to analyze data from previous radiation budget experiments are reviewed, and the rationale for the present algorithms is developed. The scanner data are converted to radiances by use of spectral factors, which account for imperfect spectral response of the optics. These radiances are converted to radiant exitances at the reference level by use of directional models, which account for anisotropy of the radiation as it leaves the earth. The spectral factors and directional models are selected on the basis of the scene, which is identified on the basis of the location and the long-wave and shortwave radiances. These individual results are averaged over 2.5 x 2.5 deg regions. Data from the wide and medium field-of-view instruments are analyzed by use of the traditional shape factor method and also by use of a numerical filter, which permits resolution enhancement along the orbit track.
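Per scene, the radiance-to-exitance conversion described above reduces to dividing out the directional-model anisotropic factor. A minimal sketch with hypothetical numbers (the radiance value and factor are invented; real ERBE processing selects the factor from scene identification):

```python
import math

def radiant_exitance(radiance, anis_factor):
    """Reference-level radiant exitance from a measured radiance:
    M = pi * L / R, where R is the directional-model (anisotropic)
    factor for the identified scene; R = 1 for a Lambertian scene."""
    return math.pi * radiance / anis_factor

# Hypothetical: shortwave radiance 80 W m^-2 sr^-1, scene factor R = 0.9
M = radiant_exitance(80.0, 0.9)
print(f"radiant exitance ~ {M:.1f} W/m^2")
```

A Lambertian scene (R = 1) would give simply pi times the radiance; the directional models correct for the anisotropy of the reflected and emitted radiation.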
Lin, Lin; Yang, Chao; Lu, Jiangfeng; Ying, Lexing; E, Weinan
2009-09-25
We present an efficient parallel algorithm and its implementation for computing the diagonal of $H^{-1}$, where $H$ is a 2D Kohn-Sham Hamiltonian discretized on a rectangular domain using a standard second-order finite difference scheme. This type of calculation can be used to obtain an accurate approximation to the diagonal of a Fermi-Dirac function of $H$ through a recently developed pole-expansion technique. The diagonal elements are needed in electronic structure calculations for quantum mechanical systems. We show how an elimination tree is used to organize the parallel computation and how synchronization overhead is reduced by passing data level by level along this tree using the technique of local buffers and relative indices. We analyze the performance of our implementation by examining its load balance and communication overhead. We show that our implementation exhibits excellent weak scaling on a large-scale high-performance distributed parallel machine. When compared with the standard approach for evaluating the diagonal of a Fermi-Dirac function of a Kohn-Sham Hamiltonian associated with a 2D electron quantum dot, the new pole-expansion technique that uses our algorithm to compute the diagonal of $(H - z_i I)^{-1}$ for a small number of poles $z_i$ is much faster, especially when the quantum dot contains many electrons.
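As a naive baseline for what the paper's selected-inversion algorithm computes, the diagonal of H^{-1} can be obtained by n sparse LU solves against unit vectors. This is far slower than extracting the diagonal directly from the LU factors, as the paper does, but it is useful for checking small cases; the 1D Laplacian below is a toy stand-in for a Kohn-Sham Hamiltonian.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy symmetric operator: 1D discrete Laplacian (stand-in for H)
n = 100
H = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")

# Naive approach: one solve H x = e_i per diagonal entry.
# Selected inversion obtains the same entries from the LU factors
# without these n solves.
lu = spla.splu(H)
diag_inv = np.array([lu.solve(np.eye(n)[:, i])[i] for i in range(n)])

# Cross-check against the dense inverse (only feasible for small n)
dense = np.linalg.inv(H.toarray())
print(f"max deviation: {np.abs(diag_inv - np.diag(dense)).max():.2e}")
```

For a pole expansion, the same factor-and-extract step would be repeated once per pole z_i with H shifted to H - z_i I.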
X-Ray Imaging-Spectroscopy of Abell 1835
NASA Technical Reports Server (NTRS)
Peterson, J. R.; Paerels, F. B. S.; Kaastra, J. S.; Arnaud, M.; Reiprich T. H.; Fabian, A. C.; Mushotzky, R. F.; Jernigan, J. G.; Sakelliou, I.
2000-01-01
We present detailed spatially resolved spectroscopy results of the observation of Abell 1835 using the European Photon Imaging Cameras (EPIC) and the Reflection Grating Spectrometers (RGS) on the XMM-Newton observatory. Abell 1835 is a luminous (10^46 erg/s), medium-redshift (z = 0.2523), X-ray emitting cluster of galaxies. The observations support the interpretation that large amounts of cool gas are present in a multi-phase medium surrounded by a hot (kT_e = 8.2 keV) outer envelope. We detect O VIII Lyα and two Fe XXIV complexes in the RGS spectrum. The emission measure of the cool gas below kT_e = 2.7 keV is much lower than expected from standard cooling-flow models, suggesting either a more complicated cooling process than simple isobaric radiative cooling or differential cold absorption of the cooler gas.
NASA Astrophysics Data System (ADS)
Ganapol, B. D.; Furfaro, R.; Johnson, L. F.; Herwitz, S. R.
2003-12-01
Over the past two years, NASA has had great interest in exploring the economic potential of deploying UAVs (Unmanned Aerial Vehicles) as long-duration platforms equipped with high resolution imaging systems for commercial agricultural applications. In October 2002, a team in the Ecosystem Science and Technology Branch at NASA/Ames Research Center prepared and successfully flew a UAV, equipped with off-the-shelf camera systems, over coffee plantations at Kauai (Hawaii). The idea is to help growers find the best possible harvesting strategy. The most important information that needs to be conveyed to the growers is the percentage of ripe, unripe and overripe cherries in the field. It is of vital importance to devise a robust and reliable "intelligent" algorithm capable of predicting the amount of ripe cherries present in any digital image coming from the onboard cameras. During the campaign, the two UAV camera systems produced digital images that contain information about the down-looking plantation field. These images need to be processed to extract information concerning the percentage of ripe (yellow) cherries. To date, no robust automated algorithm has been developed to perform this task. Currently, every image is viewed by human eyes on a case-by-case basis. We propose a neural network algorithm that can automate the process in an intelligent way. Biologically inspired neural networks are made of elements called "neurons" that can simulate brain activity during a learning process. The idea is to design an appropriate neural network that learns the relation between the reflectance coming from an image and the percentage of cherries present in a coffee field. We envision a situation in which reflectance from digital images at different wavebands is processed by a trained neural network and the percentages of the different cherries estimated. The key factor is training the network to recognize the reflectance/cherry percentage relation. Over the past few
The GenABEL Project for statistical genomics
Karssen, Lennart C.; van Duijn, Cornelia M.; Aulchenko, Yurii S.
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the “core team”, facilitating agile statistical omics methodology development and fast dissemination. PMID:27347381
NASA Astrophysics Data System (ADS)
Bezada, Maximiliano J.; Zelt, Colin A.
2011-05-01
Crustal density models derived from seismic velocity models by means of velocity-density conversions typically reproduce the main features of the observed gravity anomaly over the area but often show significant misfits. Given the uncertainty in the relationship between velocity and density, seismically derived density models should be regarded as an initial estimate of the true subsurface density structure. In this paper, we present a method for estimating the adjustments necessary to a seismically derived density model to improve the fit to gravity data. The method combines the Genetic Algorithm paradigm with linear inversion as a way to approach the non-linear and linear aspects of the problem. The models are divided into three layers representing the sedimentary column, the crystalline crust and the lithospheric mantle; the depths of these layers are determined from the seismic velocity model. Each of the layers is divided into a number of provinces and a density adjustment (Δρ) value is found for each province so that the residual gravity (difference between the observed gravity anomaly and the anomaly calculated for the seismically derived model) is minimized while keeping Δρ between predefined bounds. The preferred position of the province boundaries is found through the artificial evolution of a population of solutions. Given the stochastic nature of the algorithm and the non-uniqueness of the problem, different realizations can yield different solutions. By performing multiple realizations we can analyse a set of solutions by taking their mean and standard deviation, providing not only an estimate of the Δρ distribution in the subsurface but also an estimate of the associated uncertainty. Synthetic tests prove the ability of the algorithm to accurately recover the location of province boundaries and the Δρ values for a known model when using noise-free synthetic data. When noise is added to the data, the algorithm broadly recovers the features that
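For a fixed set of province boundaries, the linear-inversion step of the hybrid scheme amounts to a bounded least-squares problem for the density adjustments. The sketch below uses a synthetic kernel matrix as a stand-in (a real kernel would come from gravity forward modelling of the layered prisms); all values are invented for illustration.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Linear step of the hybrid GA + linear-inversion scheme: given fixed
# province boundaries, find bounded density adjustments that minimise
# the residual gravity.
rng = np.random.default_rng(2)
n_obs, n_prov = 50, 4
# Synthetic kernel: gravity response of each province at each station
# (units arbitrary; a real kernel comes from forward modelling).
K = np.abs(rng.normal(0.02, 0.01, size=(n_obs, n_prov)))
true_drho = np.array([30.0, -80.0, 120.0, -10.0])     # kg/m^3
residual_gravity = K @ true_drho + 0.05 * rng.standard_normal(n_obs)

# Keep each adjustment within predefined bounds, as the method requires
fit = lsq_linear(K, residual_gravity, bounds=(-150.0, 150.0))
print("recovered d(rho):", np.round(fit.x, 1))
```

In the full algorithm, the genetic algorithm evolves the province boundaries while this bounded linear solve scores each candidate layout.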
A wide-field spectroscopic survey of Abell 1689 and Abell 1835 with VIMOS
NASA Astrophysics Data System (ADS)
Czoske, Oliver
2004-12-01
Spectroscopic surveys can add a third dimension, velocity, to the galaxy distribution in and around clusters. The largest wide-field spectroscopic samples at present exist for nearby clusters. Czoske et al. (2001: A&A 372, 391; 2002: A&A 386, 31) present a catalogue of redshifts for 300 cluster members with V < 22 in Cl0024+1654 at z = 0.395, the largest cluster redshift catalogue currently available at such a high redshift. In that case, it was only the redshift information extending to large cluster-centric distances which revealed the complex structure of what appeared in other observations to be a relaxed rich cluster. The recent advent of high-multiplex spectrographs on 8-10 metre class telescopes has made it possible to obtain large numbers of high-quality spectra of galaxies in and around clusters of galaxies in a short amount of time. The data described by Czoske et al. (2001) were obtained over the course of four years. Samples larger by a factor of 2-3 can now be obtained in ~10 hours of observation time. Here I present the first results from a spectroscopic survey of the two X-ray luminous clusters Abell 1689 (z = 0.185) and Abell 1835 (z = 0.25). We use the VIsible imaging Multi-Object Spectrograph (VIMOS) on VLT UT3/Melipal. The field of view of VIMOS available for spectroscopy consists of four quadrants of ~7' × 7'; the separation between the quadrants is ~2'. Using the LR-Blue grism, one can place ~100-150 slits per quadrant. The resulting spectra cover the wavelength range 3700-6700 Å with a resolution R ~ 200. As the basis for object selection we use panoramic multi-colour images obtained with the CFH12k camera on CFHT (Czoske, 2002, PhD thesis), covering 40' × 30' in BRI for A1689 and VRI for A1835. The input catalogue has been cleaned of stars. We attempted to cover the entire CFH12k field of view by using 10 VIMOS pointings for each cluster. Due to technical problems with VIMOS only 8 and 9 masks
RADIO AND DEEP CHANDRA OBSERVATIONS OF THE DISTURBED COOL CORE CLUSTER ABELL 133
Randall, S. W.; Nulsen, P. E. J.; Forman, W. R.; Murray, S. S.; Clarke, T. E.; Owers, M. S.; Sarazin, C. L.
2010-10-10
We present results based on new Chandra and multi-frequency radio observations of the disturbed cool core cluster Abell 133. The diffuse gas has a complex bird-like morphology, with a plume of emission extending from two symmetric wing-like features. The plume is capped with a filamentary radio structure that has been previously classified as a radio relic. X-ray spectral fits in the region of the relic indicate the presence of either high-temperature gas or non-thermal emission, although the measured photon index is flatter than would be expected if the non-thermal emission is from inverse Compton scattering of the cosmic microwave background by the radio-emitting particles. We find evidence for a weak elliptical X-ray surface brightness edge surrounding the core, which we show is consistent with a sloshing cold front. The plume is consistent with having formed due to uplift by a buoyantly rising radio bubble, now seen as the radio relic, and has properties consistent with buoyantly lifted plumes seen in other systems (e.g., M87). Alternatively, the plume may be a gas sloshing spiral viewed edge-on. Results from spectral analysis of the wing-like features are inconsistent with the previous suggestion that the wings formed due to the passage of a weak shock through the cool core. We instead conclude that the wings are due to X-ray cavities formed by displacement of X-ray gas by the radio relic. The central cD galaxy contains two small-scale cold gas clumps that are slightly offset from their optical and UV counterparts, suggestive of a galaxy-galaxy merger event. On larger scales, there is evidence for cluster substructure in both optical observations and the X-ray temperature map. We suggest that the Abell 133 cluster has recently undergone a merger event with an interloping subgroup, initiating gas sloshing in the core. The torus of sloshed gas is seen close to edge-on, leading to the somewhat ragged appearance of the elliptical surface brightness edge. We show
NASA Astrophysics Data System (ADS)
Edwards, L. O. V.; Alpert, H. S.; Trierweiler, I. L.; Abraham, T.; Beizer, V. G.
2016-09-01
We present the first results from an integral field unit (IFU) spectroscopic survey of a ˜75 kpc region around three brightest cluster galaxies (BCGs), combining over 100 IFU fibres to study the intracluster light (ICL). We fit population synthesis models to estimate age and metallicity. For Abell 85 and Abell 2457, the ICL is best-fit with a fraction of old, metal-rich stars like in the BCG, but requires 30-50 per cent young and metal-poor stars, a component not found in the BCGs. This is consistent with the ICL having been formed by a combination of interactions with less massive, younger, more metal-poor cluster members in addition to stars that form the BCG. We find that the three galaxies are in different stages of evolution and may be the result of different formation mechanisms. The BCG in Abell 85 is near a relatively young, metal-poor galaxy, but the dynamical friction time-scale is long and the two are unlikely to be undergoing a merger. The outer regions of Abell 2457 show a higher relative fraction of metal-poor stars, and we find one companion, with a higher fraction of young, metal-poor stars than the BCG, which is likely to merge within a gigayear. Several luminous red galaxies are found at the centre of the cluster IIZw108, with short merger time-scales, suggesting that the system is about to embark on a series of major mergers to build up a dominant BCG. The young, metal-poor component found in the ICL is not found in the merging galaxies.
NASA Astrophysics Data System (ADS)
Vanderstraeten, Barbara; De Gersem, Werner; Duthoy, Wim; De Neve, Wilfried; Thierens, Hubert
2006-08-01
The development of new biological imaging technologies offers the opportunity to further individualize radiotherapy. Biologically conformal radiation therapy (BCRT) implies the use of the spatial distribution of one or more radiobiological parameters to guide the IMRT dose prescription. Our aim was to implement BCRT in an algorithmic segmentation-based planning approach. A biology-based segmentation tool was developed to generate initial beam segments that reflect the biological signal intensity pattern. The weights and shapes of the initial segments are optimized by means of an objective function that minimizes the root mean square deviation between the actual and intended dose values within the PTV. As proof of principle, [18F]FDG-PET-guided BCRT plans for two different levels of dose escalation were created for an oropharyngeal cancer patient. Both plans proved to be dosimetrically feasible without violating the planning constraints for the expanded spinal cord and the contralateral parotid gland as organs at risk. The obtained biological conformity was better for the first (2.5 Gy per fraction) than for the second (3 Gy per fraction) dose escalation level.
Internal dynamics of Abell 1240: a galaxy cluster with symmetric double radio relics
NASA Astrophysics Data System (ADS)
Barrena, R.; Girardi, M.; Boschin, W.; Dasí, M.
2009-08-01
Context: The mechanisms giving rise to diffuse radio emission in galaxy clusters, and in particular their connection with cluster mergers, are still debated. Aims: We aim to obtain new insights into the internal dynamics of the cluster Abell 1240, which appears to contain two roughly symmetric radio relics, separated by ~2 h_70-1 Mpc. Methods: Our analysis is based mainly on redshift data for 145 galaxies mostly acquired at the Telescopio Nazionale Galileo and on new photometric data acquired at the Isaac Newton Telescope. We also use X-ray data from the Chandra archive and photometric data from the Sloan Digital Sky Survey (Data Release 7). We combine galaxy velocities and positions to select 89 cluster galaxies and analyze the internal dynamics of the Abell 1237 + Abell 1240 cluster complex, Abell 1237 being a close companion of Abell 1240 in its southern direction. Results: We estimate similar redshifts for Abell 1237 and Abell 1240, < z > = 0.1935 and < z > = 0.1948, respectively. For Abell 1237, we estimate a line-of-sight (LOS) velocity dispersion of σV ~ 740 km s-1 and a mass of M ~ 6 × 1014 h_70-1 M⊙. For Abell 1240, we estimate a LOS σV ~ 870 km s-1 and a mass in the range M ~ 0.9-1.9 × 1015 h_70-1 M⊙, which takes account of its complex dynamics. Abell 1240 is shown to have a bimodal structure with two galaxy clumps roughly aligned along its N-S direction, the same as defined by the elongation of its X-ray surface brightness and the axis of symmetry of the relics. The two brightest galaxies of Abell 1240, associated with the northern and southern clumps, are separated by a LOS rest-frame velocity difference Vrf ~ 400 km s-1 and a projected distance D ~ 1.2 h_70-1 Mpc. The two-body model agrees with the hypothesis that we are looking at a cluster merger that occurred largely in the plane of the sky, the two galaxy clumps being separated by a rest-frame velocity difference Vrf ~ 2000 km s-1 at a time of 0.3 Gyr after the core crossing, while Abell 1237
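The LOS rest-frame velocity differences quoted above follow from the standard relation Vrf = c Δz / (1 + z); a minimal Python sketch (the redshift values below are illustrative placeholders, not the paper's measured ones):

```python
C_KMS = 299792.458  # speed of light in km/s

def rest_frame_velocity_diff(z1, z2):
    """LOS rest-frame velocity difference between two redshifts:
    Vrf = c * (z1 - z2) / (1 + z_mean)."""
    z_mean = 0.5 * (z1 + z2)
    return C_KMS * (z1 - z2) / (1.0 + z_mean)

# Two clumps separated by dz = 0.0016 near z ~ 0.195 (illustrative values)
dv = rest_frame_velocity_diff(0.1956, 0.1940)  # roughly 400 km/s
```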
NASA Astrophysics Data System (ADS)
Fiorucci, I.; Muscari, G.; de Zafra, R. L.
2011-07-01
The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20 % from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15 % or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles, obtained employing the constrained matrix inversion method, show that
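The sensitivity diagnostic described above is a simple reduction of the averaging-kernel matrix; a hedged sketch of one common optimal-estimation convention (not the authors' code — here the sensitivity at each retrieval level is the sum of that level's averaging-kernel elements, near 1 when the retrieval is driven by the measurement rather than the a priori):

```python
import numpy as np

def oe_sensitivity(A):
    """Sensitivity of an optimal-estimation retrieval at each level,
    computed as the sum of the averaging-kernel elements for that level."""
    return np.asarray(A).sum(axis=1)

# A perfect retrieval (identity averaging kernel) has sensitivity 1 everywhere
A_ideal = np.eye(4)
```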
Two-dimensional charged particle image inversion using a polar basis function expansion
Garcia, Gustavo A.; Nahon, Laurent; Powis, Ivan
2004-11-01
We present an inversion method called pBasex aimed at reconstructing the original Newton sphere of expanding charged particles from its two-dimensional projection by fitting a set of basis functions with a known inverse Abel integral. The basis functions have been adapted to the polar symmetry of the photoionization process to optimize the energy and angular resolution while minimizing the CPU time and the response to the Cartesian noise that could be given by the detection system. The method presented here only applies to systems with a unique axis of symmetry, although it can be adapted to overcome this restriction. It has been tested on both simulated and experimental noisy images and compared to the Fourier-Hankel algorithm and the original Cartesian basis set used by Dribinski et al. [Rev. Sci. Instrum. 73, 2634 (2002)], and appears to give a better performance where odd Legendre polynomials are involved, while in images where only even terms are present the method has been shown to be faster and simpler without compromising its accuracy.
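pBasex itself fits polar basis functions whose Abel integrals are known analytically; as a hedged illustration of the underlying geometry only (not the pBasex algorithm), here is a simple onion-peeling discretization of the forward Abel transform and its inversion:

```python
import numpy as np

def abel_forward_matrix(n, dr=1.0):
    """Discrete forward Abel transform on n radial shells (onion-peeling
    geometry): projection P[i] = sum_j A[i, j] * f[j], where A[i, j] is the
    chord length of line-of-sight i through spherical shell j."""
    A = np.zeros((n, n))
    r = np.arange(n + 1) * dr  # shell boundaries
    for i in range(n):         # impact parameter y = r[i]
        for j in range(i, n):  # only shells with outer radius > y contribute
            A[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                             - np.sqrt(max(r[j]**2 - r[i]**2, 0.0)))
    return A

def abel_invert(projection, dr=1.0):
    """Recover the radial profile by solving the (upper-triangular)
    onion-peeling system."""
    A = abel_forward_matrix(len(projection), dr)
    return np.linalg.solve(A, projection)
```

A round-trip on a smooth synthetic profile (project, then invert) recovers the input to machine precision; real projected images add noise, which is where basis-set methods like pBasex outperform direct peeling.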
NASA Astrophysics Data System (ADS)
Ansari, Hamid Reza
2014-09-01
In this paper we propose a new method for predicting rock porosity based on a combination of several artificial intelligence systems. The method focuses on one of the Iranian carbonate fields in the Persian Gulf. Because there is strong heterogeneity in carbonate formations, estimation of rock properties is more challenging than in sandstone. For this purpose, seismic colored inversion (SCI) and a new approach of committee machine are used in order to improve porosity estimation. The study comprises three major steps. First, a series of sample-based attributes is calculated from the 3D seismic volume. Acoustic impedance is an important attribute that is obtained by the SCI method in this study. Second, the porosity log is predicted from seismic attributes using common intelligent computation systems including: probabilistic neural network (PNN), radial basis function network (RBFN), multi-layer feed forward network (MLFN), ε-support vector regression (ε-SVR) and adaptive neuro-fuzzy inference system (ANFIS). Finally, a power law committee machine (PLCM) is constructed based on the imperialist competitive algorithm (ICA) to combine the results of all previous predictions in a single solution. This technique is called PLCM-ICA in this paper. The results show that the PLCM-ICA model improved on the results of the neural networks, support vector machine and neuro-fuzzy system.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations is formulated for each spatial point as functions of the deformation velocities
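The least-squares core of such a time-series inversion can be sketched in a few lines. This toy version solves for per-interval velocities from a connected interferogram network only; the DEM-error term and finite-difference smoothing constraints the authors describe are omitted:

```python
import numpy as np

def invert_velocities(pairs, dates, dphi):
    """SBAS-style least-squares inversion: solve G v = dphi for the mean
    velocity v in each interval between consecutive acquisition dates.
    pairs : (i, j) index pairs into dates with i < j (one per interferogram)
    dates : acquisition times (e.g. decimal years)
    dphi  : unwrapped interferometric measurement for each pair"""
    dt = np.diff(dates)
    G = np.zeros((len(pairs), len(dt)))
    for k, (i, j) in enumerate(pairs):
        G[k, i:j] = dt[i:j]  # the pair spans intervals i .. j-1
    v, *_ = np.linalg.lstsq(G, np.asarray(dphi), rcond=None)
    return v  # cumulative deformation is then np.cumsum(v * dt)
```

With a temporally connected network G has full column rank and the solution is unique; disconnected subsets are what make the SVD (minimum-norm) treatment of Berardino et al. necessary.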
Internal dynamics of Abell 2294: a massive, likely merging cluster
NASA Astrophysics Data System (ADS)
Girardi, M.; Boschin, W.; Barrena, R.
2010-07-01
Context. The mechanisms giving rise to diffuse radio emission in galaxy clusters, and in particular their connection with cluster mergers, are still debated. Aims: We seek to explore the internal dynamics of the cluster Abell 2294, which has been shown to host a radio halo. Methods: Our analysis is mainly based on redshift data for 88 galaxies acquired at the Telescopio Nazionale Galileo. We combine galaxy velocities and positions to select 78 cluster galaxies and analyze its internal dynamics. We also use both photometric data acquired at the Isaac Newton Telescope and X-ray data from the Chandra archive. Results: We re-estimate the redshift of the large, brightest cluster galaxy (BCG), obtaining < z > = 0.1690, which closely agrees with the mean cluster redshift. We estimate a quite large line-of-sight (LOS) velocity dispersion σ_V ~ 1400 km s-1 and X-ray temperature TX ~ 10 keV. Our optical and X-ray analyses detect substructure. Our results imply that the cluster is composed of two massive subclusters separated by a LOS rest frame velocity difference Vrf ~ 2000 km s-1, very closely projected in the plane of sky along the SE-NW direction. This observational picture, interpreted in terms of the analytical two-body model, suggests that Abell 2294 is a cluster merger elongated mainly in the LOS direction and captured during the bound outgoing phase, a fraction of a Gyr after the core crossing. We find that Abell 2294 is a very massive cluster with a mass in the range M = 2-4 × 1015 h70-1 M⊙, depending on the adopted model. In contrast to previous findings, we find no evidence of Hα emission in the spectrum of the BCG galaxy. Conclusions: The emerging picture of Abell 2294 is that of a massive, quite “normal” merging cluster, like many clusters hosting diffuse radio sources. However, perhaps because of its particular geometry, more data are needed to reach a definitive, more quantitative conclusion.
Disentangling Structures in the Cluster of Galaxies Abell 133
NASA Technical Reports Server (NTRS)
Way, Michael J.; DeVincenzi, Donald (Technical Monitor)
2002-01-01
A dynamical analysis of the structure of the cluster of galaxies Abell 133 will be presented using multi-wavelength data combined from multiple space- and ground-based observations. New and familiar statistical clustering techniques are used in combination in an attempt to gain a fully consistent picture of this interesting nearby cluster of galaxies. The type of analysis presented should be typical of cluster studies in the future, especially those to come from surveys like the Sloan Digital Sky Survey and the 2dF.
Disentangling the ICL with the CHEFs: Abell 2744 as a Case Study
NASA Astrophysics Data System (ADS)
Jiménez-Teja, Y.; Dupke, R.
2016-03-01
Measurements of the intracluster light (ICL) are still prone to methodological ambiguities, and there are multiple techniques in the literature to address them, mostly based on the binding energy, the local density distribution, or the surface brightness. A common issue with these methods is the a priori assumption of a number of hypotheses on either the ICL morphology, its surface brightness level, or some properties of the brightest cluster galaxy (BCG). The discrepancy in the results is high, and numerical simulations just place a boundary on the ICL fraction in present-day galaxy clusters in the range 10%-50%. We developed a new algorithm based on the Chebyshev-Fourier functions to estimate the ICL fraction without relying on any a priori assumption about the physical or geometrical characteristics of the ICL. We are able not only to disentangle the ICL from the galactic luminosity but also to mark out the limits of the BCG from the ICL in a natural way. We test our technique with the recently released data of the cluster Abell 2744, observed by the Frontier Fields program. The complexity of this multiple merging cluster system and the formidable depth of these images make it a challenging test case to prove the efficiency of our algorithm. We found a final ICL fraction of 19.17 ± 2.87%, which is very consistent with numerical simulations.
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
The Noble-Abel Stiffened-Gas equation of state
NASA Astrophysics Data System (ADS)
Le Métayer, Olivier; Saurel, Richard
2016-04-01
Hyperbolic two-phase flow models have shown excellent ability for the resolution of a wide range of applications ranging from interfacial flows to fluid mixtures with several velocities. These models account for waves propagation (acoustic and convective) and consist in hyperbolic systems of partial differential equations. In this context, each phase is compressible and needs an appropriate convex equation of state (EOS). The EOS must be simple enough for intensive computations as well as boundary conditions treatment. It must also be accurate, this being challenging with respect to simplicity. In the present approach, each fluid is governed by a novel EOS named "Noble Abel stiffened gas," this formulation being a significant improvement of the popular "Stiffened Gas (SG)" EOS. It is a combination of the so-called "Noble-Abel" and "stiffened gas" equations of state that adds repulsive effects to the SG formulation. The determination of the various thermodynamic functions and associated coefficients is the aim of this article. We first use thermodynamic considerations to determine the different state functions such as the specific internal energy, enthalpy, and entropy. Then we propose to determine the associated coefficients for a liquid in the presence of its vapor. The EOS parameters are determined from experimental saturation curves. Some examples of liquid-vapor fluids are examined and associated parameters are computed with the help of the present method. Comparisons between analytical and experimental saturation curves show very good agreement for wide ranges of temperature for both liquid and vapor.
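The resulting pressure law is compact enough to state directly. A sketch of the standard NASG closed forms (covolume b, reference pressure pinf, and heat-bond constant q; the parameter values in any real application come from the saturation-curve fits described above, and the placeholder inputs below are not the paper's fitted coefficients):

```python
def nasg_pressure(rho, e, gamma, pinf, b, q):
    """Noble-Abel stiffened-gas EOS: pressure from density and specific
    internal energy,
        p = (gamma - 1) * (e - q) / (1/rho - b) - gamma * pinf.
    Setting b = 0 recovers the stiffened gas EOS; b = pinf = q = 0
    recovers the ideal gas."""
    v = 1.0 / rho  # specific volume
    return (gamma - 1.0) * (e - q) / (v - b) - gamma * pinf

def nasg_sound_speed_sq(rho, p, gamma, pinf, b):
    """Squared speed of sound: c^2 = gamma * (p + pinf) * v^2 / (v - b)."""
    v = 1.0 / rho
    return gamma * (p + pinf) * v * v / (v - b)
```

The b = pinf = q = 0 limit is a convenient sanity check: it must reproduce p = (gamma - 1) * rho * e.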
Reconstructing the projected gravitational potential of Abell 1689 from X-ray measurements
NASA Astrophysics Data System (ADS)
Tchernin, Céline; Majer, Charles L.; Meyer, Sven; Sarli, Eleonora; Eckert, Dominique; Bartelmann, Matthias
2015-02-01
Context. Galaxy clusters can be used as cosmological probes, but to this end, they need to be thoroughly understood. Combining all cluster observables in a consistent way will help us to understand their global properties and their internal structure. Aims: We provide proof of the concept that the projected gravitational potential of galaxy clusters can directly be reconstructed from X-ray observations. We also show that this joint analysis can be used to locally test the validity of the equilibrium assumptions in galaxy clusters. Methods: We used a newly developed reconstruction method, based on Richardson-Lucy deprojection, that allows reconstructing projected gravitational potentials of galaxy clusters directly from X-ray observations. We applied this algorithm to the well-studied cluster Abell 1689 and compared the gravitational potential reconstructed from X-ray observables to the potential obtained from gravitational lensing measurements. We also compared the X-ray deprojected profiles obtained by the Richardson-Lucy deprojection algorithm with the findings from the more conventional onion-peeling technique. Results: Assuming spherical symmetry and hydrostatic equilibrium, the potentials recovered from gravitational lensing and from X-ray emission agree very well beyond 500 kpc. Owing to the fact that the Richardson-Lucy deprojection algorithm allows deprojecting each line of sight independently, this result may indicate that non-gravitational effects and/or asphericity are strong in the central regions of the clusters. Conclusions: We demonstrate the robustness of the potential reconstruction method based on the Richardson-Lucy deprojection algorithm and show that gravitational lensing and X-ray emission lead to consistent gravitational potentials. Our results illustrate the power of combining galaxy-cluster observables in a single, non-parametric, joint reconstruction of consistent cluster potentials that can be used to locally constrain the physical state
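The Richardson-Lucy scheme at the heart of the deprojection is a multiplicative fixed-point iteration; a generic sketch for any non-negative projection kernel K (illustrative only, not the authors' implementation, which works on projected cluster observables):

```python
import numpy as np

def richardson_lucy(data, K, n_iter=500):
    """Generic Richardson-Lucy iteration: recover f >= 0 such that
    K @ f ~ data, for a non-negative kernel K.
    Update rule: f <- f * (K.T @ (data / (K @ f))) / (K.T @ 1)."""
    f = np.ones(K.shape[1])
    norm = K.T @ np.ones(K.shape[0])
    for _ in range(n_iter):
        model = K @ f
        f *= (K.T @ (data / np.maximum(model, 1e-30))) / norm
    return f
```

The iteration preserves non-negativity and monotonically improves the fit, which is why it behaves well on noisy, strictly positive data such as X-ray surface brightness.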
NASA Astrophysics Data System (ADS)
Gladwin Pradeep, R.; Chandrasekar, V. K.; Mohanasubha, R.; Senthilvelan, M.; Lakshmanan, M.
2016-07-01
We identify contact transformations which linearize the given equations in the Riccati and Abel chains of nonlinear scalar and coupled ordinary differential equations to the same order. The identified contact transformations are not of Cole-Hopf type and are new to the literature. The linearization of Abel chain of equations is also demonstrated explicitly for the first time. The contact transformations can be utilized to derive dynamical symmetries of the associated nonlinear ODEs. The wider applicability of identifying this type of contact transformations and the method of deriving dynamical symmetries by using them is illustrated through two dimensional generalizations of the Riccati and Abel chains as well.
Combining Strong and Weak Gravitational Lensing in Abell 1689
NASA Astrophysics Data System (ADS)
Limousin, Marceau; Richard, Johan; Jullo, Eric; Kneib, Jean-Paul; Fort, Bernard; Soucail, Geneviève; Elíasdóttir, Árdís; Natarajan, Priyamvada; Ellis, Richard S.; Smail, Ian; Czoske, Oliver; Smith, Graham P.; Hudelot, Patrick; Bardeau, Sébastien; Ebeling, Harald; Egami, Eiichi; Knudsen, Kirsten K.
2007-10-01
We present a reconstruction of the mass distribution of the galaxy cluster Abell 1689 at z=0.18 using detected strong lensing features from deep ACS observations and extensive ground based spectroscopy. Earlier analyses have reported up to 32 multiply imaged systems in this cluster, of which only 3 were spectroscopically confirmed. In this work, we present a parametric strong lensing mass reconstruction using 34 multiply imaged systems, of which 24 have newly determined spectroscopic redshifts, which is a major step forward in building a robust mass model. In turn, the new spectroscopic data allow a more secure identification of multiply imaged systems. The resultant mass model enables us to reliably predict the redshifts of additional multiply imaged systems for which no spectra are currently available, and to use the location of these systems to further constrain the mass model. Using our strong lensing mass model, we predict on larger scale a shear signal which is consistent with that inferred from our large scale weak lensing analysis derived using CFH12K wide field images. Thanks to a new method for reliably selecting a well defined background lensed galaxy population, we resolve the discrepancy found between the NFW concentration parameters derived from earlier strong and weak lensing analyses. The derived parameters for the best fit NFW profile are found to be c200=7.6+/-1.6 and r200=2.16+/-0.10 h70-1 Mpc (corresponding to a 3D mass equal to M200=[1.32+/-0.2]×1015 h70-1 Msolar). The large number of new constraints incorporated in this work makes Abell 1689 the most reliably reconstructed cluster to date. This well calibrated mass model, which we here make publicly available, will enable us to exploit Abell 1689 efficiently as a gravitational telescope, as well as to potentially constrain cosmology. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada, the Institut National des
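The quoted M200 follows directly from r200 through the definition of the overdensity radius, M200 = (4/3) π r200^3 × 200 ρ_crit(z); a quick sketch in flat ΛCDM (the Ωm = 0.3, ΩΛ = 0.7, H0 = 70 values are illustrative and may differ slightly from the paper's adopted cosmology):

```python
import math

def m200_from_r200(r200_mpc, z, H0=70.0, omega_m=0.3, omega_l=0.7):
    """M200 = (4/3) * pi * r200^3 * 200 * rho_crit(z), in solar masses,
    for flat LCDM. r200_mpc is in Mpc."""
    G = 4.30091e-9  # gravitational constant in Mpc (km/s)^2 / Msun
    Hz2 = H0**2 * (omega_m * (1.0 + z)**3 + omega_l)  # H(z)^2 in (km/s/Mpc)^2
    rho_crit = 3.0 * Hz2 / (8.0 * math.pi * G)        # Msun / Mpc^3
    return (4.0 / 3.0) * math.pi * r200_mpc**3 * 200.0 * rho_crit
```

Plugging in r200 = 2.16 Mpc at z = 0.18 gives roughly 1.4 × 10^15 Msun, consistent with the quoted M200 to within the stated uncertainty and the choice of cosmology.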
NASA Astrophysics Data System (ADS)
Razali, Azhani Mohd; Abdullah, Jaafar
2015-04-01
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is part of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in the oil and gas, petrochemical and petrochemical refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.
SHOCKING TAILS IN THE MAJOR MERGER ABELL 2744
Owers, Matt S.; Couch, Warrick J.; Nulsen, Paul E. J.; Randall, Scott W.
2012-05-01
We identify four rare 'jellyfish' galaxies in Hubble Space Telescope imagery of the major merger cluster Abell 2744. These galaxies harbor trails of star-forming knots and filaments which have formed in situ in gas tails stripped from the parent galaxies, indicating they are in the process of being transformed by the environment. Further evidence for rapid transformation in these galaxies comes from their optical spectra, which reveal starburst, poststarburst, and active galactic nucleus features. Most intriguingly, three of the jellyfish galaxies lie near intracluster medium features associated with a merging 'Bullet-like' subcluster and its shock front detected in Chandra X-ray images. We suggest that the high-pressure merger environment may be responsible for the star formation in the gaseous tails. This provides observational evidence for the rapid transformation of galaxies during the violent core passage phase of a major cluster merger.
Giant ringlike radio structures around galaxy cluster Abell 3376.
Bagchi, Joydeep; Durret, Florence; Neto, Gastão B Lima; Paul, Surajit
2006-11-01
In the current paradigm of cold dark matter cosmology, large-scale structures are assembling through hierarchical clustering of matter. In this process, an important role is played by megaparsec (Mpc)-scale cosmic shock waves, arising in gravity-driven supersonic flows of intergalactic matter onto dark matter-dominated collapsing structures such as pancakes, filaments, and clusters of galaxies. Here, we report Very Large Array telescope observations of giant (approximately 2 Mpc by 1.6 Mpc), ring-shaped nonthermal radio-emitting structures, found at the outskirts of the rich cluster of galaxies Abell 3376. These structures may trace the elusive shock waves of cosmological large-scale matter flows, which are energetic enough to power them. These radio sources may also be the acceleration sites where magnetic shocks are possibly boosting cosmic-ray particles with energies of up to 10^18 to 10^19 electron volts.
The central star of the planetary nebula Abell 78
NASA Technical Reports Server (NTRS)
Kaler, J. B.; Feibelman, W. A.
1984-01-01
The ultraviolet spectrum of the nucleus of Abell 78, one of the two planetaries known to contain zones of nearly pure helium, is studied. The line spectrum and wind velocities are examined, the determination of interstellar extinction for assessing circumstellar dust is improved, and the temperature, luminosity, and core mass are derived. The results for A78 are compared with results for A30, and it is concluded that the dust distributions around the two central stars are quite different. The temperature of the A78 core is not as high as previously believed, and almost certainly lies between 67,000 K and 130,000 K. The most likely temperature range is 77,000-84,000 K. The core mass lies between 0.56 and 0.70 solar mass, with the most likely values between 0.56 and 0.58 solar mass.
The Sunyaev-Zeldovich Effect Spectrum of Abell 2163
NASA Technical Reports Server (NTRS)
LaRoque, S.; Reese, E. D.; Holder, G. P.; Carlstrom, J. E.; Holzapfel, W. L.; Joy, M. K.; Grego, L.; Rose, M. Franklin (Technical Monitor)
2001-01-01
We present a measurement of the Sunyaev-Zeldovich effect (SZE) at 30 GHz for the galaxy cluster Abell 2163. Combining this data point with previous measurements at 140, 220, and 270 GHz from the SuZIE and Diabolo experiments, we construct the most complete SZE spectrum to date. The spectrum is fitted to determine the Compton y parameter and the peculiar velocity for this cluster; our results are y_0 = 3.6 × 10^-4 and v_p = 360 km s^-1. These results include corrections for contamination by Galactic dust emission; we find the contamination level to be much less than previously reported. The dust emission, while strong, is distributed over much larger angular scales than the cluster signal and contributes little to the measured signal when the proper SZE observing strategy is taken into account.
Black holes a-wandering in Abell 2261
NASA Astrophysics Data System (ADS)
Spolaor, Sarah; Ford, Holland; Gultekin, Kayhan; Lauer, Tod R.; Lazio, T. Joseph W.; Loeb, Abraham; Moustakas, Leonidas A.; Postman, Marc; Taylor, Joanna M.
2016-01-01
The brightest cluster galaxy in Abell 2261 (BCG2261) has an exceptionally large, flat, and asymmetric core, thought to have been shaped by a binary supermassive black hole inspiral and subsequent gravitational recoil. BCG2261 should contain a 10^10 Msun black hole, but it lacks the central cusp that should mark such a massive black hole. Based on the presence of central radio emission, we have explored the core of this galaxy with HST and the VLA to identify the presence and location of the active nucleus in this galaxy's core. We present our exploration of whether this system in fact contains direct evidence of a recoiling binary supermassive black hole. A recoiling core in this system would represent a pointed observational test of three preeminent theoretical predictions: that scouring forms cores, that SMBHs may recoil after coalescence, and that recoil can strongly influence core formation and morphology.
A shock front at the radio relic of Abell 2744
NASA Astrophysics Data System (ADS)
Eckert, D.; Jauzac, M.; Vazza, F.; Owers, M. S.; Kneib, J.-P.; Tchernin, C.; Intema, H.; Knowles, K.
2016-09-01
Radio relics are Mpc-scale diffuse radio sources at the peripheries of galaxy clusters which are thought to trace outgoing merger shocks. We present XMM-Newton and Suzaku observations of the galaxy cluster Abell 2744 (z = 0.306), which reveal the presence of a shock front 1.5 Mpc east of the cluster core. The surface-brightness jump coincides with the position of a known radio relic. Although the surface-brightness jump indicates a weak shock with a Mach number M=1.7_{-0.3}^{+0.5}, the plasma in the post-shock region has been heated to a very high temperature (˜13 keV) by the passage of the shock wave. The low-acceleration efficiency expected from such a weak shock suggests that mildly relativistic electrons have been re-accelerated by the passage of the shock front.
Giant ringlike radio structures around galaxy cluster Abell 3376.
Bagchi, Joydeep; Durret, Florence; Neto, Gastão B Lima; Paul, Surajit
2006-11-01
In the current paradigm of cold dark matter cosmology, large-scale structures are assembling through hierarchical clustering of matter. In this process, an important role is played by megaparsec (Mpc)-scale cosmic shock waves, arising in gravity-driven supersonic flows of intergalactic matter onto dark matter-dominated collapsing structures such as pancakes, filaments, and clusters of galaxies. Here, we report Very Large Array telescope observations of giant (approximately 2 Mpc by 1.6 Mpc), ring-shaped nonthermal radio-emitting structures, found at the outskirts of the rich cluster of galaxies Abell 3376. These structures may trace the elusive shock waves of cosmological large-scale matter flows, which are energetic enough to power them. These radio sources may also be the acceleration sites where magnetic shocks are possibly boosting cosmic-ray particles with energies of up to 10^18 to 10^19 electron volts. PMID:17082451
ABEL description and implementation of cyber net system
NASA Astrophysics Data System (ADS)
Lu, Jiyuan; Jing, Liang
2013-03-01
Cyber net systems are a subclass of Petri nets. Compared with P/T systems they have greater descriptive power and more complex properties, but because of their nonlinear relation the analysis techniques of other net systems cannot be applied directly, which has hampered research on cyber net systems. In this paper the authors use a hardware description language to describe cyber net systems, and simulation analysis is carried out with EDA software tools to reveal the properties of the system. The method is illustrated in detail with a cyber net system model that computes the Fibonacci series; the ABEL source code and simulation waveforms are also presented. The source code is compiled, optimized, fitted, and downloaded to a Programmable Logic Device, yielding an ASIC that computes the Fibonacci series. This opens a new path for the analysis and application study of cyber net systems.
An inversion method for cometary atmospheres
NASA Astrophysics Data System (ADS)
Hubert, B.; Opitom, C.; Hutsemékers, D.; Jehin, E.; Munhoven, G.; Manfroid, J.; Bisikalo, D. V.; Shematovich, V. I.
2016-10-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight. This integration is the so-called Abel transform of the local emission rate. The observation is generally interpreted under the hypothesis of spherical symmetry of the coma. Under that hypothesis, the Abel transform can be inverted. We derive a numerical inversion method adapted to cometary atmospheres using both analytical results and least squares fitting techniques. This method, derived under the usual hypothesis of spherical symmetry, allows us to retrieve the radial distribution of the emission rate of any unabsorbed emission, which is the fundamental, physically meaningful quantity governing the observation. A Tikhonov regularization technique is also applied to reduce the possibly deleterious effects of the noise present in the observation and to ensure that the problem remains well posed. Standard error propagation techniques are included in order to estimate the uncertainties affecting the retrieved emission rate. Several theoretical tests of the inversion technique are carried out to show its validity and robustness. In particular, we show that the Abel inversion of real data is only weakly sensitive to an offset applied to the input flux, which implies that the method, applied to the study of a cometary atmosphere, is only weakly dependent on uncertainties on the sky background which has to be subtracted from the raw observations of the coma. We apply the method to observations of three different comets observed using the TRAPPIST telescope: 103P/Hartley 2, F6/Lemmon and A1/Siding Spring. We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the retrieved characteristic lengths can differ from those obtained from a direct least squares fitting over the observed flux of radiation, and
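Under the same spherical-symmetry hypothesis, the inversion can be illustrated with a simple "onion-peeling" discretization, a standard numerical route to the Abel inversion (the shell grid, the exponential test profile, and all names below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def shell_matrix(edges):
    """Chord lengths L[i, j] of the line of sight at impact parameter
    y_i = mid[i] through the spherical shell edges[j]..edges[j+1]."""
    mid = 0.5 * (edges[:-1] + edges[1:])
    outer = np.sqrt(np.maximum(edges[None, 1:] ** 2 - mid[:, None] ** 2, 0.0))
    inner = np.sqrt(np.maximum(edges[None, :-1] ** 2 - mid[:, None] ** 2, 0.0))
    return 2.0 * (outer - inner)

edges = np.linspace(0.0, 10.0, 101)
mid = 0.5 * (edges[:-1] + edges[1:])
L = shell_matrix(edges)

# Synthetic coma: emission rate n(r) = exp(-r), observed as its Abel
# transform, i.e. the line-of-sight integrals F = L @ n.
n_true = np.exp(-mid)
F = L @ n_true

# Onion peeling: L is upper triangular, so the radial profile is
# recovered by solving the linear system from the outermost shell inward.
n_rec = np.linalg.solve(L, F)
```

Because the chord matrix is upper triangular, the solve proceeds from the outermost shell inward; with noisy data a regularized solve (e.g. Tikhonov, as in the paper) would replace the direct solve.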
The distribution of dark and luminous matter in the unique galaxy cluster merger Abell 2146
NASA Astrophysics Data System (ADS)
King, Lindsay J.; Clowe, Douglas I.; Coleman, Joseph E.; Russell, Helen R.; Santana, Rebecca; White, Jacob A.; Canning, Rebecca E. A.; Deering, Nicole J.; Fabian, Andrew C.; Lee, Brandyn E.; Li, Baojiu; McNamara, Brian R.
2016-06-01
Abell 2146 (z = 0.232) consists of two galaxy clusters undergoing a major merger. The system was discovered in previous work, where two large shock fronts were detected using the Chandra X-ray Observatory, consistent with a merger close to the plane of the sky, caught soon after first core passage. A weak gravitational lensing analysis of the total gravitating mass in the system, using the distorted shapes of distant galaxies seen with the Advanced Camera for Surveys - Wide Field Channel on the Hubble Space Telescope, is presented. The highest peak in the reconstruction of the projected mass is centred on the brightest cluster galaxy (BCG) in Abell 2146-A. The mass associated with Abell 2146-B is more extended. Bootstrapped noise mass reconstructions show the mass peak in Abell 2146-A to be consistently centred on the BCG. Previous work showed that BCG-A appears to lag behind an X-ray cool core; although the peak of the mass reconstruction is centred on the BCG, it is also consistent with the X-ray peak given the resolution of the weak lensing mass map. The best-fitting mass model with two components centred on the BCGs yields M200 = 1.1^{+0.3}_{-0.4} × 10^15 and 3^{+1}_{-2} × 10^14 M⊙ for Abell 2146-A and Abell 2146-B, respectively, assuming a mass concentration parameter of c = 3.5 for each cluster. From the weak lensing analysis, Abell 2146-A is the primary halo component, and the origin of the apparent discrepancy with the X-ray analysis where Abell 2146-B is the primary halo is being assessed using simulations of the merger.
The wonderful apparatus of John Jacob Abel called the "artificial kidney".
Eknoyan, Garabed
2009-01-01
Hemodialysis, which now provides life-saving therapy to millions of individuals, began as an exploratory attempt to sustain the lives of selected patients in the 1950s. That was a century after the formulation of the concept and determination of the laws governing dialysis. The first step in the translation of the laboratory principles of dialysis to living animals was the "vividiffusion" apparatus developed by John Jacob Abel (1859-1938), dubbed the "artificial kidney" in the August 11, 1913 issue of The Times of London reporting the demonstration of vividiffusion by Abel at University College. The detailed article in the January 18, 1914 issue of The New York Times, reproduced here, is based on the subsequent medical reports published by Abel et al. Tentative attempts at human dialysis in the decade that followed, based on the vividiffusion apparatus of Abel and his materials (collodion, hirudin, and glass), met with failure and had to be abandoned. Practical dialysis became possible in the 1940s and thereafter, after cellophane, heparin, and Teflon became available. Abel worked in an age of great progress and experimental work in the basic sciences that laid the foundations of science-driven medicine. It was a "Heroic Age of Medicine," when medical discoveries, and communicating them to the public, were assuming increasing importance. This article provides the cultural, social, scientific, and medical background in which Abel worked, developed, and reported his wonderful apparatus called the "artificial kidney."
Mass, velocity anisotropy, and pseudo phase-space density profiles of Abell 2142
NASA Astrophysics Data System (ADS)
Munari, E.; Biviano, A.; Mamon, G. A.
2014-06-01
Aims: We aim to compute the mass and velocity anisotropy profiles of Abell 2142 and, from there, the pseudo phase-space density profile Q(r) and the density slope - velocity anisotropy (β - γ) relation, and then to compare them with theoretical expectations. Methods: The mass profiles were obtained by using three techniques based on member galaxy kinematics, namely the caustic method, the method of dispersion-kurtosis, and MAMPOSSt. Through the inversion of the Jeans equation, it was possible to compute the velocity anisotropy profiles. Results: The mass profiles, as well as the virial values of mass and radius, computed with the different techniques agree with one another and with the estimates coming from X-ray and weak lensing studies. A combined mass profile is obtained by averaging the lensing, X-ray, and kinematics determinations. The cluster mass profile is well fitted by an NFW profile with c = 4.0 ± 0.5. The populations of red and blue galaxies appear to have different velocity anisotropy profiles: red galaxies are almost isotropic, while blue galaxies are radially anisotropic, with a weak dependence on radius. The Q(r) profile for the red galaxy population agrees with the theoretical results found in cosmological simulations, suggesting that any velocity-dispersion bias of the red component, relative to the dark matter particles, is independent of radius. The β - γ relation for red galaxies matches the theoretical relation only in the inner region. The deviations might be due to the use of galaxies as tracers of the gravitational potential, unlike the non-collisional tracer used in the theoretical relation.
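A fit such as the quoted NFW profile with c = 4.0 ± 0.5 fixes the full enclosed-mass curve. A small sketch of the standard NFW mass function (the numerical values of M200 and r200 below are illustrative placeholders, not the paper's measurements):

```python
import numpy as np

def nfw_enclosed_mass(r, m200, r200, c):
    """Mass enclosed within radius r for an NFW profile defined by
    M200, r200 and the concentration c = r200 / r_s."""
    rs = r200 / c
    mu = lambda x: np.log(1.0 + x) - x / (1.0 + x)  # NFW mass function
    return m200 * mu(r / rs) / mu(c)

# Illustrative numbers (not from the paper): M200 in Msun, r200 in Mpc,
# with the concentration c = 4.0 of the fitted profile.
m200, r200, c = 1.3e15, 2.2, 4.0
m_half = nfw_enclosed_mass(0.5 * r200, m200, r200, c)
# For c ≈ 4, roughly half of M200 is already enclosed within ~0.5 r200.
```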
The galaxy population of Abell 1367: photometric and spectroscopic data
NASA Astrophysics Data System (ADS)
Kriwattanawong, W.; Moss, C.; James, P. A.; Carter, D.
2011-03-01
Aims: Photometric and spectroscopic observations of the galaxy population of the galaxy cluster Abell 1367 have been obtained, over a field of 34' × 90', covering the cluster centre out to a radius of ~2.2 Mpc. Optical broad- and narrow-band imaging was used to determine galaxy luminosities, diameters and morphologies, and to study current star formation activity of a sample of cluster galaxies. Near-infrared imaging was obtained to estimate integrated stellar masses, and to aid the determination of mean stellar ages and metallicities for the future investigation of the star formation history of those galaxies. Optical spectroscopic observations were also taken, to confirm cluster membership of galaxies in the sample through their recession velocities. Methods.U, B and R broad-band and Hα narrow-band imaging observations were carried out using the Wide Field Camera (WFC) on the 2.5 m Isaac Newton Telescope on La Palma, covering the field described above. J and K near-infrared imaging was obtained using the Wide Field Camera (WFCAM) on the 3.8 m UK Infrared Telescope on Mauna Kea, covering a somewhat smaller field of 0.75 square degrees on the cluster centre. The spectroscopic observations were carried out using a multifibre spectrograph (WYFFOS) on the 4.2 m William Herschel Telecope on La Palma, over the same field as the optical imaging observations. Results: Our photometric data give optical and near-infrared isophotal magnitudes for 303 galaxies in our survey regions, down to stated diameter and B-band magnitude limits, determined within R24 isophotal diameters. Our spectroscopic data of 328 objects provide 84 galaxies with detections of emission and/or absorption lines. Combining these with published spectroscopic data gives 126 galaxies within our sample for which recession velocities are known. Of these, 72 galaxies are confirmed as cluster members of Abell 1367, 11 of which are identified in this study and 61 are reported in the literature. Hα equivalent
The merging cluster Abell 1758: an optical and dynamical view
NASA Astrophysics Data System (ADS)
Monteiro-Oliveira, Rogerio; Serra Cypriano, Eduardo; Machado, Rubens; Lima Neto, Gastao B.
2015-08-01
The galaxy cluster Abell 1758-North (z=0.28) is a binary system composed of the sub-structures NW and NE. It is thought to be a post-merger cluster because of the observed detachment between the NE BCG and the corresponding X-ray emitting hot gas clump, in a scenario very similar to the famous Bullet Cluster. On the other hand, the projected position of the NW BCG coincides with the local hot gas peak. This system has been targeted previously by several studies, using multiple wavelengths and techniques, but there is still no clear picture of the scenario that could have caused this unusual configuration. To help solve this complex puzzle we added some pieces: first, we used deep B, RC and z' Subaru images to perform both weak lensing shear and magnification analyses of A1758 (including the South component, which is not interacting with A1758-North), modelling each sub-clump as an NFW profile in order to constrain the masses and centre positions through MCMC methods; second, we performed a dynamical analysis using radial velocities available in the literature (143) plus new Gemini-GMOS/N measurements (68 new redshifts). From weak lensing we found that the independent shear and magnification mass determinations are in excellent agreement, and combining both reduces the mass error bars by ~30% compared to shear alone. By combining these two weak-lensing probes we found that the positions of both Northern BCGs are consistent with the mass centres within 2σ, and that the NE hot gas peak is offset from the respective mass peak (M200 = 5.5 × 10^14 M⊙) with very high significance. The most massive structure is NW (M200 = 7.95 × 10^14 M⊙), where we observed no detachment between gas, DM and BCG. We calculated a low line-of-sight velocity difference (<300 km/s) between A1758 NW and NE. Combining this with the projected velocity of 1600 km/s estimated by a previous X-ray analysis (David & Kempner 2004), we obtained a small angle between
The planetary nebula Abell 48 and its [WN] nucleus
NASA Astrophysics Data System (ADS)
Frew, David J.; Bojičić, I. S.; Parker, Q. A.; Stupar, M.; Wachter, S.; DePew, K.; Danehkar, A.; Fitzgerald, M. T.; Douchin, D.
2014-05-01
We have conducted a detailed multi-wavelength study of the peculiar nebula Abell 48 and its central star. We classify the nucleus as a helium-rich, hydrogen-deficient star of type [WN4-5]. The evidence for either a massive WN or a low-mass [WN] interpretation is critically examined, and we firmly conclude that Abell 48 is a planetary nebula (PN) around an evolved low-mass star, rather than a Population I ejecta nebula. Importantly, the surrounding nebula has a morphology typical of PNe, and is not enriched in nitrogen, and thus not the `peeled atmosphere' of a massive star. We estimate a distance of 1.6 kpc and a reddening, E(B - V) = 1.90 mag, the latter value clearly showing the nebula lies on the near side of the Galactic bar, and cannot be a massive WN star. The ionized mass (˜0.3 M⊙) and electron density (700 cm-3) are typical of middle-aged PNe. The observed stellar spectrum was compared to a grid of models from the Potsdam Wolf-Rayet (PoWR) grid. The best-fitting temperature is 71 kK, and the atmospheric composition is dominated by helium with an upper limit on the hydrogen abundance of 10 per cent. Our results are in very good agreement with the recent study of Todt et al., who determined a hydrogen fraction of 10 per cent and an unusually large nitrogen fraction of ˜5 per cent. This fraction is higher than any other low-mass H-deficient star, and is not readily explained by current post-AGB models. We give a discussion of the implications of this discovery for the late-stage evolution of intermediate-mass stars. There is now tentative evidence for two distinct helium-dominated post-AGB lineages, separate to the helium- and carbon-dominated surface compositions produced by a late thermal pulse. Further theoretical work is needed to explain these recent discoveries.
A shock at the radio relic position in Abell 115
NASA Astrophysics Data System (ADS)
Botteon, A.; Gastaldello, F.; Brunetti, G.; Dallacasa, D.
2016-07-01
We analysed a deep Chandra observation (334 ks) of the galaxy cluster Abell 115 and detected a shock cospatial with the radio relic. The X-ray surface brightness profile across the shock region presents a discontinuity, corresponding to a density compression factor C = 2.0 ± 0.1, leading to a Mach number M = 1.7 ± 0.1 (M = 1.4-2 including systematics). Temperatures measured in the upstream and downstream regions are consistent with what is expected for such a shock: T_u = 4.3^{+1.0}_{-0.6} keV and T_d = 7.9^{+1.4}_{-1.1} keV, respectively, implying a Mach number M = 1.8^{+0.5}_{-0.4}. So far, only a few other shocks discovered in galaxy clusters have been consistently detected from both density and temperature jumps. The spatial coincidence between this discontinuity and the radio relic edge strongly supports the view that shocks play a crucial role in powering these synchrotron sources. We suggest that the relic originates from shock re-acceleration of relativistic electrons rather than acceleration from the thermal pool. The position and curvature of the shock and the associated relic are consistent with an off-axis merger with unequal mass ratio, where the shock is expected to bend around the core of the less massive cluster.
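The quoted Mach numbers follow from the standard Rankine-Hugoniot jump conditions for a γ = 5/3 plasma; a quick consistency check of the numbers above (a sketch, not the authors' analysis code):

```python
import numpy as np

GAMMA = 5.0 / 3.0  # monatomic ideal gas

def mach_from_density_jump(C, gamma=GAMMA):
    """Invert the Rankine-Hugoniot compression C = rho_d / rho_u."""
    return np.sqrt(2.0 * C / (gamma + 1.0 - C * (gamma - 1.0)))

def temperature_jump(M, gamma=GAMMA):
    """Downstream-to-upstream temperature ratio T_d/T_u across a Mach-M shock."""
    num = (2.0 * gamma * M**2 - (gamma - 1.0)) * ((gamma - 1.0) * M**2 + 2.0)
    return num / ((gamma + 1.0)**2 * M**2)

M = mach_from_density_jump(2.0)   # ≈ 1.73, matching M = 1.7 ± 0.1
ratio = temperature_jump(1.8)     # ≈ 1.83, close to 7.9/4.3 ≈ 1.84
```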
The Radio Luminosity Function and Galaxy Evolution of Abell 2256
NASA Astrophysics Data System (ADS)
Forootaninia, Zahra
2015-05-01
This thesis presents a study of the radio luminosity function and the evolution of galaxies in the Abell 2256 cluster (z=0.058, richness class 2). Using the NED database and deep VLA data with an rms sensitivity of 18 μJy beam^-1, we identified 257 optical galaxies as members of A2256, of which 83 are radio galaxies. Since A2256 is undergoing a cluster-cluster merger, it is a good candidate for studying the radio activity of galaxies in the cluster. We calculated the univariate and bivariate radio luminosity functions for A2256, and compared the results to studies of other clusters. We also used the SDSS parameter fracDev to roughly classify galaxies as spirals and ellipticals, and investigated the distribution and structure of galaxies in the cluster. We found that most of the radio galaxies in A2256 are faint, and are distributed towards the outskirts of the cluster. On the other hand, almost all very bright radio galaxies are ellipticals located at the center of the cluster. We also found an excess in the number of radio spiral galaxies in A2256 compared to the number of radio ellipticals, counting down to a radio luminosity of log(L) = 20.135 W Hz^-1.
Abell 1201: A Minor Merger at Second Core Passage
NASA Astrophysics Data System (ADS)
Ma, Cheng-Jiun; Owers, Matt; Nulsen, Paul E. J.; McNamara, Brian R.; Murray, Stephen S.; Couch, Warrick J.
2012-06-01
We present an analysis of the structures and dynamics of the merging cluster Abell 1201, which has two sloshing cold fronts around a cooling core, and an offset gas core approximately 500 kpc northwest of the center. New Chandra and XMM-Newton data reveal a region of enhanced brightness east of the offset core, with breaks in surface brightness along its boundary to the north and east. This is interpreted as a tail of gas stripped from the offset core. Gas in the offset core and the tail is distinguished from other gas at the same distance from the cluster center chiefly by having higher density, hence lower entropy. In addition, the offset core shows marginally lower temperature and metallicity than the surrounding area. The metallicity in the cool core is high and there is an abrupt drop in metallicity across the southern cold front. We interpret the observed properties of the system, including the placement of the cold fronts, the offset core, and its tail in terms of a simple merger scenario. The offset core is the remnant of a merging subcluster, which first passed pericenter southeast of the center of the primary cluster and is now close to its second pericenter passage, moving at ~= 1000 km s-1. Sloshing excited by the merger gave rise to the two cold fronts and the disposition of the cold fronts reveals that we view the merger from close to the plane of the orbit of the offset core.
Chandra Observations of Point Sources in Abell 2255
NASA Technical Reports Server (NTRS)
Davis, David S.; Miller, Neal A.; Mushotzky, Richard F.
2003-01-01
In our search for "hidden" AGN we present results from a Chandra observation of the nearby cluster Abell 2255. Eight cluster galaxies are associated with point-like X-ray emission, and we classify these galaxies based on their X-ray, radio, and optical properties. At least three are associated with active galactic nuclei (AGN) with no optical signatures of nuclear activity, with a further two being potential AGN. Of the potential AGN, one corresponds to a galaxy with a post-starburst optical spectrum. The remaining three X-ray detected cluster galaxies consist of two starbursts and an elliptical with luminous hot gas. Of the eight cluster galaxies five are associated with luminous (massive) galaxies and the remaining three lie in much lower luminosity systems. We note that the use of X-ray to optical flux ratios for classification of X-ray sources is often misleading, and strengthen the claim that the fraction of cluster galaxies hosting an AGN based on optical data is significantly lower than the fraction based on X-ray and radio data.
The Sunyaev-Zeldovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Cooray, Asantha R.; Holzappel, William L.
2000-01-01
We present interferometric measurements of the Sunyaev-Zeldovich (SZ) effect toward the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas distribution to be strongly aspherical, as do the X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction in two ways. We first compare the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deprojecting the three-dimensional gas density distribution and deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods and find that they agree within the errors of the measurement. We discuss the possible systematic errors in the gas mass fraction measurement and the constraints it places on the matter density parameter, Ω_M.
The Sunyaev-Zel'dovich Effect Spectrum of Abell 2163
NASA Technical Reports Server (NTRS)
LaRoque, S. J.; Carlstrom, J. E.; Reese, E. D.; Holder, G. P.; Holzapfel, W. L.; Joy, M.; Grego, L.; Six, N. Frank (Technical Monitor)
2002-01-01
We present an interferometric measurement of the Sunyaev-Zel'dovich effect (SZE) at 1 cm for the galaxy cluster Abell 2163. We combine this data point with previous measurements at 1.1, 1.4, and 2.1 mm from the SuZIE experiment to construct the most complete SZE spectrum to date. The intensity in four wavelength bands is fit to determine the Compton y-parameter (y_0) and the peculiar velocity (v_p) for this cluster. Our results are y_0 = 3.56^{+0.41+0.27}_{-0.41-0.19} × 10^-4 and v_p = 410^{+1030+460}_{-850-440} km s^-1, where we list statistical and systematic uncertainties, respectively, at 68% confidence. These results include corrections for contamination by Galactic dust emission. We find less contamination by dust emission than previously reported. The dust emission is distributed over much larger angular scales than the cluster signal and contributes little to the measured signal when the details of the SZE observing strategy are taken into account.
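The leverage of combining the 1 cm decrement with the millimetre bands comes from the nonrelativistic thermal SZE spectral shape, which is negative below the null near 217 GHz and positive above it. A sketch of the standard spectral function (the function name and layout are ours, not the paper's fitting code):

```python
import numpy as np

def tsz_spectral_shape(nu_ghz, t_cmb=2.725):
    """Nonrelativistic thermal SZE distortion shape f(x), where
    Delta I / I_0 = y * f(x) and x = h nu / (k T_CMB)."""
    h_over_k = 0.0479924  # Planck constant over Boltzmann constant, in K/GHz
    x = h_over_k * nu_ghz / t_cmb
    return x**4 * np.exp(x) / np.expm1(x)**2 * (x / np.tanh(x / 2.0) - 4.0)

# Decrement at 30 GHz (1 cm), increment in the SuZIE millimetre bands,
# with the null falling between 216 and 219 GHz.
decrement = tsz_spectral_shape(30.0)    # negative
increment = tsz_spectral_shape(270.0)   # positive
```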
Narrow-angle tail radio sources and the distribution of galaxy orbits in Abell clusters
NASA Technical Reports Server (NTRS)
O'Dea, Christopher P.; Sarazin, Craig L.; Owen, Frazer N.
1987-01-01
The present data on the orientations of the tails with respect to the cluster centers of a sample of 70 narrow-angle-tail (NAT) radio sources in Abell clusters show the distribution of tail angles to be inconsistent with purely radial or circular orbits in all the samples, while being consistent with isotropic orbits in (1) the whole sample, (2) the sample of NATs far from the cluster center, and (3) the samples of morphologically regular Abell clusters. Evidence for very radial orbits is found, however, in the sample of NATs near the cluster center. If these results can be generalized to all cluster galaxies, then the presence of radial orbits near the center of Abell clusters suggests that violent relaxation may not have been fully effective even within the cores of the regular clusters.
Nonlocal symmetries of Riccati and Abel chains and their similarity reductions
NASA Astrophysics Data System (ADS)
Bruzon, M. S.; Gandarias, M. L.; Senthilvelan, M.
2012-02-01
We study nonlocal symmetries and their similarity reductions of the Riccati and Abel chains. Our results show that all the equations in the Riccati chain share the same form of nonlocal symmetry. The similarity-reduced Nth-order ordinary differential equation (ODE), N = 2, 3, 4, …, in this chain yields the (N - 1)th-order ODE in the same chain. All the equations in the Abel chain also share the same form of nonlocal symmetry (which is different from the one in the Riccati chain), but the similarity-reduced Nth-order ODE, N = 2, 3, 4, …, in the Abel chain always ends at the (N - 1)th-order ODE in the Riccati chain. We describe the method of finding the general solution of all the equations that appear in these chains from the nonlocal symmetry.
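For orientation, the first members of the two chains, generated by repeated application of the operators D + u and D + u² respectively, can be written out explicitly (our reconstruction of the usual generating-operator convention, not notation taken from the paper):

```latex
% Riccati chain: (D_x + u)^N u = 0
u' + u^2 = 0, \qquad u'' + 3uu' + u^3 = 0, \qquad \ldots

% Abel chain: (D_x + u^2)^N u = 0
u' + u^3 = 0, \qquad u'' + 4u^2 u' + u^5 = 0, \qquad \ldots
```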
The nearby Abell clusters. III. Luminosity functions for eight rich clusters
Oegerle, W.R.; Hoessel, J.G. (Washburn Observatory, Madison, WI)
1989-11-01
Red photographic data on eight rich Abell clusters are combined with previous results on four other Abell clusters to study the luminosity functions of the clusters. The results produce a mean value of the characteristic galaxy magnitude (M*) that is consistent with previous results. No relation is found between the magnitude of the first-ranked cluster galaxy and M*, suggesting that the value of M* is not changed by dynamical evolution. The faint ends of the luminosity functions for many of the clusters are quite flat, indicating nonuniversality in the parametrization of Schechter (1976) functions for rich clusters of galaxies. 40 refs.
U(1)-invariant membranes: The geometric formulation, Abel, and pendulum differential equations
Zheltukhin, A. A.; Trzetrzelewski, M.
2010-06-15
The geometric approach to study the dynamics of U(1)-invariant membranes is developed. The approach reveals an important role of the Abel nonlinear differential equation of the first kind with variable coefficients depending on time and one of the membrane extendedness parameters. The general solution of the Abel equation is constructed. Exact solutions of the whole system of membrane equations in the D=5 Minkowski space-time are found and classified. It is shown that if the radial component of the membrane world vector is only time dependent, then the dynamics is described by the pendulum equation.
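For reference, the Abel differential equation of the first kind has the general form (with coefficients that, in the membrane problem, depend on time and one extendedness parameter):

```latex
y' = f_3(x)\,y^3 + f_2(x)\,y^2 + f_1(x)\,y + f_0(x)
```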
THE GALAXY POPULATION OF LOW-REDSHIFT ABELL CLUSTERS
Barkhouse, Wayne A.; Yee, H. K. C.; Lopez-Cruz, Omar
2009-10-01
We present a study of the luminosity and color properties of galaxies selected from a sample of 57 low-redshift Abell clusters. We utilize the non-parametric dwarf-to-giant ratio (DGR) and the blue galaxy fraction (f_b) to investigate the clustercentric radial-dependent changes in the cluster galaxy population. Composite cluster samples are combined by scaling the counting radius by r_200 to minimize radius selection bias. The separation of galaxies into a red and blue population was achieved by selecting galaxies relative to the cluster color-magnitude relation. The DGR of the red and blue galaxies is found to be independent of cluster richness (B_gc), although the DGR is larger for the blue population at all measured radii. A decrease in the DGR for the red and red+blue galaxies is detected in the cluster core region, while the blue galaxy DGR is nearly independent of radius. The f_b is found not to correlate with B_gc; however, a steady decline toward the inner-cluster region is observed for the giant galaxies. The dwarf galaxy f_b is approximately constant with clustercentric radius except for the inner-cluster core region where f_b decreases. The clustercentric radial dependence of the DGR and the galaxy blue fraction indicates that it is unlikely that a simple scenario based on either pure disruption or pure fading/reddening can describe the evolution of infalling dwarf galaxies; both outcomes are produced by the cluster environment.
The Sunyaev-Zel'dovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Holzapfel, William L.; Cooray, Asantha K.
1999-01-01
We present interferometric measurements of the Sunyaev-Zel'dovich (SZ) effect towards the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas is strongly aspherical, in agreement with the morphology revealed by X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction by comparing the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods. The Hubble constant derived for this cluster, when the known systematic uncertainties are included, has a very wide range of values and therefore does not provide additional constraints on the validity of the assumptions. We examine carefully the possible systematic errors in the gas fraction measurement. The gas fraction is a lower limit to the cluster's baryon fraction, so we compare the gas mass fraction, calibrated by numerical simulations to approximately the virial radius, to measurements of the global mass fraction of baryonic matter, Ω_B/Ω_matter. Our lower limit to the cluster baryon fraction is f_B = (0.043 +/- 0.014)/h_100. From this, we derive an upper limit to the universal matter density, Ω_matter ≤ 0.72/h_100, and a likely value of Ω_matter = 0.44^{+0.15}_{-0.12}/h_100.
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collection. Nonetheless, a drawback is that the approach requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
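The core observation above — that a matrix with a smooth, slowly varying inverse becomes much sparser after a change to a wavelet basis — can be illustrated with a small numpy sketch. The 1D Laplacian model problem, the three-level Haar transform, and the 1% threshold below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def haar_step(n):
    """One level of an orthonormal Haar transform on R^n (n even)."""
    H = np.zeros((n, n))
    s = 1.0 / np.sqrt(2.0)
    for i in range(n // 2):
        H[i, 2 * i] = H[i, 2 * i + 1] = s        # average (scaling) rows
        H[n // 2 + i, 2 * i] = s                 # detail (wavelet) rows
        H[n // 2 + i, 2 * i + 1] = -s
    return H

def haar_matrix(n, levels):
    """Compose `levels` Haar steps, each acting on the current coarse block."""
    W = np.eye(n)
    m = n
    for _ in range(levels):
        step = np.eye(n)
        step[:m, :m] = haar_step(m)
        W = step @ W
        m //= 2
    return W

n = 64
# 1D Laplacian: a standard elliptic model problem with a dense, smooth inverse.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Ainv = np.linalg.inv(A)

W = haar_matrix(n, levels=3)
Ainv_w = W @ Ainv @ W.T          # the inverse represented in the wavelet basis

# Fraction of entries that survive a 1%-of-max magnitude threshold.
tol = 1e-2 * np.abs(Ainv).max()
frac_std = np.mean(np.abs(Ainv) > tol)
frac_wav = np.mean(np.abs(Ainv_w) > tol)
print(frac_std, frac_wav)
```

In the standard basis nearly every entry of the inverse is significant, while in the wavelet basis most entries fall below the threshold, which is exactly why a sparse approximate inverse becomes feasible there.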
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Sekar, Anusha; Hoversten, G. Michael; Albertin, Uwe
2016-05-01
We present an algorithm to recover the Bayesian posterior model probability density function of subsurface elastic parameters, as constrained by the full pressure field recorded at an ocean bottom cable due to an impulsive seismic source. Both the data noise and source wavelet are estimated by our algorithm, resulting in robust estimates of subsurface velocity and density. In contrast to purely gradient based approaches, our method avoids model regularization entirely and produces an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. Our algorithm is trans-dimensional and performs model selection, sampling over a wide range of model parametrizations. We follow a frequency domain approach and derive the corresponding likelihood in the frequency domain. We present first a synthetic example of a reservoir at 2 km depth with minimal acoustic impedance contrast, which is difficult to study with conventional seismic amplitude versus offset changes. Finally, we apply our methodology to survey data collected over the Alba field in the North Sea, an area which is known to show very little lateral heterogeneity but nevertheless presents challenges for conventional post migration seismic amplitude versus offset analysis.
VizieR Online Data Catalog: Deep spectroscopy of Abell 85 (Agulli+, 2016)
NASA Astrophysics Data System (ADS)
Agulli, I.; Aguerri, J. A. L.; Sanchez-Janssen, R.; Dalla Vecchia, C.; Diaferio, A.; Barrena, R.; Palmero, L. D.; Yu, H.
2016-07-01
File a85_memb.dat contains 5 columns with the sky coordinates (RA; DE), the r- and g-band magnitudes, and the recessional velocities for each of the 460 confirmed members of the Abell 85 cluster. Details on the data set can be found in the paper. (1 data file).
Abell 58 - a Planetary Nebula with an ONe-rich knot: a signature of binary interaction? .
NASA Astrophysics Data System (ADS)
Lau, H. H. B.; De Marco, O.; Liu, X.-W.
We have investigated the possibility that binary evolution is involved in the formation of the planetary nebula Abell 58. In particular, we assume that a neon nova is responsible for the observed high oxygen and neon abundances of the central hydrogen-deficient knot of the H-deficient planetary nebula Abell 58, and that the ejecta from the explosion are mixed with the planetary nebula. We have investigated different scenarios involving mergers and wind accretion and found that the most promising formation scenario involves a primary SAGB star that ends its evolution as an ONe white dwarf with an AGB companion at a moderately close separation. Mass is deposited on the white dwarf through wind accretion, so a neon nova could occur just after the secondary AGB companion undergoes its final flash. However, the initial separation has to be fine-tuned. To estimate the frequency of such systems we evolve a population of binary systems and find that Abell 58-like objects should indeed be rare: the fraction of Abell 58-like planetary nebulae is on the order of 10^-4, or lower, among all planetary nebulae.
Crazy heart: kinematics of the "star pile" in Abell 545
NASA Astrophysics Data System (ADS)
Salinas, R.; Richtler, T.; West, M. J.; Romanowsky, A. J.; Lloyd-Davies, E.; Schuberth, Y.
2011-04-01
We study the structure and internal kinematics of the "star pile" in Abell 545 - a low surface brightness structure lying in the center of the cluster. We have obtained deep long-slit spectroscopy of the star pile using VLT/FORS2 and Gemini/GMOS, which is analyzed in conjunction with deep multiband CFHT/MEGACAM imaging. As presented in a previous study, the star pile has a flat luminosity profile and its color is consistent with the outer parts of elliptical galaxies. Its velocity map is irregular, with parts seemingly associated with an embedded nucleus, and others that have significant velocity offsets from the cluster systemic velocity with no clear kinematical connection to any of the surrounding galaxies. This would make the star pile a dynamically defined stellar intra-cluster component. The complicated pattern in velocity and velocity dispersions casts doubts on the adequacy of using the whole star pile as a dynamical test for the innermost dark matter profile of the cluster. Only the nucleus and its nearest surroundings, which lie at the center of the cluster velocity distribution, fulfil this role. Based on observations taken at the European Southern Observatory, Cerro Paranal, Chile, under programme ID 080.B-0529. Also based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina); and on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National
MUSE observations of the lensing cluster Abell 1689
NASA Astrophysics Data System (ADS)
Bina, D.; Pelló, R.; Richard, J.; Lewis, J.; Patrício, V.; Cantalupo, S.; Herenz, E. C.; Soto, K.; Weilbacher, P. M.; Bacon, R.; Vernet, J. D. R.; Wisotzki, L.; Clément, B.; Cuby, J. G.; Lagattuta, D. J.; Soucail, G.; Verhamme, A.
2016-05-01
Context. This paper presents the results obtained with the Multi Unit Spectroscopic Explorer (MUSE) for the core of the lensing cluster Abell 1689, as part of MUSE's commissioning at the ESO Very Large Telescope. Aims: Integral-field observations with MUSE provide a unique view of the central 1 × 1 arcmin2 region at intermediate spectral resolution in the visible domain, allowing us to conduct a complete census of both cluster galaxies and lensed background sources. Methods: We performed a spectroscopic analysis of all sources found in the MUSE data cube. Two hundred and eighty-two objects were systematically extracted from the cube based on a guided-and-manual approach. We also tested three different tools for the automated detection and extraction of line emitters. Cluster galaxies and lensed sources were identified based on their spectral features. We investigated the multiple-image configuration for all known sources in the field. Results: Prior to our survey, 28 different lensed galaxies displaying 46 multiple images were known in the MUSE field of view, most of them detected through photometric redshifts and lensing considerations. Of these, we spectroscopically confirm 12 images based on their emission lines, corresponding to 7 different lensed galaxies between z = 0.95 and 5.0. In addition, 14 new galaxies have been spectroscopically identified in this area thanks to MUSE data, with redshifts ranging between 0.8 and 6.2. All background sources detected within the MUSE field of view correspond to multiple-imaged systems lensed by A1689. Seventeen sources in total are found at z ≥ 3 based on their Lyman-α emission, with Lyman-α luminosities in the range 40.5 ≲ log L(Lyα) ≲ 42.5 after correction for magnification. This sample is particularly sensitive to the slope of the luminosity function toward the faintest end. The density of sources obtained in this survey is consistent with a steep value of α ≤ -1.5, although this result still
Abell 41: shaping of a planetary nebula by a binary central star
NASA Astrophysics Data System (ADS)
Jones, D.; Lloyd, M.; Santander-García, M.; López, J. A.; Meaburn, J.; Mitchell, D. L.; O'Brien, T. J.; Pollacco, D.; Rubio-Díez, M. M.; Vaytet, N. M. H.
2010-11-01
We present the first detailed spatiokinematical analysis and modelling of the planetary nebula Abell 41, which is known to contain the well-studied close-binary system MT Ser. This object represents an important test case in the study of the evolution of planetary nebulae with binary central stars as current evolutionary theories predict that the binary plane should be aligned perpendicular to the symmetry axis of the nebula. Deep narrow-band imaging in the light of [NII]6584Å, [OIII]5007 Å and [SII]6717+6731Å, obtained using ACAM on the William Herschel Telescope, has been used to investigate the ionization structure of Abell 41. Long-slit observations of the Hα and [NII]6584Å emission were obtained using the Manchester Echelle Spectrometer on the 2.1-m San Pedro Mártir Telescope. These spectra, combined with the narrow-band imagery, were used to develop a spatiokinematical model of [NII]6584Å emission from Abell 41. The best-fitting model reveals Abell 41 to have a waisted, bipolar structure with an expansion velocity of ~40 km s-1 at the waist. The symmetry axis of the model nebula is within 5° of perpendicular to the orbital plane of the central binary system. This provides strong evidence that the close-binary system, MT Ser, has directly affected the shaping of its nebula, Abell 41. Although the theoretical link between bipolar planetary nebulae and binary central stars is long established, this nebula is only the second to have this link, between nebular symmetry axis and binary plane, proved observationally. Based on observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. E-mail: david.jones-3@postgrad.manchester.ac.uk
NASA Astrophysics Data System (ADS)
Sergienko, Olga
2013-04-01
Since Doug MacAyeal's pioneering studies of the ice-stream basal traction optimizations by control methods, inversions for unknown parameters (e.g., basal traction, accumulation patterns, etc) have become a hallmark of the present-day ice-sheet modeling. The common feature of such inversion exercises is a direct relationship between optimized parameters and observations used in the optimization procedure. For instance, in the standard optimization for basal traction by the control method, ice-stream surface velocities constitute the control data. The optimized basal traction parameters explicitly appear in the momentum equations for the ice-stream velocities (compared to the control data). The inversion for basal traction is carried out by minimization of the cost (or objective, misfit) function that includes the momentum equations facilitated by the Lagrange multipliers. Here, we build upon this idea, and demonstrate how to optimize for parameters indirectly related to observed data using a suite of nested constraints (like Russian dolls) with additional sets of Lagrange multipliers in the cost function. This method opens the opportunity to use data from a variety of sources and types (e.g., velocities, radar layers, surface elevation changes, etc.) in the same optimization process.
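The direct control-method inversion described above can be caricatured in a few lines: a toy "momentum balance" u = τ/β links the optimized parameter (basal traction β) to the control data (surface velocities), and gradient descent on the misfit recovers β. All numbers here are invented for illustration, and the closed-form gradient stands in for what the Lagrange-multiplier (adjoint) machinery produces for real ice-sheet models:

```python
import numpy as np

# Toy "momentum equation": at each node, basal traction beta balances a known
# driving stress tau, so the modelled velocity is u = tau / beta.
tau = np.array([1.0, 2.0, 1.5, 0.8])
beta_true = np.array([0.5, 1.0, 0.75, 0.4])
u_obs = tau / beta_true            # synthetic "control data"

# Gradient descent on the misfit J(beta) = 0.5 * sum (u(beta) - u_obs)^2.
# dJ/dbeta = (u - u_obs) * du/dbeta = (u - u_obs) * (-tau / beta**2).
beta = np.ones_like(tau)           # first guess for the traction field
for _ in range(500):
    u = tau / beta                 # forward model
    grad = (u - u_obs) * (-tau / beta**2)
    beta -= 0.05 * grad            # descent step

print(beta)                        # converges toward beta_true
```

The "nested constraints" idea in the abstract generalizes this: when the optimized parameter only reaches the observations through a chain of model equations, each equation in the chain enters the cost function with its own set of Lagrange multipliers.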
NASA Astrophysics Data System (ADS)
Sambuelli, L.; Bohm, G.; Capizzi, P.; Cardarelli, E.; Cosentino, P.
2011-09-01
By late 2008, one of the most important pieces of the 'Museo delle Antichità Egizie' of Turin, the sculpture of the Pharaoh with the god Amun, was planned to be one of the masterpieces of a travelling exhibition in Japan. The 'Fondazione Museo delle Antichità Egizie di Torino', which manages the museum, was concerned with the integrity of the base of the statue, which presents visible signs of restoration dating back to the early 19th century. It was required to estimate the persistence of the visible fractures, to search for unknown ones and to provide information about the overall mechanical strength of the base. To tackle the first question, a GPR reflection survey along three sides of the base was performed and the results were assembled in a 3D rendering. As far as the second question is concerned, two parallel, horizontal ultrasonic 2D tomograms across the base were made. We acquired, for each section, 723 ultrasonic signals corresponding to different transmitter and receiver positions. The tomographic data were inverted using four different software packages based upon different algorithms. The obtained velocity images were then compared with each other, with the GPR results and with the visible fractures in the base. A critical analysis of the comparisons is finally presented.
VizieR Online Data Catalog: 1400-MHz Survey of 1478 Abell Clusters of Galaxies (Owen+ 1982)
NASA Astrophysics Data System (ADS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1994-03-01
This catalog contains observations of Abell clusters of galaxies which were obtained with the Green Bank 91-m telescope at 1400 MHz with an angular resolution of 10'x11' (RAxDEC). This catalog extends the sample of clusters originally published in Owen (1974AJ.....79..427O). The primary goals of this survey were to observe all Abell (1958ApJS....3..211A, Cat. VII/4) clusters with m10 (magnitude of the tenth brightest galaxy in the cluster) less than or equal to 17.0 and declinations north of -19 degrees, to observe all clusters with richness>=3 regardless of m10, and to obtain observations of a representative sample of the rest of the catalog (m10>=17.0; richness<=2). The abelclus.dat file contains ALL 957 detected sources (including those beyond 0.5 corrected Abell radii). Of these, 525 sources lie within 0.5 corrected Abell radii, while the published table1.dat file contains 487 entries corresponding to 485 distinct sources (in 442 clusters). The catalog entries contain the flux density at 1400 MHz, the Abell cluster number, richness class, distance class, m10, redshift estimate (z), corrected Abell cluster radius, right ascension (B1950), declination (B1950), deconvolved major and minor source axis lengths, position angle, and distance of the source from the cluster center. (2 data files).
UV Observations of the Galaxy Cluster Abell 1795 with the Optical Monitor on XMM-Newton
NASA Technical Reports Server (NTRS)
Mittaz, J. P. D.; Kaastra, J. S.; Tamura, T.; Fabian, A. C.; Mushotzky, F.; Peterson, J. R.; Ikebe, Y.; Lumb, D. H.; Paerels, F.; Stewart, G.
2000-01-01
We present the results of an analysis of broad band UV observations of the central regions of Abell 1795 observed with the optical monitor on XMM-Newton. As has been found in other UV observations of the central regions of clusters of galaxies, we find evidence for star formation. However, we also find evidence for absorption in the cD galaxy on a more extended scale than has been seen with optical imaging. We also report the first UV observation of part of the filamentary structure seen in H-alpha, X-rays and very deep U band imaging. The part of the filament we see is very blue, with UV colours consistent with a very early (O/B) stellar population. This is the first direct evidence of a dominant population of early type stars at the centre of Abell 1795 and implies very recent star formation. The relationship of this emission to emission at other wavebands is discussed.
A combined optical/X-ray study of the Galaxy cluster Abell 2256
NASA Technical Reports Server (NTRS)
Fabricant, Daniel G.; Kent, Stephen M.; Kurtz, Michael J.
1989-01-01
The dynamics of Abell 2256 is investigated by combining X-ray observations of the intracluster gas with optical observations of the galaxy distribution and kinematics. Magnitudes and positions are presented for 172 galaxies and new redshifts for 75. Abell 2256 is similar to the Coma Cluster in its X-ray luminosity, mass, and galaxy density. Both the X-ray surface brightness and the galaxy surface density distributions exhibit an elliptical morphology. The radial galaxy distribution is steeper than the density profile of the X-ray-emitting gas, yet the galaxy velocity dispersion is higher than the equivalent value for the gas. Under the simplest assumptions that the galaxy velocity distribution is isotropic and the gas is isothermal, the galaxies and gas cannot be in hydrostatic equilibrium in a common gravitational potential. Models consistent with available data have mass-to-light ratios which increase with radius and galaxy orbits that are anisotropic with a radial bias.
Anti-Brownian ELectrokinetic (ABEL) Trapping of Single High Density Lipoprotein (HDL) Particles
NASA Astrophysics Data System (ADS)
Bockenhauer, Samuel; Furstenberg, Alexandre; Wang, Quan; Devree, Brian; Jie Yao, Xiao; Bokoch, Michael; Kobilka, Brian; Sunahara, Roger; Moerner, W. E.
2010-03-01
The ABEL trap is a novel device for trapping single biomolecules in solution for extended observation. The trap estimates the position of a fluorescently labeled object as small as ~10 nm in solution and then applies a feedback electrokinetic drift every 20 μs to trap the object by canceling its Brownian motion. We use the ABEL trap to study HDL particles at the single-copy level. HDL particles, essential in the regulation of "good" cholesterol in humans, comprise a small (~10 nm) lipid bilayer disc bounded by a belt of apolipoproteins. By engineering HDL particles with single fluorescent donor/acceptor probes and varying lipid compositions, we are working to study lipid diffusion on small length scales. We also use HDL particles as hosts for single transmembrane receptors, which should enable the study of receptor conformational dynamics on long timescales.
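The feedback principle behind the trap — estimate the particle's position each cycle, then apply a drift that opposes it — can be illustrated with a 1D toy simulation. The step size, feedback gain, and units below are arbitrary assumptions for illustration, not the trap's actual operating parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 20000
sigma = 1.0        # r.m.s. Brownian displacement per feedback cycle (arbitrary units)
gain = 0.5         # assumed fraction of the estimated position cancelled per cycle

kicks = rng.normal(0.0, sigma, n_steps)

# Free particle: a pure random walk, whose spread grows without bound.
free = np.cumsum(kicks)

# Trapped particle: each cycle, a feedback drift opposes the estimated position.
x = 0.0
trapped = np.empty(n_steps)
for k in range(n_steps):
    x += kicks[k]          # Brownian kick during this cycle
    x -= gain * x          # electrokinetic feedback drift
    trapped[k] = x

print(np.var(free), np.var(trapped))
```

With feedback the position fluctuations settle to a small stationary variance instead of diffusing away, which is the essence of how the trap holds a ~10 nm object in solution.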
Applications of matrix inversion tomosynthesis
NASA Astrophysics Data System (ADS)
Warp, Richard J.; Godfrey, Devon J.; Dobbins, James T., III
2000-04-01
The improved image quality and characteristics of new flat- panel x-ray detectors have renewed interest in advanced algorithms such as tomosynthesis. Digital tomosynthesis is a method of acquiring and reconstructing a three-dimensional data set with limited-angle tube movement. Historically, conventional tomosynthesis reconstruction has suffered contamination of the planes of interest by blurred out-of- plane structures. This paper focuses on a Matrix Inversion Tomosynthesis (MITS) algorithm to remove unwanted blur from adjacent planes. The algorithm uses a set of coupled equations to solve for the blurring function in each reconstructed plane. This paper demonstrates the use of the MITS algorithm in three imaging applications: small animal microscopy, chest radiography, and orthopedics. The results of the MITS reconstruction process demonstrate an improved reduction of blur from out-of-plane structures when compared to conventional tomosynthesis. We conclude that the MITS algorithm holds potential in a variety of applications to improve three-dimensional image reconstruction.
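The coupled-equations idea behind MITS can be sketched in miniature: if each conventionally reconstructed plane is a known mixture of the true planes, inverting the mixing matrix unmixes them. The scalar coupling weights below are invented for illustration; actual MITS solves the analogous system with shift-dependent blur kernels:

```python
import numpy as np

rng = np.random.default_rng(1)
n_planes, n_pix = 4, 128

# True object: one distinct 1D structure per reconstruction plane.
truth = rng.random((n_planes, n_pix))

# Conventional tomosynthesis: each reconstructed plane is the true plane plus
# attenuated (here: scalar-weighted) contributions from the other planes.
C = np.array([[1.0 if i == j else 0.3 / (1 + abs(i - j)) for j in range(n_planes)]
              for i in range(n_planes)])
conventional = C @ truth          # planes contaminated by out-of-plane structure

# MITS-style step: invert the known coupling system to remove the contamination.
recovered = np.linalg.solve(C, conventional)
```

Because the coupling is known from the acquisition geometry, the inversion removes the out-of-plane contribution exactly in this noiseless toy; in practice regularization is needed where the coupling system is ill-conditioned.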
RELICS Discovery of a Probable Lens-magnified SN behind Galaxy Cluster Abell 1763
NASA Astrophysics Data System (ADS)
Rodney, S.; Coe, D.; Bradley, L.; Strolger, L.; Brammer, G.; Avila, R.; Ryan, R.; Ogaz, S.; Riess, A.; Sharon, K.; Johnson, T.; Paterno-Mahler, R.; Molino, A.; Graham, M.; Kelly, P.; Filippenko, A.; Frye, B.; Foley, R.; Schmidt, K.; Umetsu, K.; Czakon, N.; Weiner, B.; Stark, D.; Mainali, R.; Zitrin, A.; Sendra, I.; Graur, O.; Grillo, C.; Hjorth, J.; Selsing, J.; Christensen, L.; Rosati, P.; Nonino, M.; Balestra, I.; Vulcani, B.; McCully, C.; Dawson, W.; Bouwens, R.; Lam, D.; Trenti, M.; Nunez, D. Carrasco; Matheson, T.; Merten, J.; Jha, S.; Jones, C.; Andrade-Santos, F.; Salmon, B.; Bradac, M.; Hoag, A.; Huang, K.; Wang, X.; Oesch, P.
2016-07-01
We report the discovery of a likely supernova (SN) in the background field of the galaxy cluster Abell 1763 (a.k.a. RXC J1335.3+4059, ZwCl 1333.7+4117). The SN candidate was detected in Hubble Space Telescope (HST) observations collected on June 17, 2016 as part of the Reionization Lensing Cluster Survey (RELICS, HST program ID: 14096, PI: D.Coe).
An X-ray temperature map of Abell 754: A major merger
NASA Technical Reports Server (NTRS)
Henry, J. Patrick; Briel, Ulrich G.
1995-01-01
We present the first two-dimensional X-ray temperature map of the rich cluster of galaxies Abell 754. We also present an X-ray surface brightness map with improved spatial resolution and sensitivity compared with previous maps. Both the temperature map and the surface brightness map show that A754 is in the throes of a violent merger; it is probably far from hydrostatic equilibrium.
The Extraordinary Amount of Substructure in the Hubble Frontier Fields Cluster Abell 2744
NASA Astrophysics Data System (ADS)
Jauzac, M.; Eckert, D.; Schwinn, J.; Harvey, D.; Baugh, C. M.; Robertson, A.; Bose, S.; Massey, R.; Owers, M.; Ebeling, H.; Shan, H. Y.; Jullo, E.; Kneib, J.-P.; Richard, J.; Atek, H.; Clément, B.; Egami, E.; Israel, H.; Knowles, K.; Limousin, M.; Natarajan, P.; Rexroth, M.; Taylor, P.; Tchernin, C.
2016-09-01
We present a joint optical/X-ray analysis of the massive galaxy cluster Abell 2744 (z=0.308). Our strong- and weak-lensing analysis within the central region of the cluster, i.e., at R < 1 Mpc from the brightest cluster galaxy, reveals eight substructures, including the main core. All of these dark-matter halos are detected with a significance of at least 5σ and feature masses ranging from 0.5 to 1.4× 1014M⊙ within R < 150 kpc. The substructures reported by Merten et al. (2011) and Medezinski et al. (2016) are also detected in our analysis. We measure a slightly higher mass for the main core component than reported previously and attribute the discrepancy to the inclusion of our tightly constrained strong-lensing mass model built on Hubble Frontier Fields data. X-ray data obtained by XMM-Newton reveal four remnant cores, one of them a new detection, and three shocks. Unlike Merten et al. (2011), we find all cores to have both dark and luminous counterparts. A comparison with clusters of similar mass in the MXXL simulations yields no objects with as many massive substructures as observed in Abell 2744, confirming that Abell 2744 is an extreme system. We stress that these properties still do not constitute a challenge to ΛCDM, as caveats apply to both the simulation and the observations: for instance, the projected mass measurements from gravitational lensing and the limited resolution of the sub-halo finders. We discuss implications of Abell 2744 for the plausibility of different dark-matter candidates and, finally, measure a new upper limit on the self-interaction cross-section of dark matter of σDM < 1.28 cm2g-1 (68% CL), in good agreement with previous results from Harvey et al. (2015).
Proof of polar ejection from the close-binary core of the planetary nebula Abell 63
NASA Astrophysics Data System (ADS)
Mitchell, Deborah L.; Pollacco, Don; O'Brien, T. J.; Bryce, M.; López, J. A.; Meaburn, J.; Vaytet, N. M. H.
2007-02-01
We present the first detailed kinematical analysis of the planetary nebula Abell 63, which is known to contain the eclipsing close-binary nucleus UU Sge. Abell 63 provides an important test case in investigating the role of close-binary central stars in the evolution of planetary nebulae. Long-slit observations were obtained using the Manchester echelle spectrometer combined with the 2.1-m San Pedro Mártir Telescope. The spectra reveal that the central bright rim of Abell 63 has a tube-like structure. A deep image shows collimated lobes extending from the nebula, which are shown to be high-velocity outflows. The kinematic ages of the nebular rim and the extended lobes are calculated to be 8400 +/- 500 and 12900 +/- 2800 yr, respectively, which suggests that the lobes were formed at an earlier stage than the nebular rim. This is consistent with expectations that disc-generated jets form immediately after the common envelope phase. A morphological-kinematical model of the central nebula is presented and the best-fitting model is found to have the same inclination as the orbital plane of the central binary system; this is the first proof that a close-binary system directly affects the shaping of its nebula. A Hubble-type flow is well-established in the morphological-kinematical modelling of the observed line profiles and imagery. Two possible formation models for the elongated lobes of Abell 63 are considered: (i) a low-density, pressure-driven jet excavates a cavity in the remnant asymptotic giant branch (AGB) envelope; (ii) high-density bullets form the lobes in a single ballistic ejection event.
Chandra Observation of Abell 1142: A Cool-core Cluster Lacking a Central Brightest Cluster Galaxy?
NASA Astrophysics Data System (ADS)
Su, Yuanyuan; Buote, David A.; Gastaldello, Fabio; van Weeren, Reinout
2016-04-01
Abell 1142 is a low-mass galaxy cluster at low redshift containing two comparable brightest cluster galaxies (BCGs) resembling a scaled-down version of the Coma Cluster. Our Chandra analysis reveals an X-ray emission peak, roughly 100 kpc away from either BCG, which we identify as the cluster center. The emission center manifests itself as a second beta-model surface brightness component distinct from that of the cluster on larger scales. The center is also substantially cooler and more metal-rich than the surrounding intracluster medium (ICM), which makes Abell 1142 appear to be a cool-core cluster. The redshift distribution of its member galaxies indicates that Abell 1142 may contain two subclusters, each of which contain one BCG. The BCGs are merging at a relative velocity of ≈1200 km s^-1. This ongoing merger may have shock-heated the ICM from ≈2 keV to above 3 keV, which would explain the anomalous L_X-T_X scaling relation for this system. This merger may have displaced the metal-enriched “cool core” of either of the subclusters from the BCG. The southern BCG consists of three individual galaxies residing within a radius of 5 kpc in projection. These galaxies should rapidly sink into the subcluster center due to the dynamical friction of a cuspy cold dark matter halo.
Deciphering the bipolar planetary nebula Abell 14 with 3D ionization and morphological studies
NASA Astrophysics Data System (ADS)
Akras, S.; Clyne, N.; Boumis, P.; Monteiro, H.; Gonçalves, D. R.; Redman, M. P.; Williams, S.
2016-04-01
Abell 14 is a poorly studied object despite being considered a born-again planetary nebula. We performed a detailed study of its 3D morphology and ionization structure using the SHAPE and MOCASSIN codes. We found that Abell 14 is a highly evolved, bipolar nebula with a kinematical age of ˜19 400 yr for a distance of 4 kpc. The high He abundance, and N/O ratio indicate a progenitor of 5 M⊙ that has experienced the third dredge-up and hot bottom burning phases. The stellar parameters of the central source reveal a star at a highly evolved stage near to the white dwarf cooling track, being inconsistent with the born-again scenario. The nebula shows unexpectedly strong [N I] λ5200 and [O I] λ6300 emission lines indicating possible shock interactions. Abell 14 appears to be a member of a small group of highly evolved, extreme type-I planetary nebulae (PNe). The members of this group lie at the lower-left corner of the PNe regime on the [N II]/Hα versus [S II]/Hα diagnostic diagram, where shock-excited regions/objects are also placed. The low luminosity of their central stars, in conjunction with the large physical size of the nebulae, result in a very low photoionization rate, which can make any contribution of shock interaction easily perceptible, even for small velocities.
ASCA Temperature Maps of Three Clusters of Galaxies: Abell 1060, AWM 7, and the Centaurus Cluster
NASA Astrophysics Data System (ADS)
Furusho, Tae; Yamasaki, Noriko Y.; Ohashi, Takaya; Shibata, Ryo; Kagei, Tomohiro; Ishisaki, Yoshitaka; Kikuchi, Ken'ichi; Ezawa, Hajime; Ikebe, Yasushi
2001-06-01
We present two-dimensional temperature maps of three bright clusters of galaxies (Abell 1060, AWM 7, and the Centaurus cluster), based on multi-pointing observations with the ASCA GIS. The temperatures were derived from hardness ratios by taking into account the XRT response. For the Centaurus cluster, we subtracted the central cool component using the previous ASCA and ROSAT results, and the metallicity gradients observed in AWM 7 and the Centaurus cluster were included in deriving the temperatures. The intracluster medium in Abell 1060 and AWM 7 is almost isothermal from the center to the outer regions, with temperatures of 3.3 and 3.9 keV, respectively. The Centaurus cluster exhibits remarkable hot regions within about 30' of the cluster center, showing a temperature increase of ≈0.8 keV above the surrounding level of 3.5 keV, and outer cool regions with temperatures lower by ≈1.3 keV. These results imply that a strong merger has occurred in the Centaurus cluster within the last 2-3 Gyr, and that the central cool component has survived it. In contrast, the gas in Abell 1060 was well mixed in an early period, which probably has prevented the development of a central cool component. In AWM 7, mixing of the gas should have occurred in a period earlier than the epoch of metal enrichment.
NASA Astrophysics Data System (ADS)
Nath, Saurabh; Mukherjee, Anish; Chatterjee, Souvick; Ganguly, Ranjan; Sen, Swarnendu; Mukhopadhyay, Achintya; Boreyko, Jonathan
2014-11-01
We have observed that capillary forces may cause flotation in a few non-intuitive configurations. These may be divided into two categories: i) flotation of heavier liquid droplets on lighter immiscible ones and ii) fully submerged flotation of lighter liquid droplets in a heavier immiscible medium. We call these counter-intuitive because of the inverse flotation configuration. For case (i) we identified and studied in detail the several factors affecting the shape and maximum volume of the floating drop. We used water and vegetable oil combinations as test fluids and established the relation between Bond number and the maximum volume contained in a floating drop (on the order of μL). For case (ii), we injected vegetable oil dropwise into a pool of water. The fully submerged configuration of the drop is not stable, and a slight perturbation to the system causes the droplet to burst and float in a partially submerged condition. The temporal variation of a characteristic length of the droplet is analyzed using MATLAB image processing. The constraint of small Bond number justifies the assumption of a lubrication regime in the thin gap. A brief theoretical formulation also shows the temporal variation of the gap thickness. Jadavpur University, Jagadis Bose Centre of Excellence, Virginia Tech.
Probing single biomolecules in solution using the anti-Brownian electrokinetic (ABEL) trap.
Wang, Quan; Goldsmith, Randall H; Jiang, Yan; Bockenhauer, Samuel D; Moerner, W E
2012-11-20
Single-molecule fluorescence measurements allow researchers to study asynchronous dynamics and expose molecule-to-molecule structural and behavioral diversity, which contributes to the understanding of biological macromolecules. To provide measurements that are most consistent with the native environment of biomolecules, researchers would like to conduct these measurements in the solution phase if possible. However, diffusion typically limits the observation time to approximately 1 ms in many solution-phase single-molecule assays. Although surface immobilization is widely used to address this problem, this process can perturb the system being studied and contribute to the observed heterogeneity. Combining the technical capabilities of high-sensitivity single-molecule fluorescence microscopy, real-time feedback control and electrokinetic flow in a microfluidic chamber, we have developed a device called the anti-Brownian electrokinetic (ABEL) trap to significantly prolong the observation time of single biomolecules in solution. We have applied the ABEL trap method to explore the photodynamics and enzymatic properties of a variety of biomolecules in aqueous solution and present four examples: the photosynthetic antenna allophycocyanin, the chaperonin enzyme TRiC, a G protein-coupled receptor protein, and the blue nitrite reductase redox enzyme. These examples illustrate the breadth and depth of information which we can extract in studies of single biomolecules with the ABEL trap. When confined in the ABEL trap, the photosynthetic antenna protein allophycocyanin exhibits rich dynamics both in its emission brightness and its excited state lifetime. As each molecule discontinuously converts from one emission/lifetime level to another in a primarily correlated way, it undergoes a series of state changes. We studied the ATP binding stoichiometry of the multi-subunit chaperonin enzyme TRiC in the ABEL trap by counting the number of hydrolyzed Cy3-ATP using stepwise
A comparison of techniques for inversion of radio-ray phase data in presence of ray bending
NASA Technical Reports Server (NTRS)
Wallio, H. A.; Grossi, M. D.
1972-01-01
Derivations are presented of the straight-line Abel transform and the seismological Herglotz-Wiechert transform (which takes ray bending into account) that are used in the reconstruction of refractivity profiles from radio-wave phase data. Profile inversions utilizing these approaches, performed in computer-simulated experiments, are compared for cases of positive, zero, and negative ray bending. For thin atmospheres and ionospheres, such as the Martian atmosphere and ionosphere, radio wave signals are shown to be inverted accurately with both methods. For dense media, such as the solar corona or the lower Venus atmosphere, the refractivity profiles recovered by the seismological Herglotz-Wiechert transform provide a significant improvement over the straight-line Abel transform.
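The straight-line Abel transform lends itself to a compact numerical illustration. The sketch below is a generic "onion-peeling" discretization in Python; it is not the Herglotz-Wiechert method and is not taken from the paper above, and the grid and uniform test profile are assumptions of the demo. The line-integrated data are modeled as chord lengths of each ray through concentric spherical shells, and the radial profile is recovered by solving the resulting triangular linear system.

```python
import numpy as np

def abel_matrix(r_edges, y):
    # A[i, j] = chord length of the ray at impact parameter y[i]
    # through the spherical shell r_edges[j] .. r_edges[j+1]
    n = len(y)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            lo, hi = r_edges[j], r_edges[j + 1]
            if hi > y[i]:
                inner = max(lo**2 - y[i]**2, 0.0)
                A[i, j] = 2.0 * (np.sqrt(hi**2 - y[i]**2) - np.sqrt(inner))
    return A

n = 50
r_edges = np.linspace(0.0, 1.0, n + 1)
y = 0.5 * (r_edges[:-1] + r_edges[1:])   # one ray per shell midpoint
f_true = np.ones(n)                      # uniform profile inside r = 1
A = abel_matrix(r_edges, y)
F = A @ f_true                           # forward (line-integrated) data
f_rec = np.linalg.solve(A, F)            # inversion: peel shells outside-in
print(np.max(np.abs(f_rec - f_true)))    # recovers the input profile
```

Because each ray only intersects shells outside its impact parameter, the matrix is triangular, which is why the classic onion-peeling scheme can solve for the profile one shell at a time from the outside in.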
Application of the least-squares inversion method: Fourier series versus waveform inversion
NASA Astrophysics Data System (ADS)
Min, Dong-Joo; Shin, Jungkyun; Shin, Changsoo
2015-11-01
We describe an implicit link between waveform inversion and Fourier series based on inversion methods such as gradient, Gauss-Newton, and full Newton methods. Fourier series have been widely used as a basic concept in studies on seismic data interpretation, and their coefficients are obtained in the classical Fourier analysis. We show that Fourier coefficients can also be obtained by inversion algorithms, and compare the method to seismic waveform inversion algorithms. In that case, Fourier coefficients correspond to model parameters (velocities, density or elastic constants), whereas cosine and sine functions correspond to components of the Jacobian matrix, that is, partial derivative wavefields in seismic inversion. In the classical Fourier analysis, optimal coefficients are determined by the sensitivity of a given function to sine and cosine functions. In the inversion method for Fourier series, Fourier coefficients are obtained by measuring the sensitivity of residuals between given functions and test functions (defined as the sum of weighted cosine and sine functions) to cosine and sine functions. The orthogonal property of cosine and sine functions makes the full or approximate Hessian matrix become a diagonal matrix in the inversion for Fourier series. In seismic waveform inversion, the Hessian matrix may or may not be a diagonal matrix, because partial derivative wavefields correlate with each other to some extent, making them semi-orthogonal. At the high-frequency limits, however, the Hessian matrix can be approximated by either a diagonal matrix or a diagonally-dominant matrix. Since we usually deal with relatively low frequencies in seismic waveform inversion, it is not diagonally dominant and thus it is prohibitively expensive to compute the full or approximate Hessian matrix. By interpreting Fourier series with the inversion algorithms, we note that the Fourier series can be computed at an iteration step using any inversion algorithms such as the
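The link between Fourier analysis and inversion described above can be illustrated with a short sketch (my own toy example, not the authors' code): treating the cosine and sine basis functions as columns of a Jacobian, their discrete orthogonality over one period makes the Gauss-Newton Hessian diagonal, so a single least-squares step recovers the classical Fourier coefficients.

```python
import numpy as np

# Fit Fourier coefficients by least squares ("inversion") rather than by
# the classical projection integrals. Columns of the Jacobian J are the
# basis functions cos(kx), sin(kx); on a uniform grid over one period
# they are orthogonal, so the Hessian J.T @ J is diagonal.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
d = 3.0 * np.cos(2 * x) - 1.5 * np.sin(5 * x)       # "observed data"

K = 8
J = np.column_stack([np.cos(k * x) for k in range(1, K + 1)] +
                    [np.sin(k * x) for k in range(1, K + 1)])

H = J.T @ J                           # Gauss-Newton (approximate) Hessian
coeffs = np.linalg.solve(H, J.T @ d)  # one Gauss-Newton step from m = 0

off_diag = H - np.diag(np.diag(H))
print(np.max(np.abs(off_diag)))       # ~0: the basis is orthogonal
print(coeffs[1], coeffs[K + 4])       # coefficients of cos(2x) and sin(5x)
```

The recovered coefficients match the amplitudes 3.0 and -1.5 used to build the data; in seismic waveform inversion the analogous columns (partial derivative wavefields) are only approximately orthogonal, which is exactly the contrast the abstract draws.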
Two-wavelength lidar inversion algorithm.
Kunz, G J
1999-02-20
Potter [Appl. Opt. 26, 1250 (1987)] has presented a method to determine profiles of the atmospheric aerosol extinction coefficients by use of a two-wavelength lidar with the assumptions of a constant value for the extinction-to-backscatter ratio for each wavelength and a constant value for the ratio between the two extinction coefficients at the two wavelengths. Triggered by this idea, Ackermann [Appl. Opt. 36, 5134 (1997)] expanded this method to consider lidar returns that are a composition of scattering by atmospheric aerosols and molecules, assuming that the molecular scattering is known. In both papers the method is based on the well-known solutions of Bernoulli's differential equation in an iterative scheme with an unknown boundary transmission condition. This boundary condition is less sensitive to noise than boundary extinction conditions. My main purpose is to critically consider the principle behind Potter's method, because it seems that there are several reasons why the number of solutions is not limited to one, as suggested by his original work.
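For context, the Bernoulli-equation solution that both Potter's and Ackermann's schemes build on can be sketched in the single-wavelength, backward (Klett-type) form. Everything below, including the synthetic homogeneous atmosphere, range grid, and the assumption that the far-range boundary extinction is known exactly, is an illustrative construction rather than either paper's actual method.

```python
import numpy as np

# Backward (Klett-type) solution of the single-scattering lidar equation,
# assuming a constant extinction-to-backscatter ratio and a known boundary
# value alpha_m at the far end of the range interval.
r = np.linspace(0.5, 10.0, 500)            # range gates [km]
alpha_true = 0.1                           # homogeneous extinction [1/km]
P = np.exp(-2.0 * alpha_true * r) / r**2   # synthetic lidar return

X = np.log(P * r**2)                       # range-corrected log signal
alpha_m = alpha_true                       # boundary value at r_m = r[-1]
E = np.exp(X - X[-1])

# cumulative trapezoid of E from each r out to r_m
I = np.concatenate(([0.0], np.cumsum(0.5 * (E[1:] + E[:-1]) * np.diff(r))))
I = I[-1] - I
alpha = E / (1.0 / alpha_m + 2.0 * I)
print(np.max(np.abs(alpha - alpha_true)))  # small discretization error
```

With an exact boundary value the constant extinction profile is recovered to within the trapezoid-rule error; the boundary-condition sensitivity that the abstract discusses enters through the chosen alpha_m.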
Multidimensional NMR inversion without Kronecker products: Multilinear inversion.
Medellín, David; Ravi, Vivek R; Torres-Verdín, Carlos
2016-08-01
Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion. PMID:27209370
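The dimension-by-dimension idea can be sketched in a few lines on a hypothetical toy problem (not the authors' implementation): for separable 2-D kernels the forward model D = K1 @ M @ K2.T equals the Kronecker form (K2 ⊗ K1) vec(M), but applying the kernels along each dimension never materializes the large product, and, as the abstract notes, only a cost function and its gradient are needed for a minimization-based inversion.

```python
import numpy as np

# Multilinear forward model for a 2-D inversion: D = K1 @ M @ K2.T,
# computed dimension-by-dimension instead of via a Kronecker product.
K1 = np.exp(-np.outer(np.linspace(0, 3, 30), np.linspace(0.5, 5, 20)))
K2 = np.exp(-np.outer(np.linspace(0, 3, 25), np.linspace(0.5, 5, 15)))
M_true = np.zeros((20, 15))
M_true[5, 7], M_true[12, 3] = 1.0, 0.5     # sparse "T1-T2"-like model
D = K1 @ M_true @ K2.T

lam = 1e-4                                 # Tikhonov regularization weight
M = np.zeros_like(M_true)
# step = 1/L, with L the Lipschitz constant of the cost gradient
step = 0.5 / (np.linalg.norm(K1, 2)**2 * np.linalg.norm(K2, 2)**2 + lam)

def cost(M):
    R = K1 @ M @ K2.T - D
    return np.sum(R * R) + lam * np.sum(M * M)

c0 = cost(M)
for _ in range(2000):                      # projected gradient descent
    R = K1 @ M @ K2.T - D
    grad = 2.0 * K1.T @ R @ K2 + 2.0 * lam * M
    M = np.maximum(M - step * grad, 0.0)   # non-negativity constraint
print(cost(M) / c0)                        # misfit is a small fraction of c0
```

The gradient 2 K1ᵀ R K2 + 2λM is the multilinear analogue of Jᵀr for the flattened problem; the non-negativity projection plays the role that LH or BRD would in the conventional formulation.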
The SAMI Pilot Survey: stellar kinematics of galaxies in Abell 85, 168 and 2399
NASA Astrophysics Data System (ADS)
Fogarty, L. M. R.; Scott, N.; Owers, M. S.; Croom, S. M.; Bekki, K.; Houghton, R. C. W.; van de Sande, J.; D'Eugenio, F.; Cecil, G. N.; Colless, M. M.; Bland-Hawthorn, J.; Brough, S.; Cortese, L.; Davies, R. L.; Jones, D. H.; Pracy, M.; Allen, J. T.; Bryant, J. J.; Goodwin, M.; Green, A. W.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Richards, S.; Sharp, R. G.
2015-12-01
We present the SAMI Pilot Survey, consisting of integral field spectroscopy of 106 galaxies across three galaxy clusters, Abell 85, Abell 168 and Abell 2399. The galaxies were selected by absolute magnitude to have Mr < -20.25 mag. The survey, using the Sydney-AAO Multi-object Integral field spectrograph (SAMI), comprises observations of galaxies of all morphological types with 75 per cent of the sample being early-type galaxies (ETGs) and 25 per cent being late-type galaxies (LTGs). Stellar velocity and velocity dispersion maps are derived for all 106 galaxies in the sample. The λR parameter, a proxy for the specific stellar angular momentum, is calculated for each galaxy in the sample. We find a trend between λR and galaxy concentration such that LTGs are less concentrated, higher angular momentum systems, with the fast-rotating ETGs (FRs) more concentrated and lower in angular momentum. This suggests that some dynamical processes are involved in transforming LTGs to FRs, though a significant overlap between the λR distributions of these classes of galaxies implies that this is just one piece of a more complicated picture. We measure the kinematic misalignment angle, Ψ, for the ETGs in the sample, to probe the intrinsic shapes of the galaxies. We find the majority of FRs (83 per cent) to be aligned, consistent with them being oblate spheroids (i.e. discs). The slow-rotating ETGs (SRs), on the other hand, are significantly more likely to show kinematic misalignment (only 38 per cent are aligned). This confirms previous results that SRs are likely to be mildly triaxial systems.
Two long H I tails in the outskirts of Abell 1367
NASA Astrophysics Data System (ADS)
Scott, T. C.; Cortese, L.; Brinks, E.; Bravo-Alfaro, H.; Auld, R.; Minchin, R.
2012-01-01
We present VLA D-array H I observations of the RSCG 42 and FGC 1287 galaxy groups, in the outskirts of the Abell 1367 cluster. These groups are projected ˜1.8 and 2.7 Mpc west from the cluster centre. The Arecibo Galaxy Environment Survey provided evidence for H I extending over as much as 200 kpc in both groups. Our new, higher resolution observations reveal that the complex H I features detected by Arecibo are in reality two extraordinarily long H I tails extending for ˜160 and 250 kpc, respectively, i.e. among the longest H I structures ever observed in groups of galaxies. Although in the case of RSCG 42 the morphology and dynamics of the H I tail, as well as the optical properties of the group members, support a low-velocity tidal interaction scenario, less clear is the origin of the unique features associated with FGC 1287. This galaxy displays an exceptionally long 'dog leg' H I tail, and the large distance from the X-ray-emitting region of Abell 1367 makes a ram-pressure stripping scenario highly unlikely. At the same time, a low-velocity tidal interaction seems unable to explain the extraordinary length of the tail and the lack of any sign of disturbance in the optical properties of FGC 1287. An intriguing possibility could be that this galaxy might have recently experienced a high-speed interaction with another member of the Coma-Abell 1367 Great Wall. We searched for the interloper responsible for this feature and, although we find a possible candidate, we show that without additional observations it is impossible to settle this issue. While the mechanism responsible for this extraordinary H I tail remains to be determined, our discovery highlights how little we know about environmental effects in galaxy groups.
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversion is to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm, simulated annealing (SA), for the watershed shows that both methods deliver similarly good results but the LCI algorithm runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
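The column-weighting idea can be illustrated generically (a toy random Jacobian with deliberately disparate parameter sensitivities, not the paper's CSAMT kernels): scaling each Jacobian column to unit norm with a diagonal weighting matrix balances the sensitivities and drastically improves the conditioning of the approximate Hessian before a Gauss-Newton update.

```python
import numpy as np

# Sketch of Jacobian preconditioning: scale each column to unit norm
# with a diagonal weighting matrix W, then solve in the scaled variables
# and undo the scaling to recover the physical model update.
rng = np.random.default_rng(1)
scales = np.array([1e3, 1.0, 1e-2, 5.0, 1e-4])  # disparate sensitivities
J = rng.standard_normal((40, 5)) * scales

w = 1.0 / np.linalg.norm(J, axis=0)   # diagonal of W
Jw = J * w                            # J @ W, columns of unit norm

print(np.linalg.cond(J.T @ J))        # very large without weighting
print(np.linalg.cond(Jw.T @ Jw))      # modest after weighting

d = rng.standard_normal(40)           # hypothetical data residual
dm = w * np.linalg.solve(Jw.T @ Jw, Jw.T @ d)  # physical model update
```

Mathematically the weighted and unweighted normal equations define the same update; numerically, the well-conditioned weighted system is the one a gradient-based scheme can actually converge on, which is the motivation the abstract gives.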
The X-ray luminosity functions of Abell clusters from the Einstein Cluster Survey
NASA Technical Reports Server (NTRS)
Burg, R.; Giacconi, R.; Forman, W.; Jones, C.
1994-01-01
We have derived the present epoch X-ray luminosity function of northern Abell clusters using luminosities from the Einstein Cluster Survey. The sample is sufficiently large that we can determine the luminosity function for each richness class separately with sufficient precision to study and compare the different luminosity functions. We find that, within each richness class, the range of X-ray luminosity is quite large and spans nearly a factor of 25. Characterizing the luminosity function for each richness class with a Schechter function, we find that the characteristic X-ray luminosity, L*, scales with richness class as L* ∝ N*^γ, where N* is the corrected mean number of galaxies in a richness class, and the best-fitting exponent is γ = 1.3 +/- 0.4. Finally, our analysis suggests that there is a lower limit to the X-ray luminosity of clusters which is determined by the integrated emission of the cluster member galaxies, and this also scales with richness class. The present sample forms a baseline for testing cosmological evolution of Abell-like clusters when an appropriate high-redshift cluster sample becomes available.
Methodology for determining multilayered temperature inversions
NASA Astrophysics Data System (ADS)
Fochesatto, G. J.
2015-05-01
Temperature sounding of the atmospheric boundary layer (ABL) and lower troposphere exhibits multilayered temperature inversions, especially at high latitudes during extreme winters. These temperature inversion layers originate from the combined forcing of local- and large-scale synoptic meteorology. At the local scale, the thermal inversion layer forms near the surface and plays a central role in controlling surface radiative cooling and air pollution dispersion; however, depending upon the large-scale synoptic meteorological forcing, an upper-level thermal inversion can also exist, topping the local ABL. In this article a numerical methodology is reported to determine the thermal inversion layers present in a given temperature profile and deduce some of their thermodynamic properties. The algorithm extracts from the temperature profile the most important temperature variations defining thermal inversion layers. This is accomplished by a linear interpolation function of variable length that minimizes an error function. The algorithm's functionality is demonstrated on actual radiosonde profiles to deduce the multilayered temperature inversion structure with an independently set error fraction.
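A much simplified sketch of the layer-detection step might look like the following. This is only a baseline: it flags contiguous height ranges where temperature increases with height, whereas the paper's algorithm additionally fits variable-length linear segments by minimizing an error function. The synthetic sounding is an assumption of the demo.

```python
import numpy as np

def inversion_layers(z, T):
    """Return (z_base, z_top, dT) for each layer where T rises with height."""
    layers = []
    i = 0
    while i < len(z) - 1:
        if T[i + 1] > T[i]:                # start of an inversion layer
            j = i
            while j < len(z) - 1 and T[j + 1] > T[j]:
                j += 1                     # extend to the layer top
            layers.append((z[i], z[j], T[j] - T[i]))
            i = j
        else:
            i += 1
    return layers

# Synthetic sounding: surface-based inversion up to 200 m, normal lapse
# above, and an elevated inversion between 800 m and 1000 m.
z = np.arange(0.0, 2000.0, 100.0)          # height [m]
T = np.where(z <= 200, -20.0 + 0.02 * z,
    np.where(z <= 800, -16.0 - 0.005 * (z - 200),
    np.where(z <= 1000, -19.0 + 0.01 * (z - 800),
             -17.0 - 0.006 * (z - 1000))))
layers = inversion_layers(z, T)
print(layers)   # two layers: surface-based (0-200 m) and elevated (800-1000 m)
```

The returned strength dT and base/top heights are the kind of thermodynamic layer properties the methodology deduces; the paper's error-minimizing segment fit additionally suppresses spurious micro-layers from noisy radiosonde data.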
Rapid approximate inversion of airborne TEM
NASA Astrophysics Data System (ADS)
Fullagar, Peter K.; Pears, Glenn A.; Reid, James E.; Schaa, Ralf
2015-11-01
Rapid interpretation of large airborne transient electromagnetic (ATEM) datasets is highly desirable for timely decision-making in exploration. Full solution 3D inversion of entire airborne electromagnetic (AEM) surveys is often still not feasible on current day PCs. Therefore, two algorithms to perform rapid approximate 3D interpretation of AEM have been developed. The loss of rigour may be of little consequence if the objective of the AEM survey is regional reconnaissance. Data coverage is often quasi-2D rather than truly 3D in such cases, belying the need for `exact' 3D inversion. Incorporation of geological constraints reduces the non-uniqueness of 3D AEM inversion. Integrated interpretation can be achieved most readily when inversion is applied to a geological model, attributed with lithology as well as conductivity. Geological models also offer several practical advantages over pure property models during inversion. In particular, they permit adjustment of geological boundaries. In addition, optimal conductivities can be determined for homogeneous units. Both algorithms described here can operate on geological models; however, they can also perform `unconstrained' inversion if the geological context is unknown. VPem1D performs 1D inversion at each ATEM data location above a 3D model. Interpretation of cover thickness is a natural application; this is illustrated via application to Spectrem data from central Australia. VPem3D performs 3D inversion on time-integrated (resistive limit) data. Conversion to resistive limits delivers a massive increase in speed since the TEM inverse problem reduces to a quasi-magnetic problem. The time evolution of the decay is lost during the conversion, but the information can be largely recovered by constructing a starting model from conductivity depth images (CDIs) or 1D inversions combined with geological constraints if available. The efficacy of the approach is demonstrated on Spectrem data from Brazil. Both separately and in
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
ERIC Educational Resources Information Center
Blasingame, Gerry D.; Abel, Gene G.; Jordan, Alan; Wiegel, Markus
2011-01-01
This article describes the development and utility of the Abel-Blasingame Assessment System for "individuals with intellectual disabilities" (ABID) for assessment of sexual interest and problematic sexual behaviors. The study examined the preliminary psychometric properties and evaluated the clinical utility of the ABID based on a sample of 495…
Efficient 2d full waveform inversion using Fortran coarray
NASA Astrophysics Data System (ADS)
Ryu, Donghyun; Kim, Ahreum; Ha, Wansoo
2016-04-01
We developed a time-domain seismic inversion program using the coarray feature of the Fortran 2008 standard to parallelize the algorithm. We converted a 2D acoustic parallel full waveform inversion program based on the Message Passing Interface (MPI) to a coarray program and examined the performance of the two inversion programs. The results show that the waveform inversion program using coarrays is slightly faster than the MPI version. The coarray standard currently lacks features for collective communication; however, these may be added in future revisions, since coarrays were introduced only recently. The parallel algorithm can be applied to 3D seismic data processing.
Acoustic Inversion in Optoacoustic Tomography: A Review
Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel
2013-01-01
Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
Mass dependent galaxy transformation mechanisms in the complex environment of SuperGroup Abell 1882
NASA Astrophysics Data System (ADS)
Sengupta, Aparajita
We present our data and results from panchromatic photometry and optical spectrometry of the nearest (extremely rich) filamentary large-scale structure, SuperGroup Abell 1882. It is a precursor of a cluster and is an inevitable part of the narrative in the study of galaxy transformations. There has been strong empirical evidence over the past three decades that galaxy environment affects galaxy properties. Blue disky galaxies transform into red bulge-like galaxies as they traverse into the deeper recesses of a cluster. However, we have little insight into the story of galaxy evolution in the early stages of cluster formation. Moreover, in relaxed clusters that have been studied extensively, several evolutionary mechanisms take effect on similar spatial and temporal scales, making it almost impossible to disentangle different local and global mechanisms. A SuperGroup, on the other hand, has a shallower dark-matter potential. Here, the accreting galaxies are subjected to evolutionary mechanisms over larger time and spatial scales. This separates processes that are otherwise superimposed in rich cluster-filament interfaces. As has been found in cluster studies, galaxy color and morphology tie strongly to local galaxy density even in a complex and nascent structure like Abell 1882. Our major results indicate that there is a strong dependence of galaxy transformations on the galaxy masses themselves. Mass-dependent evolutionary mechanisms affect galaxies at different spatial scales. The galaxy color also varies with projected radial distance from the assumed center of the structure at constant local galaxy density, indicating the underlying large-scale structure as a second-order evolutionary driver. We have looked for clues to the types of mechanisms that might cause the transformations in various mass regimes. We find the thoroughly quenched low-mass galaxies confined to the groups, whereas there is evidence of intermediate-mass quenched galaxies
Inverse anticipating chaos synchronization.
Shahverdiev, E M; Sivaprakasam, S; Shore, K A
2002-07-01
We derive conditions for achieving inverse anticipating synchronization where a driven time-delay chaotic system synchronizes to the inverse future state of the driver. The significance of inverse anticipating chaos in delineating synchronization regimes in time-delay systems is elucidated. The concept is extended to cascaded time-delay systems.
Applications of inverse pattern projection
NASA Astrophysics Data System (ADS)
Li, Wansong; Bothe, Thorsten; Kalms, Michael K.; von Kopylow, Christoph; Jueptner, Werner P. O.
2003-05-01
Fast and robust 3D quality control as well as fast deformation measurement is of particular importance for industrial inspection. Additionally, a direct response about measured properties is desired. Therefore, robust optical techniques are needed which use as few images as possible for measurement and visualize results in an efficient way. One promising technique for this aim is inverse pattern projection, which has the following advantages: The technique codes the information of a preceding measurement into the projected inverse pattern. Thus, it is possible to do differential measurements using only one camera frame for each state. Additionally, the results are optimized straight fringes for sampling which are independent of the object curvature. The hardware needs are low, as just a programmable projector and a standard camera are necessary. The basic idea of inverse pattern projection, the necessary algorithms and the optimizations found are briefly demonstrated. Evaluation techniques were found to preserve a high quality phase measurement under imperfect conditions. The different application fields can be sorted out by the type of pattern used for inverse projection. We select two main topics for presentation. One is incremental (one image per state) deformation measurement, which is a promising technique for high speed deformation measurements. A video series of a wavering flag with projected inverse pattern was evaluated to show the complete deformation series. The other application is optical feature marking (augmented reality), which allows any measured result to be mapped directly onto the object under investigation. Any properties can be visualized directly on the object's surface, which makes inspections easier than with a separate indicating device. The general ability to straighten any kind of information on 3D surfaces is shown while preserving an exact mapping of camera image and object parts. In many cases this supersedes an additional monitor to
X-ray constraints on the shape of the dark matter in five Abell clusters
NASA Technical Reports Server (NTRS)
Buote, David A.; Canizares, Claude R.
1992-01-01
X-ray observations obtained with the Einstein Observatory are used to constrain the shape of the dark matter in the inner regions of the Abell clusters A401, A426, A1656, A2029, and A2199, each of which exhibits highly flattened optical isopleths. The dark matter is modeled as an ellipsoid with a mass density varying approximately as r^-2. The possible shapes of the dark matter are constrained by comparing the model isophotes to the image isophotes. The X-ray isophotes, and therefore the gravitational potentials, have ellipticities of about 0.1-0.2. The dark matter within the central 1 Mpc is found to be substantially rounder for all the clusters. It is concluded that the shape of the galaxy distributions in these clusters traces neither the gravitational potential nor the gravitating matter.
ULTRA DEEP AKARI OBSERVATIONS OF ABELL 2218: RESOLVING THE 15 μm EXTRAGALACTIC BACKGROUND LIGHT
Hopwood, R.; Serjeant, S.; Negrello, M.; Pearson, C.; Egami, E.; Im, M.; Ko, J.; Lee, H. M.; Lee, M. G.; Kneib, J.-P.; Matsuhara, H.; Nakagawa, T.; Takagi, T.; Smail, I.
2010-06-10
We present extragalactic number counts and a lower limit estimate for the cosmic infrared background (CIRB) at 15 μm from AKARI ultra-deep mapping of the gravitational lensing cluster Abell 2218. These data are the deepest taken by any facility at this wavelength and uniquely sample the normal galaxy population. We have de-blended our sources, to resolve photometric confusion, and de-lensed our photometry to probe beyond AKARI's blank-field sensitivity. We estimate a de-blended 5σ sensitivity of 28.7 μJy. The resulting 15 μm galaxy number counts are a factor of 3 fainter than previous results, extending to a depth of ∼0.01 mJy and providing a stronger lower limit constraint on the CIRB at 15 μm of 1.9 ± 0.5 nW m⁻² sr⁻¹.
Dirac neutrino mass from a neutrino dark matter model for the galaxy cluster Abell 1689
NASA Astrophysics Data System (ADS)
Nieuwenhuizen, Theodorus Maria
2016-03-01
The dark matter in the galaxy cluster Abell 1689 is modelled as an isothermal sphere of neutrinos. New data on the 2D mass density allow an accurate description of its core and halo. The model has no “missing baryon problem” and beyond 2.1 Mpc the baryons have the cosmic mass abundance. Combination of cluster data with the cosmic dark matter fraction, here supposed to stem from the neutrinos, leads to a solution of the dark matter riddle by left- and right-handed neutrinos with mass (1.861 ± 0.016) h₇₀⁻² eV/c². The thus far observed absence of neutrinoless double beta decay points to (quasi-) Dirac neutrinos: uncharged electrons with different flavour and mass eigenbases, as for quarks. Though the cosmic microwave background spectrum is matched only to some 10% accuracy, the case is not ruled out because the plasma phase of the early Universe may be turbulent.
NASA Technical Reports Server (NTRS)
Hoessel, J. G.; Gunn, J. E.; Thuan, T. X.
1980-01-01
Two-color aperture photometry of the brightest galaxies in a complete sample of nearby Abell clusters is presented. The results are used to anchor the bright end of the Hubble diagram; essentially the entire formal error for this method is then due to the sample of distant clusters used. New determinations of the systematic trend of galaxy absolute magnitude with the cluster properties of richness and Bautz-Morgan type are derived. When these new results are combined with the Gunn and Oke (1975) data on high-redshift clusters, a formal value (without accounting for any evolution) of q₀ = -0.55 ± 0.45 (1 standard deviation) is found.
Television documentary, history and memory. An analysis of Sergio Zavoli's The Gardens of Abel
Foot, John
2014-01-01
This article examines a celebrated documentary made for Italian state TV in 1968 and transmitted in 1969 to an audience of millions. The programme – The Gardens of Abel – looked at changes introduced by the radical psychiatrist Franco Basaglia in an asylum in the north-east of Italy (Gorizia). The article examines the content of this programme for the first time, questions some of the claims that have been made for it, and outlines the sources used by the director, Sergio Zavoli. The article argues that the film was as much an expression of Zavoli's vision and ideas as it was linked to those of Franco Basaglia himself. Finally, the article highlights the way that this programme has become part of historical discourse and popular memory. PMID:25937804
An Approximation to the Periodic Solution of a Differential Equation of Abel
NASA Astrophysics Data System (ADS)
Mickens, Ronald E.
2011-10-01
The Abel equation, in canonical form, is y′ = sin t − y³ (*) and corresponds to the singular (ε → 0) limit of the nonlinear, forced oscillator εy″ + y′ + y³ = sin t. (**) Equation (*) has the property that it has a unique periodic solution defined on (−∞, ∞). Further, as t increases, all solutions are attracted into the strip |y| < 1, and any two different solutions y₁(t) and y₂(t) satisfy the condition lim_{t→∞} [y₁(t) − y₂(t)] = 0; (***) as t decreases, every solution except the periodic solution becomes unbounded (U. Elias, American Mathematical Monthly, vol. 115, Feb. 2008, pp. 147-149). Our purpose is to calculate an approximation to the unique periodic solution of Eq. (*) using the method of harmonic balance. We also determine an estimate for the blow-up time of the non-periodic solutions.
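The attraction property stated in this abstract (all solutions of y′ = sin t − y³ are drawn into the strip |y| < 1 and converge to a single periodic orbit) is easy to check numerically. The sketch below is not from the paper; it is a minimal forward integration with a classical fourth-order Runge-Kutta scheme, where the step size, horizon, and initial conditions are arbitrary illustrative choices.

```python
# Minimal numerical sketch (not from the paper): integrate the canonical
# Abel equation y' = sin(t) - y^3 from two very different initial values.
# Both trajectories should be pulled into |y| < 1 and collapse onto the
# same attracting periodic solution.
import math

def abel_rhs(t, y):
    """Right-hand side of y' = sin(t) - y^3."""
    return math.sin(t) - y ** 3

def rk4_integrate(y0, t_end=60.0, h=1e-3):
    """Classical fourth-order Runge-Kutta from t = 0 to t_end."""
    t, y = 0.0, y0
    for _ in range(int(round(t_end / h))):
        k1 = abel_rhs(t, y)
        k2 = abel_rhs(t + h / 2, y + h * k1 / 2)
        k3 = abel_rhs(t + h / 2, y + h * k2 / 2)
        k4 = abel_rhs(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Two trajectories starting far apart (and far outside the strip).
y_a = rk4_integrate(5.0)
y_b = rk4_integrate(-3.0)
```

After a sufficiently long integration the two endpoints agree to numerical precision and lie inside |y| < 1, consistent with the contraction result cited from Elias.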
Cheng, Yu-Ting; Lu, Chi-Cheng; Yen, Gow-Chin
2015-01-01
Epidemiological studies have shown that increased dietary intake of natural antioxidants is beneficial for health because of their bioactivities, including antioxidant and anti-inflammation actions. Camellia oil made from tea seed (Camellia oleifera Abel.) is commonly used as an edible oil and a traditional medicine in Taiwan and China. Until now, the camellia oil has been widely considered as a dietary oil for heath. In this review, we summarize the protective effects of camellia oil with antioxidant activity against oxidative stress leading to hepatic damage and gastrointestinal ulcers. The information in this review leads to the conclusion that camellia oil is not only an edible oil but also a vegetable oil with a potential function for human health. PMID:26598814
Narrow-angle tail radio sources and evidence for radial orbits in Abell clusters
NASA Technical Reports Server (NTRS)
O'Dea, Christopher P.; Owen, Frazer N.; Sarazin, Craig L.
1986-01-01
Published observational data on the tail orientations (TOs) of 60 narrow-angle-tail (NAT) radio sources in Abell clusters of galaxies are analyzed statistically using a maximum-likelihood approach. The results are presented in a table, and it is found that the observed TO distributions in the whole sample and in subsamples of morphologically regular NATs and NATs with pericentric distances d greater than 500 kpc are consistent with isotropic orbits, whereas the TOs for NATs with d less than 500 kpc are consistent with highly radial orbits. If radial orbits were observed near the centers of other types of cluster galaxies as well, it could be inferred that violent relaxation during cluster formation was incomplete, and that clusters form by spherical collapse and secondary infall, as proposed by Gunn (1977).
The Mass of Abell 1060 and AWM 7 from Spatially Resolved X-Ray Spectroscopy
NASA Astrophysics Data System (ADS)
Loewenstein, M.; Mushotzky, R. F.
1996-11-01
Using X-ray temperature and surface brightness profiles of the hot intracluster medium (ICM) derived from ASCA (Astro-D) and ROSAT observations, we place constraints on the dark matter (DM) and baryon fraction distributions in the poor clusters Abell 1060 (A1060) and AWM 7. Although their total mass distributions are similar, AWM 7 has twice the baryon fraction of A1060 in the best-fit models. The functional form of the DM distribution is ill determined; however, mass models where the baryon fractions in A1060 and AWM 7 significantly overlap are excluded. Such variations in baryon fraction are not predicted by standard models and imply that some mechanism in addition to gravity plays a major role in organizing matter on cluster scales.
Inverse Scattering Approach to Improving Pattern Recognition
Chapline, G; Fu, C
2005-02-15
The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.
Galaxy Luminosity Function of the Dynamically Young Abell 119 Cluster: Probing the Cluster Assembly
NASA Astrophysics Data System (ADS)
Lee, Youngdae; Rey, Soo-Chang; Hilker, Michael; Sheen, Yun-Kyeong; Yi, Sukyoung K.
2016-05-01
We present the galaxy luminosity function (LF) of the Abell 119 cluster down to M_r ∼ -14 mag, based on deep images in the u, g, and r bands taken with the MOSAIC II CCD mounted on the Blanco 4 m telescope at CTIO. The cluster membership was accurately determined based on the radial velocity information and on the color-magnitude relation for bright galaxies and the scaling relation for faint galaxies. The overall LF exhibits a bimodal behavior with a distinct dip at r ∼ 18.5 mag (M_r ∼ -17.8 mag), which is more appropriately described by a two-component function. The shape of the LF strongly depends on the clustercentric distance and on the local galaxy density. The LF of galaxies in the outer, low-density region exhibits a steeper slope and a more prominent dip compared with that of counterparts in the inner, high-density region. We found evidence for substructure in the projected galaxy distribution, in which several overdense regions in the Abell 119 cluster appear to be closely associated with the surrounding, possibly filamentary structure. The combined LF of the overdense regions exhibits a two-component function with a distinct dip, while the LF of the central region is well described by a single Schechter function. We suggest that, in the context of the hierarchical cluster formation scenario, the observed overdense regions are the relics of galaxy groups, retaining their two-component LFs with a dip, which acquired their shapes through galaxy merging in group environments before they fell into the cluster.
A plethora of diffuse steep spectrum radio sources in Abell 2034 revealed by LOFAR
NASA Astrophysics Data System (ADS)
Shimwell, T. W.; Luckin, J.; Brüggen, M.; Brunetti, G.; Intema, H. T.; Owers, M. S.; Röttgering, H. J. A.; Stroe, A.; van Weeren, R. J.; Williams, W. L.; Cassano, R.; de Gasperin, F.; Heald, G. H.; Hoang, D. N.; Hardcastle, M. J.; Sridhar, S. S.; Sabater, J.; Best, P. N.; Bonafede, A.; Chyży, K. T.; Enßlin, T. A.; Ferrari, C.; Haverkorn, M.; Hoeft, M.; Horellou, C.; McKean, J. P.; Morabito, L. K.; Orrù, E.; Pizzo, R.; Retana-Montenegro, E.; White, G. J.
2016-06-01
With Low-Frequency Array (LOFAR) observations, we have discovered a diverse assembly of steep spectrum emission that is apparently associated with the intracluster medium (ICM) of the merging galaxy cluster Abell 2034. Such a rich variety of complex emission associated with the ICM has been observed in few other clusters. This not only indicates that Abell 2034 is a more interesting and complex system than previously thought but it also demonstrates the importance of sensitive and high-resolution, low-frequency observations. These observations can reveal emission from relativistic particles which have been accelerated to sufficient energy to produce observable emission or have had their high energy maintained by mechanisms in the ICM. The most prominent feature in our maps is a bright bulb of emission connected to two steep spectrum filamentary structures, the longest of which extends perpendicular to the merger axis for 0.5 Mpc across the south of the cluster. The origin of these objects is unclear, with no shock detected in the X-ray images and no obvious connection with cluster galaxies or AGNs. We also find that the X-ray bright region of the cluster coincides with a giant radio halo with an irregular morphology and a very steep spectrum. In addition, the cluster hosts up to three possible radio relics, which are misaligned with the cluster X-ray emission. Finally, we have identified multiple regions of emission with a very steep spectral index that seem to be associated with either tailed radio galaxies or a shock.
Deep spectroscopy of nearby galaxy clusters - I. Spectroscopic luminosity function of Abell 85
NASA Astrophysics Data System (ADS)
Agulli, I.; Aguerri, J. A. L.; Sánchez-Janssen, R.; Dalla Vecchia, C.; Diaferio, A.; Barrena, R.; Dominguez Palmero, L.; Yu, H.
2016-05-01
We present a new deep spectroscopic catalogue for Abell 85, within 3.0 × 2.6 Mpc² and down to M_r ∼ M_r* + 6. Using the Visible Multi-Object Spectrograph at the Very Large Telescope and the AutoFiber 2 at the William Herschel Telescope, we obtained almost 1430 new redshifts for galaxies with m_r ≤ 21 mag and ⟨μ_e,r⟩ ≤ 24 mag arcsec⁻². These redshifts, together with Sloan Digital Sky Survey Data Release 6 and NASA/IPAC Extragalactic Database spectroscopic information, result in 460 confirmed cluster members. This data set allows the study of the luminosity function (LF) of the cluster galaxies covering three orders of magnitude in luminosity. The total and radial LFs are best modelled by a double Schechter function. The normalized LFs show that their bright (M_r ≤ -21.5) and faint (M_r ≥ -18.0) ends are independent of clustercentric distance and similar to the field LFs, unlike the intermediate luminosity range (-21.5 ≤ M_r ≤ -18.0). Similar results are found for the LFs of the dominant types of galaxies: red, passive, virialized and early-infall members. On the contrary, the LFs of blue, star-forming, non-virialized and recent-infall galaxies are well described by a single Schechter function. These populations contribute a small fraction of the galaxy density in the innermost cluster region. However, in the outskirts of the cluster, they have densities similar to those of red, passive, virialized and early-infall members at the LF faint end. These results confirm a clear dependence of the colour and star formation of Abell 85 members on the clustercentric distance.
Search for a non-equilibrium plasma in the merging galaxy cluster Abell 754
NASA Astrophysics Data System (ADS)
Inoue, Shota; Hayashida, Kiyoshi; Ueda, Shutaro; Nagino, Ryo; Tsunemi, Hiroshi; Koyama, Katsuji
2016-06-01
Abell 754 is a galaxy cluster in which an ongoing merger is evident in the plane of the sky, from the southeast to the northwest. We study the spatial variation of the X-ray spectra observed with Suzaku along the merging direction, centering on the Fe Lyα/Fe Heα line ratio to search for possible deviation from ionization equilibrium. Fitting with a single-temperature collisional non-equilibrium plasma model shows that the electron temperature increases from the southeast to the northwest. The ionization parameter is consistent with that in equilibrium (n_e t > 10^{13} s cm⁻³) except for the specific region with the highest temperature (kT = 13.3_{-1.1}^{+1.4} keV), where n_e t = 10^{11.6_{-1.7}^{+0.6}} s cm⁻³. The elapsed time since the plasma heating, estimated from the ionization parameter, is 0.36-76 Myr at the 90% confidence level. This timescale is quite short but consistent with the travel time of a shock passing through that region. We thus interpret the non-equilibrium ionization plasma observed in Abell 754 as a remnant of the shock heating in the merger process. However, we note that the X-ray spectrum of the specific region where the non-equilibrium is found can also be fitted with a collisional ionization plasma model with two temperatures, a low kT = 4.2^{+4.2}_{-1.5} keV and a very high kT > 19.3 keV. The very high temperature component is alternatively fitted with a power-law model. Either of these spectral models is interpreted as a consequence of the ongoing merger process, as in the case of the non-equilibrium ionization plasma.
The galaxy population of Abell 1367: the stellar mass-metallicity relation
NASA Astrophysics Data System (ADS)
Mouhcine, M.; Kriwattanawong, W.; James, P. A.
2011-04-01
Using wide-baseline broad-band photometry, we analyse the stellar population properties of a sample of 72 galaxies, spanning a wide range of stellar masses and morphological types, in the nearby spiral-rich and dynamically young galaxy cluster Abell 1367. The sample galaxies are distributed from the cluster centre out to approximately half the cluster's Abell radius. The optical/near-infrared colours are compared with simple stellar population synthesis models, from which the luminosity-weighted stellar population ages and metallicities are determined. The locus of the colours of elliptical galaxies traces a sequence of varying metallicity at a narrow range of luminosity-weighted stellar ages. Lenticular galaxies in the red sequence, however, exhibit a substantial spread of luminosity-weighted stellar metallicities and ages. For red-sequence lenticular galaxies and blue-cloud galaxies, low-mass galaxies tend on average to be dominated by stellar populations of younger luminosity-weighted ages. Sample galaxies exhibit a strong correlation between integrated stellar mass and luminosity-weighted stellar metallicity. Galaxies with signs of morphological disturbance and ongoing star formation activity tend to be underabundant with respect to passive red-sequence galaxies of comparable stellar mass. We argue that this could be due to tidally driven gas flows towards the star-forming regions, carrying less enriched gas and diluting the pre-existing gas to produce younger stellar populations with lower metallicities than would be obtained prior to the interaction. Finally, we find no statistically significant evidence for changes in the luminosity-weighted ages and metallicities of either red-sequence or blue-cloud galaxies, at fixed stellar mass, with location within the cluster. We dedicate this work to the memory of our friend and colleague C. Moss, who died suddenly recently.
Revisiting Abell 2744: a powerful synergy of the GLASS spectroscopy and the HFF photometry.
NASA Astrophysics Data System (ADS)
Wang, Xin; Borello Schmidt, Kasper; Treu, Tommaso
2015-08-01
We present new emission line identifications and improve the strong lensing reconstruction of the massive cluster Abell 2744 using the Grism Lens-Amplified Survey from Space (GLASS) observations and the full depth of the Hubble Frontier Fields (HFF) imaging. We performed a blind and targeted search for emission lines in objects within the full field of view (FoV) of the GLASS prime pointings, including all the previously known multiple arc images. We report over 50 high-quality spectroscopic redshifts, 4 of which are for the arc images. We also present an extensive analysis based on the HFF photometry, measuring the colors and photometric redshifts of all objects within the FoV, and comparing the spectroscopic and photometric results for the same ensemble of sources. In order to improve the lens model of Abell 2744, we develop a rigorous algorithm to screen arc images, based on their colors and morphology, selecting the most reliable ones to use. As a result, 21 systems (corresponding to 59 images) pass the screening process and are used to reconstruct the gravitational potential of the cluster, pixellated on an adaptive mesh. The resulting total mass distribution is compared with a stellar mass map obtained from the deep Spitzer Frontier Fields data, in a fashion very similar to the reduction of the Spitzer UltRa Faint SUrvey Program (SURFS UP) clusters, in order to study the relative distribution of stars and dark matter in the cluster. The maps of convergence, shear, and magnification are made publicly available in the standard HFF format.
Suzaku observation of a high-entropy cluster Abell 548W
NASA Astrophysics Data System (ADS)
Nakazawa, Kazuhiro; Kato, Yuichi; Gu, Liyi; Kawaharada, Madoka; Takizawa, Motokazu; Fujita, Yutaka; Makishima, Kazuo
2016-06-01
Abell 548W, one of the galaxy clusters located in the Abell 548 region, has an X-ray luminosity about an order of magnitude lower than that of ordinary clusters, in view of the well-known intracluster medium (ICM) temperature vs. X-ray luminosity (kT-LX) relation. The cluster hosts a pair of diffuse radio sources to the northwest and north, each about 10′ from the cluster center. They are candidate radio relics, which are frequently associated with merging clusters. A deep Suzaku observation with an exposure of 84.4 ks was performed to search for signatures of merging in this cluster. The XIS detectors successfully detected the ICM emission out to 16′ from the cluster center. The temperature is ∼3.6 keV around the center and ∼2 keV in the outermost regions. The hot (∼6 keV) region beside the relic candidates, shifted toward the cluster center, that was reported by XMM-Newton was not seen in the Suzaku data, although the Suzaku temperature there of 3.6 keV is still higher than the average temperature of 2.5 keV around the radio sources. In addition, the signature of a cool (kT ∼ 0.9 keV) component was found around the northwest source. A marginal temperature jump at its outer edge was also found, consistent with the canonical picture of a shock-acceleration origin for radio relics. The cluster has a central entropy of ∼400 keV cm², among the highest known, and is one of the so-called low surface brightness clusters. Taking into account that its shape is relatively circular and smooth and that its temperature structure is nearly flat, possible merging scenarios are discussed.
NASA Technical Reports Server (NTRS)
Patel, Sandeep K.; Joy, Marshall; Carlstrom, John E.; Holder, Gilbert P.; Reese, Erik D.; Gomez, Percy L.; Hughes, John P.; Grego, Laura; Holzapfel, William L.
2000-01-01
We present multi-wavelength observations of the Abell 1995 galaxy cluster. From analysis of X-ray spectroscopy and imaging data we derive the electron temperature, cluster core radius, and central electron number density. Using optical spectroscopy of 15 cluster members, we derive an accurate cluster redshift and velocity dispersion. Finally, interferometric imaging of the Sunyaev-Zel'dovich effect (SZE) toward Abell 1995 at 28.5 GHz provides a measure of the integrated pressure through the cluster.
Electromagnetic inverse applications for functional brain imaging
Wood, C.C.
1997-10-01
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). This project addresses an important mathematical and computational problem in functional brain imaging, namely the electromagnetic "inverse problem." Electromagnetic brain imaging techniques, magnetoencephalography (MEG) and electroencephalography (EEG), are based on measurements of electrical potentials and magnetic fields at hundreds of locations outside the human head. The inverse problem is the estimation of the locations, magnitudes, and time courses of electrical currents in the brain from surface measurements. This project extends recent progress on the inverse problem by combining the use of anatomical constraints derived from magnetic resonance imaging (MRI) with Bayesian and other novel algorithmic approaches. The results suggest that we can achieve significant improvements in the accuracy and robustness of inverse solutions by these two approaches.
VizieR Online Data Catalog: r' photometry of Abell 1367 and Coma (Iglesias-Paramo+, 2003)
NASA Astrophysics Data System (ADS)
Iglesias-Paramo, J.; Boselli, A.; Gavazzi, G.; Cortese, L.; Vilchez, J. M.
2002-11-01
We provide the total r'-band galaxy counts corresponding to our observed fields of the clusters of galaxies Abell 1367 and Coma, as well as the r'-band background counts from Yasuda et al. (2001AJ....122.1104Y). We also provide some basic properties of the galaxies detected in our r'-band survey of the clusters of galaxies Abell 1367 and Coma: coordinates, r'-band magnitudes and surface brightness, position angles, recession velocities and ellipticities are provided. The observations were carried out with the Wide Field Camera (WFC) attached to the Prime Focus of the INT 2.5m located at Observatorio de El Roque de los Muchachos, on 26 and 28 April 2000, under photometric conditions, excepting the last half of the second night. (3 data files).
VizieR Online Data Catalog: Hα galaxies in Abell 1367 and Coma (Iglesias-Paramo+, 2002)
NASA Astrophysics Data System (ADS)
Iglesias-Paramo, J.; Boselli, A.; Cortese, L.; Vilchez, J. M.; Gavazzi, G.
2002-04-01
We present a deep wide-field Hα imaging survey of the central regions of the two nearby clusters of galaxies Coma and Abell 1367, taken with the WFC at the Prime Focus of the INT 2.5m telescope located at Observatorio de El Roque de los Muchachos (La Palma), on April 26 and 28, 2000. We determine for the first time the Schechter parameters of the Hα luminosity function (LF) of cluster galaxies. (2 data files).
The physical structure of planetary nebulae around sdO stars: Abell 36, DeHt 2, and RWT 152
NASA Astrophysics Data System (ADS)
Aller, A.; Miranda, L. F.; Olguín, L.; Vázquez, R.; Guillén, P. F.; Oreiro, R.; Ulla, A.; Solano, E.
2015-01-01
We present narrow-band Hα and [O III] images, and high-resolution, long-slit spectra of the planetary nebulae (PNe) Abell 36, DeHt 2, and RWT 152, aimed at studying their morphology and internal kinematics. These data are complemented with intermediate-resolution, long-slit spectra to describe the spectral properties of the central stars and nebulae. The morphokinematical analysis shows that Abell 36 consists of an inner spheroid and two bright point-symmetric arcs; DeHt 2 is elliptical with protruding polar regions and a bright non-equatorial ring; and RWT 152 is bipolar. The formation of Abell 36 and DeHt 2 requires several ejection events, including collimated bipolar outflows that probably are younger than, and have disrupted, the main shell. The nebular spectra of the three PNe show high excitation and also suggest a possible deficiency of heavy elements in DeHt 2 and RWT 152. The spectra of the central stars strongly suggest an sdO nature, and their association with PNe indicates that they have most probably evolved through the asymptotic giant branch. We analyse general properties of the few known sdOs associated with PNe and find that most of them are relatively or very evolved PNe, show complex morphologies, host binary central stars, and are located at relatively high Galactic latitudes.
Analysis of the optical emission of the young precataclysmic variables HS 1857+5144 and ABELL 65
NASA Astrophysics Data System (ADS)
Shimansky, V. V.; Pozdnyakova, S. A.; Borisov, N. V.; Bikmaev, I. F.; Vlasyuk, V. V.; Spiridonova, O. I.; Galeev, A. I.; Mel'Nikov, S. S.
2009-10-01
We analyze the physical state and properties of the close binary systems HS 1857+5144 and Abell 65. We took spectra of both systems over a wide range of orbital phases with the 6-m telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences (SAO RAS) and obtained their multicolor light curves with the RTT150 and Zeiss-1000 telescopes of the SAO RAS. We demonstrate that both Abell 65 and HS 1857+5144 are young precataclysmic variables (PVs) with orbital periods of P_orb = 1.003729 d and P_orb = 0.26633331 d, respectively. The observed brightness and spectral variations during the orbital period are due to the radiation of the cold component, which absorbs the short-wave radiation of the hot component and reemits it in the visual part of the spectrum. A joint analysis of the brightness and radial velocity curves allowed us to find the possible and optimum sets of their fundamental parameters. We found the luminosity excesses of the secondary components of HS 1857+5144 and Abell 65 with respect to the corresponding main-sequence stars to be typical for such objects. The excess luminosities of the secondary components of all young PVs are indicative of their faster relaxation towards the quiescent state compared to the rates estimated in earlier studies.
Using GPU Programming for Inverse Spectroscopy
David Gerts; N. Fredette; H. Wimberly
2010-07-01
The Idaho National Laboratory (INL) has developed a detector that relies heavily on computationally expensive inverse spectroscopy algorithms to determine probabilistic three-dimensional mappings of a source and its intensity. This inverse spectroscopy algorithm applies to material accountability because of its potential to determine where nuclear sources are present as a function of time and space. Yet because the novel algorithm can become prohibitively expensive on a standard desktop PC, the INL has incorporated new hardware from the commercial graphics community. General-purpose programming for graphics processing units (GPUs) is not a new concept. However, the application of GPUs to evidence theory-based inverse spectroscopy is both novel and particularly apropos. Improvements while using a (slightly upgraded) standard PC are approximately three orders of magnitude, reducing a ten-hour computation to less than four seconds. This significantly changes the notion of prohibitively expensive calculations and makes application to materials accountability possible in near real time. Indeed, the sensor collection time is now expected to dominate the time required to determine the source and its intensity, rather than the inverse spectroscopy method.
On the inversion-indel distance
2013-01-01
Background: The inversion distance, that is, the distance between two unichromosomal genomes with the same content allowing only inversions of DNA segments, can be computed thanks to a pioneering approach of Hannenhalli and Pevzner in 1995. In 2000, El-Mabrouk extended the inversion model to allow the comparison of unichromosomal genomes with unequal contents, thus insertions and deletions of DNA segments besides inversions. However, an exact algorithm was presented only for the case in which we have insertions alone and no deletion (or vice versa), while a heuristic was provided for the symmetric case, which allows both insertions and deletions and is called the inversion-indel distance. In 2005, Yancopoulos, Attie and Friedberg started a new branch of research by introducing the generic double cut and join (DCJ) operation, which can represent several genome rearrangements (including inversions). Among others, the DCJ model gave rise to two important results. First, it has been shown that the inversion distance can be computed in a simpler way with the help of the DCJ operation. Second, the DCJ operation originated the DCJ-indel distance, which allows the comparison of genomes with unequal contents, considering DCJ, insertions and deletions, and can be computed in linear time. Results: In the present work we put these two results together to solve an open problem, showing that, when the graph that represents the relation between the two compared genomes has no bad components, the inversion-indel distance is equal to the DCJ-indel distance. We also give a lower and an upper bound for the inversion-indel distance in the presence of bad components. PMID:24564182
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well-known radiation transport code, and real high energy radiographs taken at two U.S. Department of Energy facilities.
Aerosol physical properties from satellite horizon inversion
NASA Technical Reports Server (NTRS)
Gray, C. R.; Malchow, H. L.; Merritt, D. C.; Var, R. E.; Whitney, C. K.
1973-01-01
The feasibility of determining the physical properties of aerosols globally, in the altitude region of 10 to 100 km, from a satellite horizon-scanning experiment is investigated. The investigation utilizes a horizon inversion technique previously developed and extended. Aerosol physical properties such as number density, size distribution, and the real and imaginary components of the index of refraction are demonstrated to be invertible in the aerosol size ranges 0.01-0.1 microns, 0.1-1.0 microns, and 1.0-10 microns. Extensions of previously developed radiative transfer models and recursive inversion algorithms are displayed.
Molecular seismology: an inverse problem in nanobiology.
Hinow, Peter; Boczko, Erik M
2007-05-01
The density profile of an elastic fiber like DNA will change in space and time as ligands associate with it. This observation affords a new direction in single molecule studies provided that density profiles can be measured in space and time. In fact, this is precisely the objective of seismology, where the mathematics of inverse problems have been employed with success. We argue that inverse problems in elastic media can be directly applied to biophysical problems of fiber-ligand association, and demonstrate that robust algorithms exist to perform density reconstruction in the condensed phase.
Inverse potential scattering in duct acoustics.
Forbes, Barbara J; Pike, E Roy; Sharp, David B; Aktosun, Tuncay
2006-01-01
The inverse problem of the noninvasive measurement of the shape of an acoustical duct in which one-dimensional wave propagation can be assumed is examined within the theoretical framework of the governing Klein-Gordon equation. Previous deterministic methods developed over the last 40 years have all required direct measurement of the reflectance or input impedance but now, by application of the methods of inverse quantum scattering to the acoustical system, it is shown that the reflectance can be algorithmically derived from the radiated wave. The potential and area functions of the duct can subsequently be reconstructed. The results are discussed with particular reference to acoustic pulse reflectometry.
NASA Astrophysics Data System (ADS)
Jackiewicz, Jason
2009-09-01
With the rapid advances in sophisticated solar modeling and the abundance of high-quality solar pulsation data, efficient and robust inversion techniques are crucial for seismic studies. We present some aspects of an efficient Fourier Optimally Localized Averaging (OLA) inversion method with an example applied to time-distance helioseismology.
ERIC Educational Resources Information Center
Bedard, Catherine; Belin, Pascal
2004-01-01
Voice is the carrier of speech but is also an ''auditory face'' rich in information on the speaker's identity and affective state. Three experiments explored the possibility of a ''voice inversion effect,'' by analogy to the classical ''face inversion effect,'' which could support the hypothesis of a voice-specific module. Experiment 1 consisted…
On the Merging Cluster Abell 578 and Its Central Radio Galaxy 4C+67.13
NASA Astrophysics Data System (ADS)
Hagino, K.; Stawarz, Ł.; Siemiginowska, A.; Cheung, C. C.; Kozieł-Wierzbowska, D.; Szostek, A.; Madejski, G.; Harris, D. E.; Simionescu, A.; Takahashi, T.
2015-06-01
Here we analyze radio, optical, and X-ray data for the peculiar cluster Abell 578. This cluster is not fully relaxed and consists of two merging sub-systems. The brightest cluster galaxy (BCG), CGPG 0719.8+6704, is a pair of interacting ellipticals with projected separation ~10 kpc, the brighter of which hosts the radio source 4C+67.13. The Fanaroff-Riley type-II radio morphology of 4C+67.13 is unusual for central radio galaxies in local Abell clusters. Our new optical spectroscopy revealed that both nuclei of the CGPG 0719.8+6704 pair are active, albeit at low accretion rates corresponding to an Eddington ratio of ~10^-4 (for the estimated black hole masses of ~3 × 10^8 M_☉ and ~10^9 M_☉). The gathered X-ray (Chandra) data allowed us to confirm and to quantify robustly the previously noted elongation of the gaseous atmosphere in the dominant sub-cluster, as well as a large spatial offset (~60 kpc projected) between the position of the BCG and the cluster center inferred from the modeling of the X-ray surface brightness distribution. Detailed analysis of the brightness profiles and temperature revealed also that the cluster gas in the vicinity of 4C+67.13 is compressed (by a factor of ~1.4) and heated (from ≃2.0 keV up to 2.7 keV), consistent with the presence of a weak shock (Mach number ~1.3) driven by the expanding jet cocoon. This would then require a jet kinetic power of the order of ~10^45 erg s^-1, implying either a very high efficiency of jet production for the current accretion rate, or a highly modulated jet/accretion activity in the system. Based on service observations made with the WHT operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.
A Census of Star Formation and Active Galactic Nuclei Populations in Abell 1689
NASA Astrophysics Data System (ADS)
Jones, Logan H.; Atlee, David Wesley
2016-01-01
A recent survey of low-z galaxy clusters observed a disjunction between X-ray and mid-infrared selected populations of active galactic nuclei (X-ray and IR AGNs) (Atlee+ 2011, ApJ 729, 22). Here we present an analysis of near-infrared spectroscopic data of star-forming galaxies in cluster Abell 1689 in order to confirm the identity of some of their IR AGN and to provide a check on their reported star formation rates. Our sample consists of 24 objects in Abell 1689. H and K band spectroscopic observations of target objects and standard stars were obtained by David Atlee between 2010 May 17 and 2011 June 6 using the Large Binocular Telescope's LUCI instrument. After undergoing initial reductions, standard stars were corrected for telluric absorption using TelFit (Gullikson+ 2014, AJ, 158, 53). Raw detector counts were converted to physical units using the wavelength-dependent response of the grating and the star's reported H and K band magnitudes to produce conversion factors that fully correct for instrumental effects. Target spectra were flux-calibrated using the airmass-corrected transmission profiles produced by TelFit and the associated H band conversion factor (or the average of the two factors, for nights with two standard stars). Star formation rates were calculated using the SFR-L(Hα) relation reported in Kennicutt (1998), with the measured luminosity of the Paα emission line at the luminosity distance of the cluster used as a proxy for L(Hα) (Kennicutt 1998, ARA&A 36, 189; Hummer & Storey 1987, MNRAS 346, 1055). The line ratios H₂ 2.121 μm/Brγ and [Fe II]/Paβ were used to classify targets as starburst galaxies, AGNs, or LINERs (Rodriguez-Ardila+ 2005, MNRAS, 364, 1041). Jones was supported by the NOAO/KPNO Research Experience for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
SPECTRAL INDEX STUDIES OF THE DIFFUSE RADIO EMISSION IN ABELL 2256: IMPLICATIONS FOR MERGER ACTIVITY
Kale, Ruta; Dwarakanath, K. S. E-mail: dwaraka@rri.res.i
2010-08-01
We present a multi-wavelength analysis of the merging rich cluster of galaxies, Abell 2256 (A2256). We have observed A2256 at 150 MHz using the Giant Metrewave Radio Telescope and successfully detected the diffuse radio halo and the relic emission over a ~1.2 Mpc² extent. Using this 150 MHz image and the images made using archival observations from the Very Large Array (VLA; 1369 MHz) and the Westerbork Synthesis Radio Telescope (WSRT; 330 MHz), we have produced spectral index images of the diffuse radio emission in A2256. These spectral index images show a distribution of flat spectral index (S ∝ ν^α, α in the range -0.7 to -0.9) plasma in the region NW of the cluster center. Regions showing steep spectral indices (α in the range -1.0 to -2.3) are toward the SE of the cluster center. These spectral indices indicate synchrotron lifetimes for the relativistic plasmas in the range 0.08-0.4 Gyr. We interpret this spectral behavior as resulting from a merger event along the direction SE to NW within the last 0.5 Gyr or so. A shock may be responsible for the NW relic in A2256, and the megaparsec-scale radio halo toward the SE is likely to be generated by the turbulence injected by mergers. Furthermore, the diffuse radio emission shows spectral steepening toward lower frequencies. This low-frequency spectral steepening is consistent with a combination of spectra from two populations of relativistic electrons created at two epochs (two mergers) within the last ~0.5 Gyr. Earlier interpretations of the X-ray and the optical data also suggested that there were two mergers in Abell 2256 in the last 0.5 Gyr, consistent with the current findings. Also highlighted in this study is the futility of correlating the average temperatures of thermal gas with the average spectral indices of diffuse radio emission in the respective clusters.
Swarm intelligence optimization and its application in geophysical data inversion
NASA Astrophysics Data System (ADS)
Yuan, Sanyi; Wang, Shangxu; Tian, Nan
2009-06-01
Inversions of complex geophysical data involve multi-parameter, nonlinear, and multimodal optimization problems. Searching for the optimal inversion solutions is similar to the social behavior observed in swarms such as birds and ants when searching for food. In this article, the particle swarm optimization algorithm is first described in detail, and the ant colony algorithm is improved. The methods are then applied to three different kinds of geophysical inversion problems: (1) a linear problem which is sensitive to noise, (2) a synchronous inversion of linear and nonlinear problems, and (3) a nonlinear problem. The results validate their feasibility and efficiency. Compared with the conventional genetic algorithm and simulated annealing, they have the advantages of higher convergence speed and accuracy. Compared with the quasi-Newton method and the Levenberg-Marquardt method, they work better, with the ability to escape locally optimal solutions.
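The swarm search described in this record can be sketched in a few lines. The following minimal particle swarm minimizer is a generic illustration, not the authors' tuned scheme; the inertia and acceleration constants (w, c1, c2) are common textbook values:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle is pulled toward its own
    best position (cognitive term) and the swarm's best position (social term)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))  # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()                                      # personal bests
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()                # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

For a misfit functional with many local minima, the population-based update is what gives the method its chance of escaping locally optimal solutions, at the cost of many forward-model evaluations per iteration.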
Inverse scattering problems with multi-frequencies
NASA Astrophysics Data System (ADS)
Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi
2015-09-01
This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods.
A Cosmic Train Wreck: JVLA Radio Observations of the HST Frontier Fields Cluster Abell 2744
NASA Astrophysics Data System (ADS)
Pearce, Connor; Van Weeren, Reinout J.; Jones, Christine; Forman, William R.; Ogrean, Georgiana A.; Andrade-Santos, Felipe; Kraft, Ralph P.; Dawson, William; Brüggen, Marcus; Roediger, Elke; Bulbul, Esra; Mroczkowski, Tony
2016-01-01
The galaxy cluster mergers observed in the HST Frontier Fields represent some of the most energetic events in the Universe. Major cluster mergers leave distinct signatures in the ICM in the form of shocks, turbulence, and diffuse cluster radio sources. These diffuse radio sources, so-called radio relics and halos, provide evidence for the acceleration of relativistic particles and the presence of large scale magnetic fields in the ICM. Observations of these halos and relics allow us to (i) study the physics of particle acceleration and its relation with shocks and turbulence in the ICM and (ii) constrain the dynamical evolution of the merger events. We present Jansky Very Large Array 1-4 GHz observations of the Frontier cluster Abell 2744. We confirm the presence of the known giant radio halo and radio relic via our deep radio images. Owing to the much greater sensitivity of the JVLA compared to previous observations, we are able to detect a previously unobserved long Mpc-size filament of synchrotron emission to the southwest of the cluster core. We also present a radio spectral index image of the diffuse cluster emission to test the origin of the radio relic and halo, related to the underlying particle acceleration mechanism. Finally, we carry out a search for radio emission from the 'jellyfish' galaxies in A2744 to estimate their star formation rate. These highly disturbed galaxies are likely influenced by the cluster merger event, although the precise origin of these galaxies is still being debated.
JVLA S- and X-band polarimetry of the merging cluster Abell 2256
NASA Astrophysics Data System (ADS)
Ozawa, Takeaki; Nakanishi, Hiroyuki; Akahori, Takuya; Anraku, Kenta; Takizawa, Motokazu; Takahashi, Ikumi; Onodera, Sachiko; Tsuda, Yuya; Sofue, Yoshiaki
2015-12-01
We report on polarimetry results of a merging cluster of galaxies, Abell 2256, with the Karl G. Jansky Very Large Array (JVLA). We performed new observations with JVLA at the S band (2051-3947 MHz) and X band (8051-9947 MHz) in the C array configuration, and detected significant polarized emissions from the radio relic, Source A, and Source B in this cluster. We calculated the total magnetic-field strengths toward the radio relic using revised equipartition formula, which is 1.8-5.0 μG. With dispersions of Faraday rotation measure, the magnetic-field strengths toward Sources A and B are estimated to be 0.63-1.26 μG and 0.11-0.21 μG, respectively. An extremely high degree of linear polarization, as high as ˜ 35%, about a half of the maximum polarization, was detected toward the radio relic, which indicates highly ordered magnetic lines of force over the beam sizes (˜ 52 kpc). The fractional polarization of the radio relic decreases from ˜ 35% to ˜ 20% at around 3 GHz as the frequency decreases, and is nearly constant between 1.37 and 3 GHz. Both analyses with depolarization models and Faraday tomography suggest multiple depolarization components toward the radio relic and imply the existence of turbulent magnetic fields.
Systematic Uncertainties in Characterizing Cluster Outskirts: The Case of Abell 133
NASA Astrophysics Data System (ADS)
Paine, Jennie; Ogrean, Georgiana A.; Nulsen, Paul; Farrah, Duncan
2016-01-01
The outskirts of galaxy clusters have low surface brightness compared to the X-ray background, making accurate background subtraction particularly important for analyzing cluster spectra out to and beyond the virial radius. We analyze the thermodynamic properties of the intracluster medium (ICM) of Abell 133 and assess the extent to which uncertainties on background subtraction affect measured quantities. We implement two methods of analyzing the ICM spectra: one in which the blank-sky background is subtracted, and another in which the sky background is modeled. We find that the two methods are consistent within the 90% confidence ranges. We were able to measure the thermodynamic properties of the cluster up to R500. Even at R500, the systematic uncertainties associated with the sky background in the direction of A133 are small, despite the ICM signal constituting only ~25% of the total signal. This work was supported in part by the NSF REU and DoD ASSURE programs under NSF grant no. 1262851 and by the Smithsonian Institution. GAO acknowledges support by NASA through a Hubble Fellowship grant HST-HF2-51345.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
A redshift survey of the strong-lensing cluster ABELL 383
Geller, Margaret J.; Hwang, Ho Seong; Kurtz, Michael J.; Diaferio, Antonaldo; Coe, Dan; Rines, Kenneth J. E-mail: hhwang@cfa.harvard.edu E-mail: diaferio@ph.unito.it E-mail: kenneth.rines@wwu.edu
2014-03-01
Abell 383 is a famous rich cluster (z = 0.1887) imaged extensively as a basis for intensive strong- and weak-lensing studies. Nonetheless, there are few spectroscopic observations. We enable dynamical analyses by measuring 2360 new redshifts for galaxies with r_Petro ≤ 20.5 and within 50' of the Brightest Cluster Galaxy (BCG; R.A._2000 = 42.014125°, decl._2000 = -03.529228°). We apply the caustic technique to identify 275 cluster members within 7 h^-1 Mpc of the hierarchical cluster center. The BCG lies within -11 ± 110 km s^-1 and 21 ± 56 h^-1 kpc of the hierarchical cluster center; the velocity dispersion profile of the BCG appears to be an extension of the velocity dispersion profile based on cluster members. The distribution of cluster members on the sky corresponds impressively with the weak-lensing contours of Okabe et al., especially when the impact of foreground and background structure is included. The values of R_200 = 1.22 ± 0.01 h^-1 Mpc and M_200 = (5.07 ± 0.09) × 10^14 h^-1 M_☉ obtained by application of the caustic technique agree well with recent completely independent lensing measures. The caustic estimate extends direct measurement of the cluster mass profile to a radius of ~5 h^-1 Mpc.
Peculiar velocities of cD galaxies - MX spectroscopy of Abell 1795
NASA Astrophysics Data System (ADS)
Hill, John M.; Hintzen, Paul; Oegerle, W. R.; Romanishin, W.; Lesser, M. P.; Eisenhamer, J. D.; Batuski, D. J.
1988-09-01
Spectroscopic observations of galaxies in the Abell 1795 field have been obtained using the MX multiple-object spectrograph on the Steward Observatory 2.3 m telescope. Redshifts are presented for 46 galaxies, including 41 cluster members. It is found that the A1795 cD galaxy is not at rest in the cluster gravitational potential well; it has a peculiar radial velocity, cz, of 365 km/s, and the hypothesis that the mean cluster velocity is as large as the cD's velocity can be rejected at the 99.5 percent confidence level. This conclusion is supported by spectroscopic data for the 'cooling flow' gas found in the central region of the cluster; this gas, except for the portion coincident with the cD nucleus, lies at the velocity derived for the cluster mean. It is suggested that current models of the formation of cD galaxies are unlikely to account for the large peculiar velocities of the cD galaxies in A1795 and A2670 unless substantial subclustering is still present. However, the available data show no evidence for velocity subclustering in either A1795 or A2670.
X-Ray Spectroscopy of the Cluster of Galaxies Abell 1795 with XMM-Newton
NASA Technical Reports Server (NTRS)
Tamura, T.; Kaastra, J. S.; Peterson, J. R.; Paerels, F.; Mittaz, J. P. D.; Trudolyubov, S. P.; Stewart, G.; Fabian, A. C.; Mushotzky, R. F.; Lumb, D. H.
2000-01-01
The initial results from XMM-Newton observations of the rich cluster of galaxies Abell 1795 are presented. The spatially resolved X-ray spectra taken by the European Photon Imaging Cameras (EPIC) show a temperature drop at a radius of ~200 kpc from the cluster center, indicating that the ICM is cooling. Both the EPIC and the Reflection Grating Spectrometers (RGS) spectra extracted from the cluster center can be described by an isothermal model with a temperature of approx. 4 keV. The volume emission measure of any cool component (less than 1 keV) is less than a few % of the hot component at the cluster center. A strong O VIII Lyman alpha line was detected with the RGS from the cluster core. The O abundance of the ICM is 0.2-0.5 times the solar value. The O to Fe ratio at the cluster center is 0.5-1.5 times the solar ratio.
STRONG GRAVITATIONAL LENSING BY THE SUPER-MASSIVE cD GALAXY IN ABELL 3827
Carrasco, E. R.; Gomez, P. L.; Lee, H.; Diaz, R.; Bergmann, M.; Turner, J. E. H.; Miller, B. W.; West, M. J.; Verdugo, T.
2010-06-01
We have discovered strong gravitational lensing features in the core of the nearby cluster Abell 3827 by analyzing Gemini South GMOS images. The most prominent strong lensing feature is a highly magnified, ring-shaped configuration of four images around the central cD galaxy. GMOS spectroscopic analysis puts this source at z ≈ 0.2. Located ≈20'' away from the central galaxy is a secondary tangential arc feature which has been identified as a background galaxy with z ≈ 0.4. We have modeled the gravitational potential of the cluster core, taking into account the mass from the cluster, the brightest cluster galaxy (BCG), and other galaxies. We derive a total mass of (2.7 ± 0.4) × 10^13 M_☉ within 37 h^-1 kpc. This mass is an order of magnitude larger than that derived from X-ray observations. The total mass derived from lensing data suggests that the BCG in this cluster is perhaps the most massive galaxy in the nearby universe.
Peculiar radio structures in the central regions of galaxy cluster Abell 585
NASA Astrophysics Data System (ADS)
Jamrozy, M.; Stawarz, Ł.; Marchenko, V.; Kuźmicz, A.; Ostrowski, M.; Cheung, C. C.; Sikora, M.
2014-06-01
In this paper, we analyse the peculiar radio structure observed across the central region of the galaxy cluster Abell 585 (z = 0.12). In the low-resolution radio maps, this structure appears uniform and diffuse on angular scales of ˜3 arcmin, and is seemingly related to the distant (z = 2.5) radio quasar B3 0727+409 rather than to the cluster itself. However, after a careful investigation of the unpublished archival radio data with better angular resolution, we resolve the structure into two distinct arcmin-scale features, which resemble typical lobes of cluster radio galaxies with no obvious connection to the background quasar. We support this conclusion by examining the spectral and polarization properties of the features, demonstrating in addition that the analysed structure can hardly be associated with any sort of a radio mini-halo or relics of the cluster. Yet at the same time we are not able to identify host galaxies of the radio lobes in the available optical and infrared surveys. We consider some speculative explanations for our findings, including gravitational wave recoil kicks of supermassive black holes responsible for the lobes' formation in the process of merging massive ellipticals within the central parts of a rich cluster environment, but we do not reach any robust conclusions regarding the origin of the detected radio features.
Basis set expansion for inverse problems in plasma diagnostic analysis.
Jones, B; Ruiz, C L
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
Joint inversion for mapping subsurface hydrological parameters
Tseng, Hung-Wen; Lee, Ki Ha
2001-03-07
Using electromagnetic (EM) and seismic travel time data and a least-squares criterion, a two-dimensional joint inversion algorithm is under development to assess the feasibility of directly mapping subsurface hydrological properties in a crosswell setup. A simplified Archie's law combined with the time-average equation relates the magnetic fields and seismic travel times to two hydrological parameters: rock porosity and pore fluid electrical conductivity. For simplicity, the hydrological parameter distributions are assumed to be two-dimensional. Preliminary results show that joint inversion has better resolving power for the interpretation than the EM method alone. Various inversion scenarios have been tested, and it has been found that alternately perturbing just one of the two parameters at each iteration gives the best data fit.
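The petrophysical link named in this record (a simplified Archie's law plus the time-average equation) can be made concrete with a toy point-wise round trip. This closed-form sketch only illustrates how the two data types jointly constrain the two parameters; the paper's actual algorithm is a 2-D least-squares inversion, and the cementation exponent and slowness values below are illustrative assumptions:

```python
def forward(phi, sigma_w, m=2.0, s_fluid=1 / 1500.0, s_matrix=1 / 4000.0):
    """Forward model: simplified Archie's law for bulk conductivity and the
    time-average (Wyllie) equation for seismic slowness (slownesses in s/m)."""
    sigma = sigma_w * phi**m                       # bulk conductivity
    slowness = phi * s_fluid + (1 - phi) * s_matrix  # bulk slowness
    return sigma, slowness

def invert_point(sigma_obs, s_obs, m=2.0, s_fluid=1 / 1500.0, s_matrix=1 / 4000.0):
    """Recover porosity and pore-fluid conductivity from one joint (EM, seismic) datum."""
    phi = (s_obs - s_matrix) / (s_fluid - s_matrix)  # time-average equation inverted
    sigma_w = sigma_obs / phi**m                     # Archie's law inverted
    return phi, sigma_w
```

The seismic datum pins down porosity, after which the EM datum resolves fluid conductivity, which is exactly the complementarity that gives the joint inversion its extra resolving power.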
Bayesian inversion for optical diffraction tomography
NASA Astrophysics Data System (ADS)
Ayasso, H.; Duchêne, B.; Mohammad-Djafari, A.
2010-05-01
In this paper, optical diffraction tomography is considered as a non-linear inverse scattering problem and tackled within the Bayesian estimation framework. The object under test is a man-made object known to be composed of compact regions made of a finite number of different homogeneous materials. This a priori knowledge is appropriately translated by a Gauss-Markov-Potts prior. Hence, a Gauss-Markov random field is used to model the contrast distribution whereas a hidden Potts-Markov field accounts for the compactness of the regions. First, we express the a posteriori distributions of all the unknowns and then a Gibbs sampling algorithm is used to generate samples and estimate the posterior mean of the unknowns. Some preliminary results, obtained by applying the inversion algorithm to laboratory controlled data, are presented.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
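A minimal 1D sketch of the TV-regularized MAP idea described above, assuming a trivial identity forward model and a smoothed TV term so that plain gradient descent applies (the paper's actual forward model is nonlinear and spatially distributed; all constants here are invented):

```python
import numpy as np

# Sketch: minimize ||G m - d||^2 + lam * sum sqrt((D m)^2 + eps),
# where eps smooths the non-differentiable TV term.
rng = np.random.default_rng(1)
n = 100
m_true = np.zeros(n)
m_true[40:70] = 1.0                 # piecewise-constant "log-conductivity"
G = np.eye(n)                       # trivial forward model, for brevity
d = G @ m_true + 0.05 * rng.normal(size=n)

D = np.diff(np.eye(n), axis=0)      # first-difference operator
lam, eps, step = 0.2, 1e-3, 0.05

m = np.zeros(n)
for _ in range(3000):
    grad_data = 2 * G.T @ (G @ m - d)
    dm = D @ m
    grad_tv = D.T @ (dm / np.sqrt(dm ** 2 + eps))
    m -= step * (grad_data + lam * grad_tv)
```

The smoothing parameter `eps` makes the absolute value differentiable near zero; as `eps` shrinks the objective approaches true TV, which is what preserves the sharp edges of the piecewise-continuous field instead of blurring them as a quadratic penalty would.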
Compression scheme for geophysical electromagnetic inversions
NASA Astrophysics Data System (ADS)
Abubakar, A.
2014-12-01
We have developed a model-compression scheme for improving the efficiency of the regularized Gauss-Newton inversion algorithm for geophysical electromagnetic applications. In this scheme, the unknown model parameters (the conductivity/resistivity distribution) are represented in terms of a basis such as Fourier or wavelet (Haar and Daubechies). By applying a truncation criterion, the model may then be approximated by a reduced number of basis functions, which is usually much smaller than the number of model parameters. Further, because geophysical electromagnetic measurements have low resolution, it is sufficient for inversion to keep only the low-spatial-frequency part of the image. This model-compression scheme reduces both the computational time and the memory usage of the Gauss-Newton method. We are able to significantly reduce the algorithm's computational complexity without compromising the quality of the inverted models.
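The truncation idea can be sketched with a hand-rolled Haar transform (illustrative only; the paper applies the compression inside a regularized Gauss-Newton loop, and the model and truncation level here are invented):

```python
import numpy as np

# Sketch of model compression: transform a model into a Haar wavelet
# basis, keep only the largest coefficients, and reconstruct.

def haar_forward(x):
    """Full multilevel Haar transform of a length-2^k signal."""
    out = x.astype(float).copy()
    n = len(out)
    while n > 1:
        a = (out[:n:2] + out[1:n:2]) / np.sqrt(2)      # averages
        dcoef = (out[:n:2] - out[1:n:2]) / np.sqrt(2)  # details
        out[: n // 2], out[n // 2 : n] = a, dcoef
        n //= 2
    return out

def haar_inverse(c):
    """Inverse of haar_forward."""
    c = c.astype(float).copy()
    n = 1
    while n < len(c):
        a, dcoef = c[:n].copy(), c[n : 2 * n].copy()
        c[: 2 * n : 2] = (a + dcoef) / np.sqrt(2)
        c[1 : 2 * n : 2] = (a - dcoef) / np.sqrt(2)
        n *= 2
    return c

model = np.repeat([1.0, 3.0, 2.0, 5.0], 16)   # blocky resistivity model
coeffs = haar_forward(model)
keep = 8                                       # truncation criterion
small = np.argsort(np.abs(coeffs))[:-keep]
coeffs[small] = 0.0                            # drop all but 8 coefficients
compressed = haar_inverse(coeffs)
```

A blocky model is exactly representable by a handful of Haar coefficients, which is why wavelet bases compress conductivity/resistivity models so effectively: here 8 of 64 coefficients reproduce the model to round-off.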
NASA Astrophysics Data System (ADS)
Gough, D.
1984-12-01
Helioseismological inversion, as with the inversion of any other data, is divided into three phases. The first is the solution of the so-called forward problem: namely, the calculation of the eigenfrequencies of a theoretical equilibrium state. The second is an attempt to understand the results, either empirically by determining how those frequencies vary as chosen parameters defining the equilibrium model are varied, or analytically from asymptotic expansions in limiting cases of high order or degree. The third phase is to pose and solve an inverse problem, which seeks to find a plausible equilibrium model of the Sun whose eigenfrequencies are consistent with observation. The three phases are briefly discussed in this review, and the third, which is not yet widely used in helioseismology, is illustrated with some selected inversions of artificial solar data.
Analysis of Temperature Distributions in Nighttime Inversions
NASA Astrophysics Data System (ADS)
Telyak, Oksana; Krasouski, Aliaksandr; Svetashev, Alexander; Turishev, Leonid; Barodka, Siarhei
2015-04-01
theoretical approaches based on discriminant analysis, mesoscale modeling with WRF provides fairly successful forecasts of formation times and regions for all types of temperature inversions up to 3 days in advance. Furthermore, we conclude that without proper adjustment for the presence of thin isothermal layers (adiabatic and/or inversion layers), temperature data can affect results of statistical climate studies. Provided there are regions where a long-term, constant inversion is present (e.g., Antarctica or regions with continental climate), these data can contribute an uncompensated systematic error of 2 to 10 °C. We argue that this very fact may lead to inconsistencies in long-term temperature data interpretations (e.g., conclusions ranging from "global warming" to "global cooling" based on temperature observations for the same region and time period). Due to the importance of this problem from the scientific as well as practical point of view, our plans for further studies include analysis of autumn and wintertime inversions and convective inversions. At the same time, it seems promising to develop an algorithm of automatic recognition of temperature inversions based on a combination of WRF modeling results, surface and satellite observations.
Numerical linear algebra for reconstruction inverse problems
NASA Astrophysics Data System (ADS)
Nachaoui, Abdeljalil
2004-01-01
Our goal in this paper is to discuss various issues we have encountered in trying to find and implement efficient solvers for a boundary integral equation (BIE) formulation of an iterative method for solving a reconstruction problem. We survey some methods from numerical linear algebra, which are relevant for the solution of this class of inverse problems. We motivate the use of our constructing algorithm, discuss its implementation and mention the use of preconditioned Krylov methods.
Forecast Variance Estimates Using DART Inversion
NASA Astrophysics Data System (ADS)
Gica, E.
2014-12-01
The tsunami forecast tool developed by the NOAA Center for Tsunami Research (NCTR) provides real-time tsunami forecasts and is composed of the following major components: a pre-computed tsunami propagation database, an inversion algorithm that utilizes real-time tsunami data recorded at DART stations to define the tsunami source, and inundation models that predict tsunami wave characteristics at specific coastal locations. The propagation database is a collection of basin-wide tsunami model runs generated from 50x100 km "unit sources" with a slip of 1 meter. Linear combination and scaling of unit sources is possible since the nonlinearity in the deep ocean is negligible. To define the tsunami source using the unit sources, real-time DART data are ingested into an inversion algorithm. Based on the selected DARTs and the length of the tsunami time series, the inversion algorithm selects the combination of unit sources and scaling factors that best fits the observed data at the selected locations. This combined source then serves as the boundary condition for the inundation models. Different combinations of DARTs and lengths of tsunami time series used in the inversion algorithm will result in different selections of unit sources and scaling factors. Since the combined unit sources are used as boundary conditions for inundation modeling, different sources will produce variations in the tsunami wave characteristics. As part of the testing procedures for the tsunami forecast tool, staff at NCTR and at both the National and Pacific Tsunami Warning Centers performed post-event forecasts for several historical tsunamis. The extent of variation due to different source definitions obtained from the testing is analyzed by comparing the simulated maximum tsunami wave amplitude with recorded data at tide gauge locations. Results of the analysis will provide an error estimate defining the possible range of the simulated maximum tsunami wave amplitude for each specific inundation model.
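The source-definition step, i.e. linearly combining and scaling unit sources to fit DART records, can be sketched as an ordinary least-squares problem. The waveforms below are synthetic stand-ins, not the NCTR propagation database:

```python
import numpy as np

# Sketch of fitting unit-source scaling factors to an observed DART record.
# Unit-source waveforms here are invented wave packets.
rng = np.random.default_rng(2)
t = np.linspace(0, 3600, 240)          # one hour of record, 15 s sampling

def packet(t0):
    """A fake unit-source response at one DART location: shifted wave packet."""
    return np.exp(-((t - t0) / 400) ** 2) * np.sin(2 * np.pi * (t - t0) / 900)

# Columns of U: precomputed responses of three unit sources
U = np.column_stack([packet(800), packet(1100), packet(1400)])

true_slip = np.array([2.0, 0.0, 1.5])  # meters of slip per unit source
obs = U @ true_slip + 0.02 * rng.normal(size=len(t))

# Least-squares scaling factors; the combined source is U @ slip
slip, *_ = np.linalg.lstsq(U, obs, rcond=None)
print(slip)                            # close to the true scaling factors
```

Because deep-ocean propagation is effectively linear, the fitted combination `U @ slip` can be handed directly to the inundation models as a boundary condition; changing which DARTs or how much of the record enters the fit changes `slip`, which is the source of the forecast variance the abstract analyzes.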
Frequency-domain elastic full-waveform multiscale inversion method based on dual-level parallelism
NASA Astrophysics Data System (ADS)
Li, Yuan-Yuan; Li, Zhen-Chun; Zhang, Kai; Zhang, Xuan
2015-12-01
The complexity of an elastic wavefield increases the nonlinearity of inversion. To some extent, multiscale inversion decreases the nonlinearity of inversion and prevents it from falling into local extremes. A multiscale strategy based on the simultaneous use of frequency groups, combined with a layer-stripping method based on the damped wavefield, improves the stability of inversion. A dual-level parallel algorithm is then used to decrease the computational cost and improve practicability. The seismic wave modeling of a single frequency and the inversion in a frequency group are computed in parallel by multiple nodes based on a multifrontal massively parallel sparse direct solver and MPI. Numerical tests using an overthrust model show that the proposed inversion algorithm can effectively improve the stability and accuracy of inversion by selecting the appropriate inversion frequencies and damping factors in low-frequency seismic data.
Generalized emissivity inverse problem.
Ming, DengMing; Wen, Tao; Dai, XianXi; Dai, JiXin; Evenson, William E
2002-04-01
Inverse problems have recently drawn considerable attention from the physics community due to their potential widespread applications [K. Chadan and P. C. Sabatier, Inverse Problems in Quantum Scattering Theory, 2nd ed. (Springer Verlag, Berlin, 1989)]. An inverse emissivity problem that determines the emissivity g(nu) from measurements of only the total radiated power J(T) has recently been studied [Tao Wen, DengMing Ming, Xianxi Dai, Jixin Dai, and William E. Evenson, Phys. Rev. E 63, 045601(R) (2001)]. In this paper, a new type of generalized emissivity and transmissivity inverse (GETI) problem is proposed. The present problem differs from our previous work on inverse problems by allowing the unknown (emissivity) function g(nu) to be temperature dependent as well as frequency dependent. Based on published experimental information, we have developed an exact solution formula for this GETI problem. A universal function set suggested for numerical calculation is shown to be robust, making this inversion method practical and convenient for realistic calculations.
Direct and indirect inversions
NASA Astrophysics Data System (ADS)
Virieux, Jean; Brossier, Romain; Métivier, Ludovic; Operto, Stéphane; Ribodetti, Alessandra
2016-06-01
A bridge is highlighted between direct inversion and indirect inversion. The two are based on fundamentally different approaches: one seeks a projection from the data space to the model space, while the other reduces a misfit between observed data and synthetic data obtained from a given model. However, it is possible to obtain similar structures for the model perturbation, and we focus on P-wave velocity reconstruction. This bridge is built through the Born approximation, which linearizes the forward problem with respect to the model perturbation, and through asymptotic approximations of the Green functions of the wave propagation equation. We first describe the direct inversion and its ingredients, and then focus on a specific misfit function design leading to an indirect inversion. Finally, we compare this indirect inversion with more standard least-squares inversion such as FWI, enabling a focus on small, weak velocity perturbations on one side and a speed-up of the velocity perturbation reconstruction on the other. This bridge was proposed by the group led by Raul Madariaga in the early nineties, emphasizing his leading role in efficient imaging workflows for seismic velocity reconstruction, a drastic requirement at that time.
On the nonuniqueness of receiver function inversions
Ammon, C.J.; Randall, G.E.; Zandt, G.
1990-09-10
To study the resolving power of teleseismic P waveforms for receiver structure, the authors model synthetic waveforms using a time domain waveform inversion scheme beginning with a range of initial models to estimate the range of acceptable velocity structures. To speed up the waveform inversions, they implement Randall's (1989) efficient algorithms for calculating differential seismograms and include a smoothness constraint on all the resulting velocity models utilizing the jumping inversion technique of Shaw and Orcutt (1985). They present the results of more than 235 waveform inversions for one-dimensional velocity structures that indicate that the primary sensitivity of a receiver function is to high wavenumber velocity changes, and a depth-velocity product, not simply velocity. The range of slownesses in a typical receiver function study does not appear to be broad enough to remove the depth-velocity ambiguity; the inclusion of a priori information is necessary. They also present inversion results for station RSCP, located in the Cumberland Plateau, Tennessee. The results are similar to those from a previous study by Owens et al. (1984) and demonstrate the uncertainties in the resulting velocity estimate more clearly.
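The smoothness-constrained "jumping" scheme solves for the model itself, rather than a perturbation, with a roughness penalty. A generic linear sketch of that idea (G, the noise level, and the trade-off weight mu are invented; this is not Randall's code):

```python
import numpy as np

# Generic smoothness-constrained linear inversion:
#   minimize ||G m - d||^2 + mu * ||L m||^2
# with L a second-difference (roughness) operator, solved as one
# stacked least-squares problem. All quantities are illustrative.
rng = np.random.default_rng(3)
n_data, n_model = 40, 60
G = rng.normal(size=(n_data, n_model)) / np.sqrt(n_model)
m_true = np.sin(np.linspace(0, 3 * np.pi, n_model))  # smooth velocity model
d = G @ m_true + 0.01 * rng.normal(size=n_data)

L = np.diff(np.eye(n_model), n=2, axis=0)            # roughness operator
mu = 0.1

# Stack the data equations and the weighted smoothness equations
A = np.vstack([G, np.sqrt(mu) * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Even though the problem is underdetermined (40 data for 60 unknowns), the roughness penalty selects the smoothest model consistent with the data, which is the role the smoothness constraint plays in the receiver-function inversions above.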
Structural state testing using eddy current inversion
NASA Astrophysics Data System (ADS)
Dolgov, N. Y.; Chernov, L. A.
2000-05-01
The inverse eddy current problem can be described as the task of reconstructing an unknown distribution of electrical conductivity from eddy-current probe voltage measurements recorded as a function of excitation frequency. Conductivity variation may result from surface processing with substances such as hydrogen and carbon, or from surface heating. We developed the mathematical foundations and supporting software for inverse conductivity profiling. The inverse problem was solved for layered plane and cylindrical conductors. Because the inverse problem is nonlinear, we propose an iterative algorithm that can be formalized as the minimization of an error functional related to the difference between the probe voltages theoretically predicted by solving the direct problem and the measured probe voltages. Numerical results were obtained for several models of conductivity distribution. It was shown that the inverse problem can be solved exactly in the case of exact measurements. A good estimate of the true conductivity distribution is also obtained for measurement noise of about 2 percent, but in the case of 5 percent error the results are worse.
Globular Clusters, Ultracompact Dwarfs, and Dwarf Galaxies in Abell 2744 at a Redshift of 0.308
NASA Astrophysics Data System (ADS)
Lee, Myung Gyoon; Jang, In Sung
2016-11-01
We report a photometric study of globular clusters (GCs), ultracompact dwarfs (UCDs), and dwarf galaxies in the giant merging galaxy cluster Abell 2744 at z = 0.308. Color-magnitude diagrams of the point sources derived from deep F814W (rest frame r′) and F105W (rest frame I) images of Abell 2744 in the Hubble Space Telescope Frontier Field show a rich population of point sources, which have colors that are similar to those of typical GCs. These sources are as bright as -14.9 < Mr′ ≤ -11.4 (26.0 < F814W(Vega) ≤ 29.5) mag, being mostly UCDs and bright GCs in Abell 2744. The luminosity function (LF) of these sources shows a break at Mr′ ≈ -12.9 (F814W ≈ 28.0) mag, indicating a boundary between UCDs and bright GCs. The numbers of GCs and UCDs are estimated to be 1,711,640 (+589,760, -430,500) and 147 ± 26, respectively. The clustercentric radial number density profiles of the UCDs and bright GCs show similar slopes, but these profiles are much steeper than those of the dwarf galaxies and the mass density profile based on gravitational lensing analysis. We derive an LF of the red sequence galaxies for -22.9 < Mr′ ≤ -13.9 mag. The faint end of this LF is fit well by a flat power law with α = -1.14 ± 0.08, showing no faint upturn. These results support the galaxy-origin scenario for bright UCDs: they are the nuclei of dwarf galaxies that are stripped when they pass close to the center of massive galaxies or a galaxy cluster, while some of the faint UCDs are at the bright end of the GCs.
Structure and Formation of cD Galaxies: NGC 6166 in ABELL 2199
NASA Astrophysics Data System (ADS)
Bender, Ralf; Kormendy, John; Cornell, Mark E.; Fisher, David B.
2015-07-01
Hobby-Eberly Telescope (HET) spectroscopy is used to measure the velocity dispersion profile of the nearest prototypical cD galaxy, NGC 6166 in the cluster Abell 2199. We also present composite surface photometry from many telescopes. We confirm the defining feature of a cD galaxy; i.e., (we suggest), a halo of stars that fills the cluster center and that is controlled dynamically by cluster gravity, not by the central galaxy. Our HET spectroscopy shows that the velocity dispersion of NGC 6166 rises from σ ≃ 300 km s-1 in the inner r ~ 10″ to σ = 865 ± 58 km s-1 at r ~ 100″ in the cD halo. This extends published observations of an outward σ increase and shows for the first time that σ rises all the way to the cluster velocity dispersion of 819 ± 32 km s-1. We also observe that the main body of NGC 6166 moves at +206 ± 39 km s-1 with respect to the cluster mean velocity, but the velocity of the inner cD halo is ~70 km s-1 closer to the cluster velocity. These results support our picture that cD halos consist of stars that were stripped from individual cluster galaxies by fast tidal encounters. However, our photometry does not confirm the widespread view that cD halos are identifiable as an extra, low-surface-brightness component that is photometrically distinct from the inner, steep-Sérsic-function main body of an otherwise-normal giant elliptical galaxy. Instead, all of the brightness profile of NGC 6166 outside its core is described to ±0.037 V mag arcsec-2 by a single Sérsic function with index n ≃ 8.3. The cD halo is not recognizable from photometry alone. This blurs the distinction between cluster-dominated cD halos and the similarly-large-Sérsic-index halos of giant, core-boxy-nonrotating ellipticals. These halos are believed to be accreted onto compact, high-redshift progenitors ("red nuggets") by large numbers of minor mergers. They belong dynamically to their central galaxies. Still, cDs and core-boxy-nonrotating Es
The complex structure of Abell 2345: a galaxy cluster with non-symmetric radio relics
NASA Astrophysics Data System (ADS)
Boschin, W.; Barrena, R.; Girardi, M.
2010-10-01
Context. The connection of cluster mergers with the presence of extended, diffuse radio sources in galaxy clusters is still debated. Aims: We aim to obtain new insights into the internal dynamics of the cluster Abell 2345. This cluster exhibits two non-symmetric radio relics well studied through recent, deep radio data. Methods: Our analysis is based on redshift data for 125 galaxies acquired at the Telescopio Nazionale Galileo and on new photometric data acquired at the Isaac Newton Telescope. We also use ROSAT/HRI archival X-ray data. We combine galaxy velocities and positions to select 98 cluster galaxies and analyze the internal dynamics of the cluster. Results: We estimate a mean redshift < z > = 0.1789 and a line-of-sight (LOS) velocity dispersion σV ~ 1070 km s-1. The two-dimensional galaxy distribution reveals the presence of three significant peaks within a region of ~1 h70-1 Mpc (the E, NW, and SW peaks). The spectroscopic catalog confirms the presence of these three clumps. The SW and NW clumps have similar mean velocities, while the E clump has a larger mean velocity (ΔVrf ~ 800 km s-1); this structure gives rise to the two peaks we find in the cluster velocity distribution. The difficulty in separating the galaxy clumps leads to a very uncertain mass estimate M ~ 2 × 1015 h70-1 M⊙. Moreover, the E clump coincides well with the main mass peak as recovered from the weak gravitational lensing analysis and is offset to the east from the BCG by ~1.3′. The ROSAT X-ray data also show a very complex structure, mainly elongated in the E-W direction, with two (likely three) peaks in the surface brightness distribution, which, however, are offset from the positions of the peaks in the galaxy density. The observed phenomenology agrees with the hypothesis that we are looking at a complex cluster merger occurring along two directions: a major merger along the ~E-W direction (having a component along the LOS) and a minor merger in the western cluster
Linking star formation and galaxy kinematics in the massive cluster Abell 2163
NASA Astrophysics Data System (ADS)
Menacho, Veronica; Verdugo, Miguel
2015-02-01
The origin of the morphology-density relation is still an open question in galaxy evolution. It is most likely driven by the combination of efficient star formation in the highest peaks of the mass distribution at high z and transformation by environmental processes at later times as galaxies fall into more massive halos. To gain additional insights into these processes we study the kinematics, star formation and structural properties of galaxies in Abell 2163, a very massive (~4×1015 M⊙; Holz & Perlmutter 2012) merging cluster at z = 0.2. We use high resolution spectroscopy with VLT/VIMOS to derive rotation curves and dynamical masses for galaxies that show regular kinematics. Galaxies that show irregular rotation are also analysed to study the origin of their distortion. This information is combined with stellar masses and structural parameters obtained from high quality CFHT imaging. From narrow band photometry (2.2m/WFI), centered on the redshifted Hα line, we obtain star formation rates. Although our sample is still small, field and cluster galaxies lie on a similar Tully-Fisher relation as local galaxies. Controlling for additional parameters like SFRs or bulge-to-disk ratio does not affect this result. We find however that ~50% of the cluster galaxies display irregular kinematics, in contrast to what is found in the field at similar redshifts (~30%, Böhm et al. 2004) and in agreement with other studies of clusters (e.g. Bösch et al. 2013, Kutdemir et al. 2010), which points to additional processes operating in clusters that distort the galaxy kinematics.
NASA Astrophysics Data System (ADS)
Braglia, Filiberto G.; Ade, Peter A. R.; Bock, James J.; Chapin, Edward L.; Devlin, Mark J.; Edge, Alastair; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Mauskopf, Philip; Moncelsi, Lorenzo; Netterfield, Calvin B.; Ngo, Henry; Olmi, Luca; Pascale, Enzo; Patanchon, Guillaume; Pimbblet, Kevin A.; Rex, Marie; Scott, Douglas; Semisch, Christopher; Thomas, Nicholas; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Valiante, Elisabetta; Viero, Marco P.; Wiebe, Donald V.
2011-04-01
We present observations at 250, 350 and 500 μm of the nearby galaxy cluster Abell 3112 (z = 0.075) carried out with the Balloon-borne Large Aperture Submillimeter Telescope. Five cluster members are individually detected as bright submillimetre (submm) sources. Their far-infrared spectral energy distributions and optical colours identify them as normal star-forming galaxies of high mass, with globally evolved stellar populations. They all have (B-R) colours of 1.38 ± 0.08, transitional between the blue, active population and the red, evolved galaxies that dominate the cluster core. We stack to estimate the mean submm emission from all cluster members, which is determined to be 16.6 ± 2.5, 6.1 ± 1.9 and 1.5 ± 1.3 mJy at 250, 350 and 500 μm, respectively. Stacking analyses of the submm emission of cluster members reveal trends in the mean far-infrared luminosity with respect to clustercentric radius and KS-band magnitude. We find that a large fraction of submm emission comes from the boundary of the inner, virialized region of the cluster, at clustercentric distances around R500. Stacking also shows that the bulk of the submm emission arises in intermediate-mass galaxies with KS magnitude ~1 mag fainter than the characteristic magnitude KS*. The results and constraints obtained in this work will provide a useful reference for the forthcoming surveys to be conducted on galaxy clusters by Herschel.
The growth of the galaxy cluster Abell 85: mergers, shocks, stripping and seeding of clumping
NASA Astrophysics Data System (ADS)
Ichinohe, Y.; Werner, N.; Simionescu, A.; Allen, S. W.; Canning, R. E. A.; Ehlert, S.; Mernier, F.; Takahashi, T.
2015-04-01
We present the results of deep Chandra, XMM-Newton and Suzaku observations of the nearby galaxy cluster Abell 85, which is currently undergoing at least two mergers, and in addition shows evidence for gas sloshing which extends out to r ≈ 600 kpc. One of the two infalling subclusters, to the south of the main cluster centre, has a dense, X-ray bright cool core and a tail extending to the south-east. The northern edge of this tail is strikingly smooth and sharp (narrower than the Coulomb mean free path of the ambient gas) over a length of 200 kpc, while towards the south-west the boundary of the tail is blurred and bent, indicating a difference in the plasma transport properties between these two edges. The thermodynamic structure of the tail strongly supports an overall north-westward motion. We propose that a sloshing-induced tangential, ambient, coherent gas flow is bending the tail eastwards. The brightest galaxy of this subcluster is at the leading edge of the dense core, and is trailed by the tail of stripped gas, suggesting that the cool core of the subcluster has been almost completely destroyed by the time it reached its current radius of r ≈ 500 kpc. The surface-brightness excess, likely associated with gas stripped from the infalling southern subcluster, extends towards the south-east out to at least r500 of the main cluster, indicating that the stripping of infalling subclusters may seed gas inhomogeneities. The second merging subcluster appears to be a diffuse non-cool-core system. Its merger is likely supersonic with a Mach number of ≈1.4.
A multiwavelength view of the galaxy cluster Abell 523 and its peculiar diffuse radio source
NASA Astrophysics Data System (ADS)
Girardi, M.; Boschin, W.; Gastaldello, F.; Giovannini, G.; Govoni, F.; Murgia, M.; Barrena, R.; Ettori, S.; Trasatti, M.; Vacca, V.
2016-03-01
We study the structure of the galaxy cluster Abell 523 (A523) at z = 0.104 using new spectroscopic data for 132 galaxies acquired at the Telescopio Nazionale Galileo, new photometric data from the Isaac Newton Telescope, and X-ray and radio data from the Chandra and Very Large Array archives. We estimate the velocity dispersion of the galaxy population, σV = 949 (+80, -60) km s-1, and the X-ray temperature of the hot intracluster medium, kT = 5.3 ± 0.3 keV. We infer that A523 is a massive system: M200 ~ 7-9 × 1014 M⊙. The analysis of the optical data confirms the presence of two subclusters, 0.75 Mpc apart, tracing the SSW-NNE direction and dominated by the two brightest cluster galaxies (BCG1 and BCG2). The X-ray surface brightness is strongly elongated towards the NNE direction, and its peak is clearly offset from both the brightest cluster galaxies (BCGs). We confirm the presence of a 1.3 Mpc large radio halo, elongated in the ESE-WNW direction and perpendicular to the optical/X-ray elongation. We detect a significant radio/X-ray offset and radio polarization, two features which might be the result of magnetic field energy spread on large spatial scales. A523 is found consistent with most scaling relations followed by clusters hosting radio haloes, but is quite peculiar in the Pradio-LX relation: it is underluminous in the X-rays or overluminous in radio. A523 can be described as a binary head-on merger caught after a collision along the SSW-NNE direction. However, minor optical and radio features suggest a more complex cluster structure, with A523 forming at the crossing of two filaments along the SSW-NNE and ESE-WNW directions.
Origin of galactic bulges, the evolution of groups, and the distribution of Abell clusters
Barnes, J.E.
1984-01-01
Various dynamical topics connected with the origins of galaxies and large scale structure were studied. In Chapter 1 the hypothesis that galactic bulges are simply ellipticals modified by the gravitational field of exponential disks is tested with N-body experiments and an analysis of S. Kent's data-set. The author concludes that, unless disks have improbably low M/L ratios, bulges were not ellipticals; disk fields should produce significant effects, but generally in the wrong direction to explain the differences between bulges and ellipticals. Chapters 2, 3 and 4 explore the evolution of groups of galaxies under the general assumption that galaxies possess massive halos. A sequence of increasingly realistic techniques is employed, culminating in an extensive series of large direct-summation N-body simulations. It is shown that groups of halo-galaxies evolve rapidly, the galaxies becoming segregated at the center of the system. This induces a systematic bias in the observed virial parameters, underestimating the total mass of the system, which may account for the relative M/L ratios of groups and rich clusters, and for the general trend of M/L with scale size between ≈0.1 and ≈1.0 Mpc. Groups with apparent crossing times of ≈0.1 H0-1 have probably only just collapsed and are rapidly evolving toward multiple-merger systems. Chapter 5 compares the clustering statistics of rich clusters in N-body simulations with recent observations of Abell clusters. It was found that models with significant power on large scales, such as the cold particle models, have the best chance of accounting for the observations.
THE DISTRIBUTION OF DARK MATTER OVER THREE DECADES IN RADIUS IN THE LENSING CLUSTER ABELL 611
Newman, Andrew B.; Ellis, Richard S.; Treu, Tommaso; Marshall, Philip J.; Sand, David J.; Richard, Johan; Capak, Peter; Miyazaki, Satoshi
2009-12-01
We present a detailed analysis of the baryonic and dark matter distribution in the lensing cluster Abell 611 (z = 0.288), with the goal of determining the dark matter profile over an unprecedented range of cluster-centric distance. By combining three complementary probes of the mass distribution, weak lensing from multi-color Subaru imaging, strong lensing constraints based on the identification of multiply imaged sources in Hubble Space Telescope images, and resolved stellar velocity dispersion measures for the brightest cluster galaxy secured using the Keck telescope, we extend the methodology for separating the dark and baryonic mass components introduced by Sand et al. Our resulting dark matter profile samples the cluster from ≈3 kpc to 3.25 Mpc, thereby providing an excellent basis for comparisons with recent numerical models. We demonstrate that only by combining our three observational techniques can degeneracies in constraining the form of the dark matter profile be broken on scales crucial for detailed comparisons with numerical simulations. Our analysis reveals that a simple Navarro-Frenk-White (NFW) profile is an unacceptable fit to our data. We confirm earlier claims based on less extensive analyses of other clusters that the inner profile of the dark matter deviates significantly from the NFW form and find an inner logarithmic slope β flatter than 0.3 (68%; where ρDM ∝ r-β at small radii). In order to reconcile our data with cluster formation in a ΛCDM cosmology, we speculate that it may be necessary to revise our understanding of the nature of baryon-dark matter interactions in cluster cores. Comprehensive weak and strong lensing data, when coupled with kinematic information on the brightest cluster galaxy, can readily be applied to a larger sample of clusters to test the universality of these results.
Abell 262 and RXJ0341: Two Brightest Cluster Galaxies with Line Emission Blanketing a Cool Core
NASA Astrophysics Data System (ADS)
Edwards, Louise O. V.; Heng, Renita
2014-08-01
Over the last decade, integral field unit (IFU) analysis of the brightest cluster galaxies (BCGs) in several cool core clusters has revealed the central regions of these massive old red galaxies to be far from dead. Bright line emission, alongside extended X-ray emission linking nearby galaxies, is superposed upon vast dust lanes and extends out in long thin filaments from the galaxy core. Yet, to date no unifying picture has come into focus, and the activity across systems is currently seen as a grab-bag of possible emission line mechanisms. Our primary goal is to work toward a consistent picture for why the BCGs seem to be undergoing a renewed level of activity. One problem is that most of the current data remain focused on mapping the very core of the BCG, neglecting surrounding galaxies. We propose to discover the full extent of line emission in a complementary pair of BCGs. In Abell 262, an extensive dust patch screens large portions of an otherwise smooth central galaxy, whereas RXJ0341 appears to be a double-core, dust-free BCG. We will map the full extent of the line emission in order to deduce whether the line emission is a product of local interactions or of the large-scale cluster X-ray gas. The narrow band filter set and large FOV afforded by the Mayall MOSAIC-1 (MOSA) imager allow us to concurrently conduct an emission line survey of both clusters, locating all line emitting members and beginning a search for the effect of environment in the different regions (outskirts vs. cluster core) out to the virial radius. We will combine our results with publicly available data from 2MASS to determine the upper limits on specific star formation in the BCG and other cluster galaxies within the cluster virial radius.
Red population of Abell 1314 : A rest-frame narrowband photometric evolutionary analysis
NASA Astrophysics Data System (ADS)
Sreedhar, Yuvraj Harsha
2014-06-01
Red sequence galaxies form with an intense burst of star formation in the early universe and evolve passively into massive, metal-rich, old galaxies at z ˜ 0. But Abell 1314 (z = 0.034) is found to host almost all red sequence galaxy members - identified using the mz index, classified using the Principal Component Analysis technique and SDSS colour correlations - some of which show properties of low-mass, star-forming, and metal-rich galaxies. The variably spread Intra-Cluster Medium (ICM) near the core plays a vital part in influencing the evolution of these members. To study their evolution, I correlated different parameters of the rest-frame narrowband photometry with the derived luminosity-weighted mean Single Stellar Population model ages and metallicities. The study finds the member galaxies evolve differently in three different sections of the cluster: 1. the region ≤ 200 kpc hosts passively evolving old, massive systems which accumulate mass by dry, minor mergers; 2. the zone between 200-500 kpc shows systems stripped of gas (or in the process of being stripped) by ram pressure, with moderate star formation histories; 3. the outer regions (≥ 500 kpc) show low-mass red objects with blue, star-forming, Butcher-Oemler-like colours. This sort of environmental condition is known to harbour hybrid systems, such as pseudo-bulges, blue sequence E/S0s and Butcher-Oemler-like satellite cluster galaxies. Overall, the cluster is found to be poor and quiescent, with galaxies that formed by monolithic structure formation in the early universe and are now evolving through mergers and gas stripping by ram pressure.
Two and three dimensional magnetotelluric inversion
NASA Astrophysics Data System (ADS)
Booker, J. R.
Improved imaging of underground electrical structure has wide practical importance in exploring for groundwater, mineral, and geothermal resources, and in characterizing oil fields and waste sites. Because the electromagnetic inverse problem for natural sources is generally multidimensional, most imaging algorithms saturate available computer power long before they can deal with complete data sets. We have developed an algorithm to directly invert large multidimensional magnetotelluric data sets that is orders of magnitude faster than competing methods. In the past year, we have extended the two-dimensional (2D) version to permit incorporation of geological constraints, have developed ways to assess model resolution, and have completed work on an accurate and fast three-dimensional (3D) forward algorithm. We are proposing to further enhance the capabilities of the 2D code and to incorporate the 3D forward code in a fully 3D inverse algorithm. Finally, we will embark on an investigation of related EM imaging techniques which may have the potential for further increasing resolution.
Two and three dimensional magnetotelluric inversion
Booker, J.R.
1994-07-01
Improved imaging of underground electrical structure has wide practical importance in exploring for groundwater, mineral and geothermal resources, and in characterizing oil fields and waste sites. Because the electromagnetic inverse problem for natural sources is generally multi-dimensional, most imaging algorithms saturate available computer power long before they can deal with complete data sets. We have developed an algorithm to directly invert large multi-dimensional magnetotelluric data sets that is orders of magnitude faster than competing methods. In the past year, we have extended the two-dimensional (2D) version to permit incorporation of geological constraints, have developed ways to assess model resolution and have completed work on an accurate and fast three-dimensional (3D) forward algorithm. We are proposing to further enhance the capabilities of the 2D code and to incorporate the 3D forward code in a fully 3D inverse algorithm. Finally, we will embark on an investigation of related EM imaging techniques which may have the potential for further increasing resolution.
Inverse kinematics problem in robotics using neural networks
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Lawrence, Charles
1992-01-01
In this paper, multilayer feedforward networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
Bolens, Guillemette
2011-01-01
This article grapples with the question of the corpse through two particular literary texts. Rather than an elucidation of the physiological principle of the human body by means of dissection, the play Mactatio Abel, written in England in the 15th century, stages the difficulty of the relation to the corpse, via an amplification of the biblical narrative of Abel's murder by Cain. As for Chaucer's work, The Book of the Duchess, it rewrites Ovid's and Machaut's texts featuring the figure of Morpheus in a way that distinguishes between an imitation of the living and its simulacrum in the sense Wolfgang Iser gives this concept. Chaucer's Morpheus, instead of promoting verisimilitude, forbids it. Indeed, he animates a corpse from within instead of simulating an apparition of the deceased. The simulacrum, rather than a mimetic copy of the real, blocks all representational illusion, in order to formulate absence. The readability of the corpse in both works is relational. Both literary texts express the corpse as being always already grounded in a relational and narratorial space.
NASA Astrophysics Data System (ADS)
Munari, E.; Grillo, C.; De Lucia, G.; Biviano, A.; Annunziatella, M.; Borgani, S.; Lombardi, M.; Mercurio, A.; Rosati, P.
2016-08-01
In this Letter we compare the abundance of the member galaxies of a rich, nearby (z = 0.09) galaxy cluster, Abell 2142, with that of halos of comparable virial mass extracted from sets of state-of-the-art numerical simulations, both collisionless at different resolutions and with the inclusion of baryonic physics in the form of cooling, star formation, and feedback by active galactic nuclei. We also use two semi-analytical models to account for the presence of orphan galaxies. The photometric and spectroscopic information, taken from the Sloan Digital Sky Survey Data Release 12 database, allows us to estimate the stellar velocity dispersion of member galaxies of Abell 2142. This quantity is used as a proxy for the total mass of secure cluster members and is properly compared with that of subhalos in simulations. We find that simulated halos have a statistically significant (≳7σ confidence level) smaller amount of massive (circular velocity above 200 km s^-1) subhalos, even before accounting for the possible incompleteness of observations. These results corroborate the findings from a recent strong lensing study of the Hubble Frontier Fields galaxy cluster MACS J0416 and suggest that the observed difference is already present at the level of dark matter (DM) subhalos and is not solved by introducing baryonic physics. A deeper understanding of this discrepancy between observations and simulations will provide valuable insights into the impact of the physical properties of DM particles and the effect of baryons on the formation and evolution of cosmological structures.
Application of full-wave inversion to real crosshole data
Song, Z.; Williamson, P.R.
1994-12-31
A 2.5D acoustic frequency-domain full-wave inversion method was applied to a real dataset from an open-cast coal exploration site. The only data processing required was the removal of tube waves, because no shear wave arrivals were observed. The inversion is efficient because only a few frequency components are needed. The authors encounter two site-specific problems (source inconsistency and anisotropy) which are addressed by simple adaptations of the inversion algorithm. High resolution results are achieved for both velocity and attenuation reconstructions. The full-wave inversion method combines the advantages of first-arrival travel-time tomography and reflected-wave migration. To evaluate the inversion result, they model time-domain traces using a source signature estimated by fitting the frequency-domain response of the reconstructed model to the observed data across the spectrum. The synthetic traces match the early arrivals in the real data reasonably well.
NASA Astrophysics Data System (ADS)
Clare, R. B.; Levinger, J. S.
1981-02-01
We use the formalism of hyperspherical harmonics to calculate several moments for the triton photoeffect, for a Volkov spin-independent potential. First, we improve the accuracy of Maleki's calculations of the moments σ2 and σ3 by including more terms in the hyperspherical expansion. We also calculate moments σ0 and σ1 for a Serber mixture. We find reasonable agreement between our moments found by sum rules and those found from the cross sections calculated by Fang et al. and Levinger-Fitzgibbon. We then develop a technique for inverting a finite number of moments by assuming that the cross section can be written as a sum of several Laguerre polynomials multiplied by a decreasing exponential. We test our inversion technique successfully on several model potentials. We then modify it and apply it to the five moments (σ-1 to σ3) for a force without exchange, and find fair agreement with Fang's values of the cross section. Finally, we apply the inversion technique to our three moments (σ-1, σ0, and σ1) for a Serber mixture, and find reasonable agreement with Gorbunov's measurements of the 3He photoeffect. NUCLEAR REACTIONS: Triton photoeffect, hyperspherical harmonics, moments of photoeffect, inversion of moments.
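The moment-inversion ansatz described above, a decreasing exponential times a short Laguerre series, can be illustrated with a toy numerical example. The coefficients and the number of moments below are made up for illustration; only the functional form σ(E) = e^(-E) Σ c_n L_n(E) and the linearity of the moments in the coefficients follow the abstract. The key fact used is the closed form ∫₀^∞ e^(-E) E^k L_n(E) dE = (-1)^n C(k, n) k!.

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial.laguerre import lagval

# Toy model (not the nuclear data): sigma(E) = exp(-E) * sum_n c_n L_n(E).
# The moments mu_k = int_0^inf E^k sigma(E) dE are linear in c_n, with
#   int_0^inf e^{-E} E^k L_n(E) dE = (-1)^n * C(k, n) * k!   (0 for k < n),
# so a handful of moments determines the coefficients exactly.
n_terms = 4
c_true = np.array([1.0, -0.4, 0.2, -0.05])   # made-up "true" coefficients

# "Measured" moments, computed by trapezoidal quadrature from the model.
E = np.linspace(0, 60, 600001)
sigma = np.exp(-E) * lagval(E, c_true)
dE = E[1] - E[0]

def trap(y):
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * dE))

mu = np.array([trap(E ** k * sigma) for k in range(n_terms)])

# Lower-triangular moment matrix and its inversion to recover c_n.
M = np.array([[(-1) ** n * comb(k, n) * factorial(k) for n in range(n_terms)]
              for k in range(n_terms)])
c_rec = np.linalg.solve(M, mu)
print(c_rec)
```

Because the moment matrix is lower triangular, adding one more moment refines the expansion without disturbing the coefficients already determined, which is what makes this ansatz convenient for inversion from a small number of moments.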
ERIC Educational Resources Information Center
Rodgers, Joann Ellison
2010-01-01
The notion that vitamins, minerals, and other "supplemental" nutrients profoundly change behavior, mood, and intellect has origins as old as recorded history. Research has indeed suggested connections between nutrient deficiencies and behavior problems, but correlations are not the same as causality. This "Abell Report" is an…
Analytical inversion formula for uniformly attenuated fan-beam projections
Weng, Y.; Zeng, G.L.; Gullberg, G.T.
1997-04-01
In deriving algorithms to reconstruct single photon emission computed tomography (SPECT) projection data, it is important that the algorithm compensates for photon attenuation in order to obtain quantitative reconstruction results. A convolution backprojection algorithm was derived by Tretiak and Metz to reconstruct two-dimensional (2-D) transaxial slices from uniformly attenuated parallel-beam projections. Using a transformation of coordinates, this algorithm can be modified to obtain a formulation useful for reconstructing uniformly attenuated fan-beam projections. Unlike that for parallel-beam projections, this formulation does not produce a filtered backprojection reconstruction algorithm, but instead yields an inverse integral operator with a spatially varying kernel. This algorithm thus requires more computation time than does the filtered backprojection reconstruction algorithm for the uniformly attenuated parallel-beam case. However, the fan-beam reconstructions demonstrate the same image quality as that of parallel-beam reconstructions.
Triaxiality, principal axis orientation and non-thermal pressure in Abell 383
NASA Astrophysics Data System (ADS)
Morandi, Andrea; Limousin, Marceau
2012-04-01
While clusters of galaxies are regarded as one of the most important cosmological probes, the conventional spherical modelling of the intracluster medium and the dark matter (DM), and the assumption of strict hydrostatic equilibrium (i.e. the equilibrium gas pressure is provided entirely by thermal pressure), are very approximate at best. Extending our previous works, we developed further a method to reconstruct for the first time the full 3D structure (triaxial shape and principal-axis orientation) of both DM and intracluster (IC) gas, and the level of non-thermal pressure of the IC gas. We outline an application of our method to the galaxy cluster Abell 383, taken as part of the Cluster Lensing and Supernova Survey with Hubble (CLASH) multicycle treasury programme, presenting results of a joint analysis of X-ray and strong lensing measurements. We find that the intermediate-major and minor-major axis ratios of the DM are 0.71 ± 0.10 and 0.55 ± 0.06, respectively, and the major axis of the DM halo is inclined with respect to the line of sight by 21.1° ± 10.1°. The level of non-thermal pressure has been evaluated to be about 10 per cent of the total energy budget. We discuss the implications of our method for the viability of the cold dark matter (CDM) scenario, focusing on the concentration parameter C and the inner slope of the DM, γ, since the cuspiness of DM density profiles in the central regions is one of the critical tests of the CDM paradigm for structure formation: we measure γ = 1.02 ± 0.06 on scales down to 25 kpc, and C = 4.76 ± 0.51, values which are close to the predictions of the standard model, providing further evidence in support of the CDM scenario. Our method allows us to recover the 3D physical properties of clusters in a bias-free way, overcoming the limitations of the standard spherical modelling and enhancing the use of clusters as more precise cosmological probes.
Disentangling the ICL with the CHEFs: Abell 2744 as a case study
NASA Astrophysics Data System (ADS)
Jimenez-Teja, Yolanda; Dupke, Renato a.
2015-08-01
The intracluster light (ICL) is important for understanding the metal enrichment of the intracluster gas and for constraining cosmological parameters independently of other methods. However, its measurement is not trivial, due to the necessity of disentangling the light of stars locked up in galaxies from the ICL proper. Currently, there is no standard method to efficiently measure the ICL (Rudick et al. 2011, ApJ, 732, 48), and different approaches relying on the binding energy of the cluster galaxies, the density of the material, or the surface brightness distribution have been tried. Moreover, a suitable way to disentangle the limits of the brightest cluster galaxy (BCG) and the ICL still has not been developed. The CHEFs (Chebyshev-Fourier bases; Jiménez-Teja & Benítez 2012, ApJ, 745, 150) are a mathematical tool especially designed to model the two-dimensional light distribution of galaxies. We use the CHEFs and tools from differential geometry to infer the light contribution of the ICL to the total brightness, without imposing any artificial thresholds and avoiding the ambiguity introduced by free parameters that are usually set in these studies (Rudick et al. 2011). We use the extremely deep optical images of Abell 2744, the Pandora cluster, a multi-cluster merger observed by the Hubble Frontier Fields project, to show the efficiency of this new method. The CHEFs can accurately fit and remove all the galaxies close to the cluster center, including the BCG. The limits of the BCG are marked out by determining the points where the surface curvature changes, thus disentangling the ICL from the BCG light in a completely natural way. Once we have the residual image containing just ICL and background, we estimate the latter from images of individual pointings close to the main Pandora field. We finally estimate the ICL to be ~24% of the total light, which is very consistent with the predictions from numerical simulations (Montes
Ultraviolet Imaging of the cD Galaxy in Abell 1795
NASA Astrophysics Data System (ADS)
Smith, Eric P.; Neff, Susan G.; Smith, Andrew M.; Stecher, Theodore P.; Bohlin, Ralph C.; O'Connell, Robert W.; Roberts, Morton S.
1995-12-01
We present an image of the Abell 1795 cD galaxy and its environment obtained with the Goddard Ultraviolet Imaging Telescope (UIT). Our ultraviolet (UV) image was obtained during the March 1995 Astro-2 Space Shuttle mission using a filter centered at ~1520 Å (Δλ = 354 Å). The ultraviolet image resulting from a 1310 second exposure has stellar images with ~5.0 arcsec FWHM. We compare these data to published optical, radio (VLA) and archival HST observations. This richness class 2 cluster is known to contain a large cooling flow (Ṁ ~ 300 M_⊙ yr^-1) and its cD galaxy contains a relatively bright yet small radio source (4C26.42). Previous optical observations have shown the cD galaxy possesses a system of Hα filaments (van Breugel et al. 1984, ApJ, 276, 79), whose surface brightness is consistent with models in which the emission lines arise from radiatively regulated accretion (i.e. cooling X-ray gas). Broad-band optical investigations have revealed the presence of "blue lobes" near the cD galaxy center. These regions are posited to contain young stars formed via the interaction of a radio jet and the intercluster medium (McNamara & O'Connell 1993, AJ, 105, 417). The HST observations show the elliptical galaxy has an easily resolved dust lane structure near its center. The cD galaxy is very bright in the ultraviolet (m_1520 = 15.1) and exhibits a strong radial color gradient, with the center being bluer. Indeed, UV light is detected from the central 7.6 arcsec × 16.1 arcsec (8.4 × 17.7 kpc), which can be compared with the optical extent of 38 arcsec × 70 arcsec. We discuss the implications that our new UV data have for the high mass star formation rate, and examine how our photometry fits in with previous models for the unusual features present in the system. Most of the other cluster galaxies are not detected. We report photometry and predicted star formation rates for those that were seen along with upper limits for those galaxies not
Diffuse light and building history of the galaxy cluster Abell 2667
NASA Astrophysics Data System (ADS)
Covone, G.; Adami, C.; Durret, F.; Kneib, J.-P.; Lima Neto, G. B.; Slezak, E.
2006-12-01
Aims: We searched for diffuse intracluster light in the galaxy cluster Abell 2667 (z = 0.233) from HST images in three broad-band filters. Methods: We applied an iterative multi-scale wavelet analysis and reconstruction technique to these images, which allows us to subtract stars and galaxies from the original images. Results: We detect a zone of diffuse emission southwest of the cluster center (DS1) and a second faint object (ComDif) within DS1. Another diffuse source (DS2) may be detected at a lower confidence level northeast of the center. These sources of diffuse light contribute 10-15% of the total visible light in the cluster. Whether they are independent entities or part of the very elliptical external envelope of the central galaxy remains unclear. Deep VLT VIMOS integral field spectroscopy reveals a faint continuum at the positions of DS1 and ComDif but does not allow a redshift to be computed, so we cannot conclude whether these sources are part of the central galaxy or not. A hierarchical substructure detection method reveals the presence of several galaxy pairs and groups defining a direction similar to the one drawn by the DS1 - central galaxy - DS2 axis. The analysis of archival XMM-Newton and Chandra observations shows X-ray emission elongated in the same direction. The X-ray temperature map shows the presence of a cool core, a broad cool zone stretching from north to south, and hotter regions towards the northeast, southwest, and northwest. This might suggest shock fronts along these directions produced by infalling material, even if uncertainties on the temperature determination far from the center remain quite large. Conclusions: These various data are consistent with a picture in which the diffuse sources are concentrations of tidal debris and harassed matter expelled from infalling galaxies by tidal stripping and undergoing accretion onto the central cluster galaxy; as such, they are expected to be found along the main infall directions. Note, however
Unstructured discontinuous Galerkin for seismic inversion.
van Bloemen Waanders, Bart Gustaaf; Ober, Curtis Curry; Collis, Samuel Scott
2010-04-01
This abstract explores the potential advantages of discontinuous Galerkin (DG) methods for the time-domain inversion of media parameters within the earth's interior. In particular, DG methods enable local polynomial refinement to better capture localized geological features within an area of interest while also allowing the use of unstructured meshes that can accurately capture discontinuous material interfaces. This abstract describes our initial findings when using DG methods combined with Runge-Kutta time integration and adjoint-based optimization algorithms for full-waveform inversion. Our initial results suggest that DG methods allow great flexibility in matching the media characteristics (faults, ocean bottom and salt structures) while also providing higher fidelity representations in target regions. Time-domain inversion using discontinuous Galerkin on unstructured meshes and with local polynomial refinement is shown to better capture localized geological features and accurately capture discontinuous-material interfaces. These approaches provide the ability to surgically refine representations in order to improve predicted models for specific geological features. Our future work will entail automated extensions to directly incorporate local refinement and adaptive unstructured meshes within the inversion process.
Inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Orlande, Helcio Rangel Barreto
We present the solution of the following inverse problems: (1) Inverse Problem of Estimating Interface Conductance Between Periodically Contacting Surfaces; (2) Inverse Problem of Estimating Interface Conductance During Solidification via Conjugate Gradient Method; (3) Determination of the Reaction Function in a Reaction-Diffusion Parabolic Problem; and (4) Simultaneous Estimation of Thermal Diffusivity and Relaxation Time with Hyperbolic Heat Conduction Model. Also, we present the solution of a direct problem entitled: Transient Thermal Constriction Resistance in a Finite Heat Flux Tube. The Conjugate Gradient Method with Adjoint Equation was used in chapters 1-3. The more general function estimation approach was treated in these chapters. In chapter 1, we solve the inverse problem of estimating the timewise variation of the interface conductance between periodically contacting solids, under quasi-steady-state conditions. The present method is found to be more accurate than the B-Spline approach for situations involving small periods, which are the most difficult on which to perform the inverse analysis. In chapter 2, we estimate the timewise variation of the interface conductance between casting and mold during the solidification of aluminum. The experimental apparatus used in this study is described. In chapter 3, we present the estimation of the reaction function in a one-dimensional parabolic problem. A comparison of the present function estimation approach with the parameter estimation technique, using B-Splines to approximate the reaction function, revealed that the use of function estimation reduces the computer time requirements. In chapter 4 we present a finite difference solution for the transient constriction resistance in a cylinder of finite length with a circular contact surface. A numerical grid generation scheme was used to concentrate grid points in the regions of high temperature gradients in order to reduce discretization errors. In chapter 6, we
Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds
NASA Technical Reports Server (NTRS)
Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.
2001-01-01
Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement means, satellite remote sensing seems to be the most promising. Since most of the cloud algorithms proposed so far are for daytime use, utilizing solar radiation, Minnis et al. (1998) developed a nighttime algorithm using the 3.7-, 11- and 12-micron channels. Their algorithm, however, has the drawback that it is not able to treat temperature inversion cases. We update their algorithm, incorporating a new parameterization by Arduini et al. (1999) which is valid for temperature inversion cases. This updated algorithm has been applied to GOES satellite data and reasonable retrieval results were obtained.
Inverse Functions and their Derivatives.
ERIC Educational Resources Information Center
Snapper, Ernst
1990-01-01
Presented is a method of interchanging the x-axis and y-axis for viewing the graph of the inverse function. Discussed are the inverse function and the usual proofs that are used for the function. (KR)
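The derivative rule the note builds on, (f⁻¹)'(y) = 1/f'(f⁻¹(y)), is easy to check numerically. The function f(x) = x³ + x below is an arbitrary illustrative choice (strictly increasing, hence invertible on all of ℝ); the inverse is computed by bisection.

```python
# Numerical check of the inverse-function derivative rule
#   (f^{-1})'(y) = 1 / f'(f^{-1}(y))
# for the strictly increasing example f(x) = x^3 + x.

def f(x):  return x**3 + x
def fp(x): return 3*x**2 + 1          # f'(x)

def f_inv(y, lo=-10.0, hi=10.0, tol=1e-12):
    # Bisection works because f is strictly increasing on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

y = 10.0                              # f(2) = 10, so f_inv(10) = 2
x = f_inv(y)
rule = 1.0 / fp(x)                    # derivative via the rule: 1/13
h = 1e-6
numeric = (f_inv(y + h) - f_inv(y - h)) / (2 * h)   # central difference
print(x, rule, numeric)
```

Geometrically this is the graph-reflection idea from the article: reflecting the graph of f across the line y = x swaps the roles of run and rise, so the slope of the inverse at y is the reciprocal of the slope of f at the corresponding x.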
Intersections, ideals, and inversion
Vasco, D.W.
1998-10-01
Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above by 50. The best fitting structure is dominantly one-dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons.
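The idea of bounding and enumerating the solution set of a polynomial system can be shown on a toy example (nothing like the paper's magnetotelluric system, which is far larger): eliminating one variable reduces the system to a univariate polynomial, whose degree bounds the number of solutions, exactly the Bézout-type counting the abstract refers to.

```python
import numpy as np

# Toy polynomial system:  x^2 + y^2 = 5,   x*y = 2.
# Substituting y = 2/x eliminates y and gives the univariate polynomial
#   x^4 - 5 x^2 + 4 = 0,
# so the solution set is zero-dimensional (finite), with at most
# deg(f1) * deg(f2) = 2 * 2 = 4 solutions -- the Bezout bound.
elim = [1, 0, -5, 0, 4]                 # coefficients of x^4 - 5x^2 + 4
xs = np.sort(np.roots(elim).real)       # all four roots happen to be real
solutions = [(x, 2 / x) for x in xs]

# Each recovered pair satisfies both original equations.
for x, y in solutions:
    assert abs(x**2 + y**2 - 5) < 1e-9 and abs(x * y - 2) < 1e-9
print(solutions)
```

For larger systems the elimination step is done with Gröbner bases or resultants rather than by hand, but the structure of the argument (finiteness from a univariate eliminant, a degree bound on the count) is the same.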
Sensitivity analysis of distributed volcanic source inversion
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (keeping some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g. Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep
NASA Astrophysics Data System (ADS)
Moss, C.; Whittle, M.
2000-09-01
We have undertaken a survey of Hα emission in a substantially complete sample of CGCG galaxies of types Sa and later within 1.5 Abell radii of the centres of eight low-redshift Abell clusters (Abell 262, 347, 400, 426, 569, 779, 1367 and 1656). Some 320 galaxies were surveyed, of which 116 were detected in emission (39 per cent of spirals, 75 per cent of peculiars). Here we present previously unpublished data for 243 galaxies in seven clusters. Detected emission is classified as `compact' or `diffuse'. From an analysis of the full survey sample, we confirm our previous identification of compact and diffuse emission with circumnuclear starburst and disc emission respectively. The circumnuclear emission is associated either with the presence of a bar, or with a disturbed galaxy morphology indicative of ongoing tidal interactions (whether galaxy-galaxy, galaxy-group, or galaxy-cluster). The frequency of such tidally induced (circumnuclear) starburst emission in spirals increases from regions of lower to higher local galaxy surface density, and from clusters with lower to higher central galaxy space density. The percentages of spirals classed as disturbed and of galaxies classified as peculiar show a similar trend. These results suggest that tidal interactions for spirals are more frequent in regions of higher local density and for clusters with higher central galaxy density. The prevalence of such tidal interactions in clusters is expected from recent theoretical modelling of clusters with a non-static potential undergoing collapse and infall. Furthermore, in accord with this picture, we suggest that peculiar galaxies are predominantly ongoing mergers. We conclude that tidal interactions are likely to be the main mechanism for the transformation of spirals to S0s in clusters. This mechanism operates more efficiently in higher density environments, as is required by the morphological type-local surface density (T-Σ) relation for galaxies in clusters. For regions of
Joint inversion of surface and borehole magnetic amplitude data
NASA Astrophysics Data System (ADS)
Li, Zelin; Yao, Changli; Zheng, Yuanman; Yuan, Xiaoyu
2016-04-01
3D magnetic inversion for susceptibility distribution is a powerful tool in quantitative interpretation of magnetic data in mineral exploration. However, the inversion and interpretation of such data are faced with two problems. One problem is the poor imaging results of deep sources when only surface data are inverted. The other is the unknown total magnetization directions of sources when strong remanence exists. To deal with these problems simultaneously, we propose a method through the joint inversion of surface and borehole magnetic amplitude data. In this method, we first transform both surface and borehole magnetic data to magnetic amplitude data that are less sensitive to the directions of total magnetization, and then perform a joint inversion of the whole amplitude data to generate a 3D susceptibility distribution. The amplitude inversion algorithm uses Tikhonov regularization and imposes a positivity constraint on the effective susceptibility defined as the ratio of magnetization magnitude over the geomagnetic field strength. In addition, a distance-based weighting function is used to make the algorithm applicable to joint data sets. To solve this positivity-constrained inversion problem efficiently, an appropriate optimization method must be chosen. We first use an interior-point method to incorporate the positivity constraint into the total objective function, and then minimize the objective function via a Gauss-Newton method due to the nonlinearity introduced by the positivity constraint and the amplitude data. To further improve the efficiency of the inversion algorithm, we use a conjugate gradient method to carry out the fast matrix-vector multiplication during the minimization. To verify the utility of the proposed method, we invert the synthetic and field data using three inversion methods, including the joint inversion of surface and borehole three-component magnetic data, the inversion of surface magnetic amplitude data, and the proposed joint
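The amplitude transform at the heart of this method is simple enough to sketch. The snippet below is an illustrative stand-in (the function name and test vector are invented for this example, not taken from the paper): it converts three-component anomaly data to amplitude data and shows that a rotation of the anomaly vector, a crude proxy for a change of total-magnetization direction, leaves the amplitude unchanged.

```python
import numpy as np

def amplitude_data(bx, by, bz):
    """Convert three-component magnetic anomaly data to amplitude data.

    The amplitude |B| = sqrt(Bx^2 + By^2 + Bz^2) is far less sensitive to
    the (possibly remanent, unknown) total-magnetization direction than
    any single field component.
    """
    return np.sqrt(np.asarray(bx)**2 + np.asarray(by)**2 + np.asarray(bz)**2)

# Illustration: rotating the anomaly vector changes every component
# but leaves the amplitude untouched.
b = np.array([30.0, 40.0, 0.0])     # nT, arbitrary example vector
theta = 0.7                         # rotation angle in radians
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
b_rot = rot @ b
```

Here `amplitude_data(*b)` and `amplitude_data(*b_rot)` agree exactly, which is the property that makes amplitude data attractive when the magnetization direction is unknown.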
Inversion based on computational simulations
Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.
1998-09-01
A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal.
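Adjoint differentiation as described above can be sketched on a toy diffusion-like stepper. The code is a minimal illustration under assumed dynamics u_{t+1} = u_t + dt * d * (L u_t) with a per-cell parameter vector d; it is not the authors' code, and all names are hypothetical. One forward sweep plus one backward (adjoint) sweep yields the gradient of the objective with respect to every parameter at once.

```python
import numpy as np

def simulate(d, u0, L, dt, nsteps):
    """Explicit time-stepping u_{t+1} = u_t + dt * d * (L u_t); stores all states."""
    us = [u0]
    for _ in range(nsteps):
        us.append(us[-1] + dt * d * (L @ us[-1]))
    return us

def objective_and_adjoint_gradient(d, u0, L, dt, nsteps, data):
    """J = 0.5 ||u_T - data||^2 and dJ/dd via one forward and one adjoint sweep."""
    us = simulate(d, u0, L, dt, nsteps)
    r = us[-1] - data
    J = 0.5 * float(r @ r)
    lam = r                                   # adjoint "final condition"
    grad = np.zeros_like(d)
    for t in range(nsteps - 1, -1, -1):
        grad += dt * (L @ us[t]) * lam        # accumulate dJ/dd_i
        lam = lam + dt * (L.T @ (d * lam))    # adjoint recursion backward in time
    return J, grad
```

The cost of the gradient is roughly one extra simulation, independent of the number of parameters, which is what makes gradient-based optimization of large simulations feasible.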
Structure-constrained image-guided inversion of geophysical data
NASA Astrophysics Data System (ADS)
Zhou, Jieyi
The regularization term in the objective function of an inverse problem is equivalent to the "model covariance" in Tarantola's terminology. It is not entirely reasonable to consider the model covariance to be isotropic and homogeneous, as done in classical Tikhonov regularization, because the correlation relationships among model cells are likely to change with different directions and locations. The structure-constrained image-guided inversion method, presented in this thesis, aims to solve this problem, and can be used to integrate different types of geophysical data and geological information. The method is first theoretically developed and successfully tested with electrical resistivity data. Then it is applied to hydraulic tomography, and promising hydraulic conductivity models are obtained as well. With a correct guiding image, the image-guided inversion results not only follow the correct structure patterns, but also are closer to the true model in terms of parameter values, when compared with the conventional inversion results. To further account for the uncertainty in the guiding image, a Bayesian inversion scheme is added to the image-guided inversion algorithm. Each geophysical model parameter and geological (structure) model parameter is described by a probability density. Using the data misfit of image-guided inversion of the geophysical data as criterion, a stochastic (image-guided) inversion algorithm allows one to optimize both the geophysical model and the geological model at the same time. The last problem discussed in this thesis is that image-guided inversion and interpolation can help reduce non-uniqueness and improve resolution when utilizing spectral induced polarization data and petrophysical relationships to estimate permeability.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
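A minimal generational genetic algorithm, with tournament selection, one-point crossover, and bit-flip mutation, can be sketched as follows (an illustrative toy, not the software tool described in the report):

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=0.02, rng=None):
    """Minimal generational GA maximizing `fitness` over bit strings:
    binary tournament selection, one-point crossover, bit-flip mutation."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:
                cut = rng.randint(1, n_bits - 1)            # one-point crossover
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):
                for i in range(n_bits):
                    if rng.random() < p_mut:                # bit-flip mutation
                        c[i] ^= 1
                nxt.append(c)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# "One-max" toy problem: fitness is the number of ones in the string.
best = genetic_algorithm(sum)
```

On this toy problem the population converges rapidly toward the all-ones string, illustrating the selection/crossover/mutation loop that all basic GAs share.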
Kalman plus weights: a time scale algorithm
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
2001-01-01
KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
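The BTSE weighting described above can be illustrated with a small sketch (the function name and the numbers are invented for this example; this is not the KPW implementation): the ensemble time offset is a weighted average of each clock's measured-minus-predicted deviation, with weights inversely proportional to the clocks' white-FM variances.

```python
def btse_offset(x, x_pred, white_fm_var):
    """Basic time scale equation sketch: ensemble time offset as a
    weighted average of measured-minus-predicted clock deviations.

    Weights are inversely proportional to each clock's white-FM variance
    and normalized to sum to one, so quieter clocks dominate the scale.
    """
    w = [1.0 / v for v in white_fm_var]
    s = sum(w)
    w = [wi / s for wi in w]
    return sum(wi * (xi - xpi) for wi, xi, xpi in zip(w, x, x_pred))
```

For example, two clocks with white-FM variances 1 and 3 receive weights 0.75 and 0.25, so the quieter clock contributes three times as strongly to the ensemble.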
Inverse Problems in Classical and Quantum Physics
NASA Astrophysics Data System (ADS)
Almasy, Andrea A.
2009-12-01
The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. We use a functional method which allows us to extract within rather general assumptions phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. In this thesis, also two approaches of EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more then one set of measurements. A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung and noninvasive monitoring of heart function and blood flow.
The NYU inverse swept wing code
NASA Technical Reports Server (NTRS)
Bauer, F.; Garabedian, P.; Mcfadden, G.
1983-01-01
An inverse swept wing code is described that is based on the widely used transonic flow program FLO22. The new code incorporates a free boundary algorithm permitting the pressure distribution to be prescribed over a portion of the wing surface. A special routine is included to calculate the wave drag, which can be minimized in its dependence on the pressure distribution. An alternate formulation of the boundary condition at infinity was introduced to enhance the speed and accuracy of the code. A FORTRAN listing of the code and a listing of a sample run are presented. There is also a user's manual as well as glossaries of input and output parameters.
NASA Astrophysics Data System (ADS)
Trigub, R. M.
2015-08-01
We study the convergence of linear means of the Fourier series $\sum_{k=-\infty}^{+\infty}\lambda_{k,\varepsilon}\hat{f}_k e^{ikx}$ of a function $f\in L^1[-\pi,\pi]$ to $f(x)$ as $\varepsilon\searrow 0$ at all points at which the derivative $\bigl(\int_0^x f(t)\,dt\bigr)'$ exists (i.e. at the d-points). Sufficient conditions for the convergence are stated in terms of the factors $\{\lambda_{k,\varepsilon}\}$ and, in the case of $\lambda_{k,\varepsilon}=\varphi(\varepsilon k)$, in terms of the condition that the functions $\varphi$ and $x\varphi'(x)$ belong to the Wiener algebra $A(\mathbb{R})$. We also study a new problem concerning the convergence of means of the Abel-Poisson type, $\sum_{k=-\infty}^{\infty} r^{\psi(|k|)}\hat{f}_k e^{ikx}$, as $r\nearrow 1$.
Inversion of magnetotelluric data in a sparse model domain
NASA Astrophysics Data System (ADS)
Nittinger, Christian G.; Becken, Michael
2016-06-01
The inversion of magnetotelluric data into subsurface electrical conductivity poses an ill-posed problem. Smoothing constraints are widely employed to estimate a regularized solution. Here, we present an alternative inversion scheme that estimates a sparse representation of the model in a wavelet basis. The objective of the inversion is to determine the few non-zero wavelet coefficients which are required to fit the data. This approach falls into the class of sparsity constrained inversion schemes and minimizes the combination of the data misfit in a least squares ℓ2 sense and of a model coefficient norm in a ℓ1 sense (ℓ2-ℓ1 minimization). The ℓ1 coefficient norm renders the solution sparse in a suitable representation such as the multi-resolution wavelet basis, but does not impose explicit structural penalties on the model as it is the case for ℓ2 regularization. The presented numerical algorithm solves the mixed ℓ2-ℓ1 norm minimization problem for the non-linear magnetotelluric inverse problem. We demonstrate the feasibility of our algorithm on synthetic 2-D MT data as well as on a real data example. We found that sparse models can be estimated by inversion and that the spatial distribution of non-vanishing coefficients indicates regions in the model which are resolved.
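The ℓ2-ℓ1 mechanism can be illustrated on a linear toy problem with the classic iterative shrinkage-thresholding algorithm (ISTA). The paper's own solver handles the nonlinear magnetotelluric problem; the sketch below is only a schematic analogue with invented names, showing how the ℓ1 term drives small coefficients exactly to zero.

```python
import numpy as np

def ista(A, y, lam, step, n_iter=100):
    """Iterative shrinkage-thresholding for min_m 0.5||A m - y||^2 + lam ||m||_1.

    Each iteration takes a gradient step on the l2 misfit and then applies
    soft-thresholding, which zeroes out small coefficients and so yields a
    sparse model, the same l2-l1 trade-off used in sparsity-constrained
    inversion schemes.
    """
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ m - y)                                      # misfit gradient
        z = m - step * g
        m = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return m
```

With `A` the identity, ISTA reduces to soft-thresholding the data itself, which makes the sparsifying effect of the ℓ1 penalty easy to see.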
Source-independent full waveform inversion of seismic data
Lee, Ki Ha; Kim, Hee Joon
2002-03-20
A rigorous full waveform inversion of seismic data has been a challenging subject partly because of the lack of precise knowledge of the source. Since currently available approaches involve some form of approximations to the source, inversion results are subject to the quality and the choice of the source information used. We propose a new full waveform inversion methodology that does not involve source spectrum information. Thus potential inversion errors due to source estimation can be eliminated. A gather of seismic traces is first Fourier-transformed into the frequency domain and a normalized wavefield is obtained for each trace in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the gather, so the complex-valued normalized wavefield is dimensionless. The source spectrum is eliminated during the normalization procedure. With its source spectrum eliminated, the normalized wavefield allows us to construct an inversion algorithm without source information. The inversion algorithm minimizes misfits between the measured normalized wavefield and the numerically computed normalized wavefield. The proposed approach has been successfully demonstrated using a simple two-dimensional scalar problem.
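The normalization step can be sketched directly. Assuming each trace is the source wavelet convolved with an earth response, dividing every trace's spectrum by that of a reference trace cancels the source term. The code below is an illustrative sketch with invented names, not the authors' implementation.

```python
import numpy as np

def normalized_wavefield(traces, ref_index=0, eps=1e-12):
    """Frequency-domain normalization that cancels the source spectrum.

    If each recorded trace is u_i(t) = s(t) * g_i(t) (source convolved with
    the earth response), then U_i(w) = S(w) G_i(w), and the dimensionless
    ratio U_i(w) / U_ref(w) = G_i(w) / G_ref(w) no longer depends on the
    source.  `eps` guards against division by (near-)zero spectral values.
    """
    spectra = np.fft.rfft(np.asarray(traces), axis=1)
    ref = spectra[ref_index]
    return spectra / (ref + eps)
```

A quick property check: building two gathers from the same earth responses but two different source wavelets yields numerically identical normalized wavefields, which is exactly what lets the inversion proceed without source information.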
A 2163: Merger events in the hottest Abell galaxy cluster. I. Dynamical analysis from optical data
NASA Astrophysics Data System (ADS)
Maurogordato, S.; Cappi, A.; Ferrari, C.; Benoist, C.; Mars, G.; Soucail, G.; Arnaud, M.; Pratt, G. W.; Bourdin, H.; Sauvageot, J.-L.
2008-04-01
Context: A 2163 is among the richest and most distant Abell clusters, presenting outstanding properties in different wavelength domains. X-ray observations have revealed a distorted gas morphology and strong features have been detected in the temperature map, suggesting that merging processes are important in this cluster. However, the merging scenario is not yet well-defined. Aims: We have undertaken a complementary optical analysis, aiming to understand the dynamics of the system, to constrain the merging scenario and to test its effect on the properties of galaxies. Methods: We present a detailed optical analysis of A 2163 based on new multicolor wide-field imaging and medium-to-high resolution spectroscopy of several hundred galaxies. Results: The projected galaxy density distribution shows strong subclustering with two dominant structures: a main central component (A), and a northern component (B), visible both in optical and in X-ray, with two other substructures detected at high significance in the optical. At magnitudes fainter than R=19, the galaxy distribution shows a clear elongation approximately along the east-west axis extending over 4 h70-1 Mpc, while a nearly perpendicular bridge of galaxies along the north-south axis appears to connect (B) to (A). The (A) component shows a bimodal morphology, and the positions of its two density peaks depend on galaxy luminosity: at magnitudes fainter than R = 19, the axis joining the peaks shows a counterclockwise rotation (from NE/SW to E-W) centered on the position of the X-ray maximum. Our final spectroscopic catalog of 512 objects includes 476 new galaxy redshifts. We have identified 361 galaxies as cluster members; among them, 326 have high precision redshift measurements, which allow us to perform a detailed dynamical analysis of unprecedented accuracy. The cluster mean redshift and velocity dispersion are respectively z= 0.2005 ± 0.0003 and 1434 ± 60 km s-1. We spectroscopically confirm that the northern
The ASTRODEEP Frontier Fields catalogues. I. Multiwavelength photometry of Abell-2744 and MACS-J0416
NASA Astrophysics Data System (ADS)
Merlin, E.; Amorín, R.; Castellano, M.; Fontana, A.; Buitrago, F.; Dunlop, J. S.; Elbaz, D.; Boucaud, A.; Bourne, N.; Boutsia, K.; Brammer, G.; Bruce, V. A.; Capak, P.; Cappelluti, N.; Ciesla, L.; Comastri, A.; Cullen, F.; Derriere, S.; Faber, S. M.; Ferguson, H. C.; Giallongo, E.; Grazian, A.; Lotz, J.; Michałowski, M. J.; Paris, D.; Pentericci, L.; Pilo, S.; Santini, P.; Schreiber, C.; Shu, X.; Wang, T.
2016-05-01
Context. The Frontier Fields survey is a pioneering observational program aimed at collecting photometric data, both from space (Hubble Space Telescope and Spitzer Space Telescope) and from ground-based facilities (VLT Hawk-I), for six deep fields pointing at clusters of galaxies and six nearby deep parallel fields, in a wide range of passbands. The analysis of these data is a natural outcome of the Astrodeep project, an EU collaboration aimed at developing methods and tools for extragalactic photometry and creating valuable public photometric catalogues. Aims: We produce multiwavelength photometric catalogues (from B to 4.5 μm) for the first two of the Frontier Fields, Abell-2744 and MACS-J0416 (plus their parallel fields). Methods: To detect faint sources even in the central regions of the clusters, we develop a robust and repeatable procedure that uses the public codes Galapagos and Galfit to model and remove most of the light contribution from both the brightest cluster members, and the intra-cluster light. We perform the detection on the processed HST H160 image to obtain a pure H-selected sample, which is the primary catalogue that we publish. We also add a sample of sources which are undetected in the H160 image but appear on a stacked infrared image. Photometry on the other HST bands is obtained using SExtractor, again on processed images after the procedure for foreground light removal. Photometry on the Hawk-I and IRAC bands is obtained using our PSF-matching deconfusion code t-phot. A similar procedure, but without the need for the foreground light removal, is adopted for the Parallel fields. Results: The procedure of foreground light subtraction allows for the detection and the photometric measurements of ~2500 sources per field. We deliver and release complete photometric H-detected catalogues, with the addition of the complementary sample of infrared-detected sources. All objects have multiwavelength coverage including B to H HST bands, plus K
NASA Astrophysics Data System (ADS)
Karman, W.; Caputi, K. I.; Grillo, C.; Balestra, I.; Rosati, P.; Vanzella, E.; Coe, D.; Christensen, L.; Koekemoer, A. M.; Krühler, T.; Lombardi, M.; Mercurio, A.; Nonino, M.; van der Wel, A.
2015-02-01
We present the first observations of the Frontier Fields cluster Abell S1063 taken with the newly commissioned Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph. Because of the relatively large field of view (1 arcmin2), MUSE is ideal to simultaneously target multiple galaxies in blank and cluster fields over the full optical spectrum. We analysed the four hours of data obtained in the science verification phase on this cluster and measured redshifts for 53 galaxies. We confirm the redshift of five cluster galaxies, and determine the redshift of 29 other cluster members. Behind the cluster, we find 17 galaxies at higher redshift, including three previously unknown Lyman-α emitters at z> 3, and five multiply-lensed galaxies. We report the detection of a new z = 4.113 multiply lensed galaxy, with images that are consistent with lensing model predictions derived for the Frontier Fields. We detect C iii], C iv, and He ii emission in a multiply lensed galaxy at z = 3.116, suggesting the likely presence of an active galactic nucleus. We also created narrow-band images from the MUSE datacube to automatically search for additional line emitters corresponding to high-redshift candidates, but we could not identify any significant detections other than those found by visual inspection. With the new redshifts, it will become possible to obtain an accurate mass reconstruction in the core of Abell S1063 through refined strong lensing modelling. Overall, our results illustrate the breadth of scientific topics that can be addressed with a single MUSE pointing. We conclude that MUSE is a very efficient instrument to observe galaxy clusters, enabling their mass modelling, and to perform a blind search for high-redshift galaxies.
NASA Astrophysics Data System (ADS)
Cortese, L.; Gavazzi, G.; Iglesias-Paramo, J.; Boselli, A.; Carrasco, L.
2003-04-01
Optical spectroscopy of 93 galaxies, 60 projected in the direction of Abell 1367, 21 onto the Coma cluster and 12 on Virgo, is reported. The targets were selected because they were detected in previous Hα , UV or r' surveys. The present observations bring to 100% the redshift completeness of Hα selected galaxies in the Coma region and to 75% in Abell 1367. All observed galaxies except one show Hα emission and belong to the clusters. This confirms previous determinations of the Hα luminosity function of the two clusters that were based on the assumption that all Hα detected galaxies were cluster members. Using the newly obtained data we re-determine the UV luminosity function of Coma and we compute for the first time the UV luminosity function of A1367. Their faint end slopes remain uncertain (-2.00
Inverse of polynomial matrices in the irreducible form
NASA Technical Reports Server (NTRS)
Chang, Fan R.; Shieh, Leang S.; Mcinnis, Bayliss C.
1987-01-01
An algorithm is developed for finding the inverse of polynomial matrices in the irreducible form. The computational method involves the use of the left (right) matrix division method and the determination of linearly dependent vectors of the remainders. The obtained transfer function matrix has no nontrivial common factor between the elements of the numerator polynomial matrix and the denominator polynomial.
Inverse problems of ultrasound tomography in models with attenuation
NASA Astrophysics Data System (ADS)
Goncharsky, Alexander V.; Romanov, Sergey Y.
2014-04-01
We develop efficient methods for solving inverse problems of ultrasound tomography in models with attenuation. We treat the inverse problem as a coefficient inverse problem for unknown coordinate-dependent functions that characterize both the speed cross section and the coefficients of the wave equation describing attenuation in the diagnosed region. We derive exact formulas for the gradient of the residual functional in models with attenuation, and develop efficient algorithms for minimizing the gradient of the residual by solving the conjugate problem. These algorithms are easy to parallelize when implemented on supercomputers, allowing the computation time to be reduced by a factor of several hundred compared to a PC. The numerical analysis of model problems shows that it is possible to reconstruct not only the speed cross section, but also the properties of the attenuating medium. We investigate the choice of the initial approximation for iterative algorithms used to solve inverse problems. The algorithms considered are primarily meant for the development of ultrasound tomographs for differential diagnosis of breast cancer.
Inverse problems of ultrasound tomography in models with attenuation.
Goncharsky, Alexander V; Romanov, Sergey Y
2014-04-21
We develop efficient methods for solving inverse problems of ultrasound tomography in models with attenuation. We treat the inverse problem as a coefficient inverse problem for unknown coordinate-dependent functions that characterize both the speed cross section and the coefficients of the wave equation describing attenuation in the diagnosed region. We derive exact formulas for the gradient of the residual functional in models with attenuation, and develop efficient algorithms for minimizing the gradient of the residual by solving the conjugate problem. These algorithms are easy to parallelize when implemented on supercomputers, allowing the computation time to be reduced by a factor of several hundred compared to a PC. The numerical analysis of model problems shows that it is possible to reconstruct not only the speed cross section, but also the properties of the attenuating medium. We investigate the choice of the initial approximation for iterative algorithms used to solve inverse problems. The algorithms considered are primarily meant for the development of ultrasound tomographs for differential diagnosis of breast cancer. PMID:24694653
Wave-Based Inversion & Imaging for the Optical Quadrature Microscope
Lehman, S K
2005-10-27
The Center for Subsurface Sensing & Imaging Systems' (CenSSIS) Optical Quadrature Microscope (OQM) is a narrow-band visible-light microscope capable of measuring both the amplitude and the phase of a scattered field. We develop a diffraction-tomography (that is, wave-based) scattered-field inversion and imaging algorithm for reconstructing the refractive index of the scattering object.
Inverse lithography using sparse mask representations
NASA Astrophysics Data System (ADS)
Ionescu, Radu C.; Hurley, Paul; Apostol, Stefan
2015-03-01
We present a novel optimisation algorithm for inverse lithography, based on optimizing the mask derivative, a domain that is inherently sparse and, for rectilinear polygons, invertible. The method is first developed assuming a point light source, and then extended to general incoherent sources. What results is a fast algorithm that produces manufacturable masks (the search space is constrained to rectilinear polygons) and is flexible (specific constraints such as minimal line widths can be imposed). One inherent trick is to treat polygons as continuous entities, making aerial image calculation extremely fast and accurate. Requirements for mask manufacturability can be integrated into the optimization without much added complexity. We also explain how to extend the scheme to phase-changing mask optimization.
Inversion of Seabed Parameters in the Stockholm Archipelago
NASA Astrophysics Data System (ADS)
Abrahamsson, L.; Andersson, B. L.
2001-12-01
The purpose of this work was to apply acoustic inversion to a bay in the Stockholm archipelago with strong variations of the bottom both vertically and horizontally. The inversions were based on measurements undertaken in May 2001 of transmission loss over a 2.5 km long track. The bottom parameters were estimated by minimizing the difference between simulated and measured data. The parabolic wave equation was used as a wave propagation model and the inversions were carried out by a genetic algorithm. They resulted in a relatively good fit. The inverted bottom parameters were also evaluated by model predictions against a control data set of other frequencies than those of the inversion. The agreement between the estimated and measured parameters was good.
Joint three-dimensional inversion of magnetotelluric and magnetovariational data
NASA Astrophysics Data System (ADS)
Zhdanov, M. S.; Dmitriev, V. I.; Gribenko, A. V.
2010-08-01
The problem of quantitative three-dimensional interpretation of the magnetotelluric (MT) data ranks among the most difficult problems in electromagnetic (EM) geophysics. Our paper presents a new rigorous numerical method for MT inversion, based on the integral equations technique. An important feature of the proposed method is the calculation of the Frechet derivative with the aid of a quasi-analytical approximation with an inhomogeneous background. This approach simplifies the algorithm of inversion and requires only a single forward modeling on each iteration. We have also developed a method for a joint inversion of MT and magnetovariational (MV) data. We show in the present paper that the joint inversion of MT impedances and the Wiese-Parkinson vectors can automatically allow for the static shift in the observed data, which is caused by the geoelectric inhomogeneities contained in the near-surface layer.
NASA Technical Reports Server (NTRS)
Hsia, T. C.; Lu, G. Z.; Han, W. H.
1987-01-01
In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. Such an architecture is developed here for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
Inversion for sediment geoacoustic properties at the New England Bight
NASA Astrophysics Data System (ADS)
Potty, Gopu R.; Miller, James H.; Lynch, James F.
2003-10-01
This article discusses inversions for bottom geoacoustic properties using broadband acoustic signals obtained from explosive sources. Two different inversion schemes for estimating the compressional wave speeds and attenuation are presented in this paper. In addition to these sediment parameters, source-receiver range is also estimated using the arrival time data. The experimental data used for the inversions are SUS charge explosions acquired on a vertical hydrophone array during the Shelf Break Primer Experiment conducted south of New England in the Middle Atlantic Bight in August 1996. The modal arrival times are extracted using a wavelet analysis. In the first inversion scheme, arrival times corresponding to various modes and frequencies from 10 to 200 Hz are used for the inversion of compressional wave speeds. A hybrid inversion scheme based on a genetic algorithm (GA) is used for the inversion. In an earlier study, Potty et al. [J. Acoust. Soc. Am. 108(3), 973-986 (2000)] have used this hybrid scheme in a range-independent environment. In the present study results of range-dependent inversions are presented. The sound speeds in the water column and bathymetry are assumed range dependent, whereas the sediment compressional wave speeds are assumed range independent. The variations in the sound speeds in the water column are represented using empirical orthogonal functions (EOFs). The replica fields corresponding to the unknown parameters were constructed using adiabatic theory. In the second inversion scheme, modal attenuation coefficients are calculated using modal amplitude ratios. The ratios of the modal amplitudes are also calculated using time-frequency diagrams. A GA-based inversion scheme is used for this search. Finally, as a cross check, the computed compressional wave speeds along with the modal arrival times were used to estimate the source-receiver range. The inverted sediment properties and ranges are seen to compare well with in situ measurements
NASA Astrophysics Data System (ADS)
Grigorov, Igor V.
2009-12-01
This article considers an algorithm for the numerical modelling of the Korteweg-de Vries equation, which generates a nonlinear algorithm for digital signal processing. To implement this algorithm, the use of the inverse scattering method (ISM) is proposed. Algorithms for the direct and inverse spectral problems, as well as for the evolution of the spectral data, are considered in detail. Modelling results are presented.
ERIC Educational Resources Information Center
Losada, David E.; Barreiro, Alvaro
2003-01-01
Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…
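Inverse document frequency weighting of the kind incorporated above can be illustrated compactly. The toy corpus and the cosine-similarity matcher below are hypothetical; only the tf-idf weighting itself follows the standard definition:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build tf-idf vectors: weight(t, d) = tf(t, d) * log(N / df(t))."""
    N = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency per term
    idf = {t: math.log(N / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()}
            for doc in docs], idf

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["plasma", "density", "profile"],
        ["retrieval", "logic", "model"],
        ["retrieval", "term", "similarity", "model"]]
vecs, idf = tfidf_vectors(docs)
query = {t: idf.get(t, 0.0) for t in ["retrieval", "similarity"]}
ranked = sorted(range(len(docs)),
                key=lambda i: cosine(query, vecs[i]), reverse=True)
```

Rare terms ("similarity") receive a higher idf than common ones ("retrieval"), so documents matching them rank first.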
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
Agaltsov, A. D.; Novikov, R. G.
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
New 3D parallel SGILD modeling and inversion
Xie, G.; Li, J.; Majer, E.
1998-09-01
In this paper, a new parallel modeling and inversion algorithm using a Stochastic Global Integral and Local Differential equation (SGILD) approach is presented. The authors derived new acoustic integral equations and differential equations for the statistical moments of the parameters and the field. The new statistical-moment integral equation on the boundary and the local differential equations in the domain will be used together to obtain the mean wave field and its moments in the modeling. The new global Jacobian moment volume integral equation and the local Jacobian differential equations in the domain will be used together to update the mean parameters and their moments in the inversion. A new parallel multiple-hierarchy substructure direct algorithm, or a direct-iteration hybrid algorithm, will be used to solve the sparse matrices and one smaller full matrix from the domain to the boundary, in parallel. The SGILD modeling and imaging algorithm has many advantages over conventional imaging approaches. The SGILD algorithm can be used for stochastic acoustic, electromagnetic, and flow modeling and inversion, and is important for the prediction of oil, gas, coal, and geothermal energy reservoirs in geophysical exploration.
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
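The surrogate-accelerated Bayesian workflow can be caricatured in one dimension: fit a cheap polynomial surrogate to the expensive forward model offline, then run Metropolis sampling against the surrogate posterior. The scalar forward model, noise level, and prior below are illustrative assumptions, not the paper's stochastic spectral formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):
    """Stand-in for an expensive forward model."""
    return m + 0.1 * m**3

# Offline stage: fit a cheap polynomial surrogate on a coarse grid.
grid = np.linspace(-3.0, 3.0, 25)
surrogate = np.polynomial.Polynomial.fit(grid, forward(grid), deg=3)

# Online stage: Metropolis sampling of the posterior using only the surrogate.
data, noise, prior_sd = forward(1.0), 0.05, 1.0
def log_post(m):
    return (-0.5 * ((data - surrogate(m)) / noise) ** 2
            - 0.5 * (m / prior_sd) ** 2)

samples, m = [], 0.0
lp = log_post(m)
for _ in range(20000):
    prop = m + 0.2 * rng.standard_normal()       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        m, lp = prop, lp_prop
    samples.append(m)
posterior_mean = np.mean(samples[2000:])         # discard burn-in
```

Every MCMC step evaluates only the surrogate, which is the source of the speedup when the true forward model is costly.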
Inverse problems in biomechanical imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Oberai, Assad A.
2016-03-01
It is now well recognized that a host of imaging modalities (a list that includes ultrasound, MRI, optical coherence tomography, and optical microscopy) can be used to "watch" tissue as it deforms in response to an internal or external excitation. The result is a detailed map of the deformation field in the interior of the tissue. This deformation field can be used in conjunction with a model of the material's mechanical response to determine the spatial distribution of the material properties of the tissue by solving an inverse problem. Images of material properties thus obtained can be used to quantify the health of the tissue. Recently, they have been used to detect, diagnose and monitor cancerous lesions, detect vulnerable plaque in arteries, diagnose liver cirrhosis, and possibly detect the onset of Alzheimer's disease. In this talk I will describe the mathematical and computational aspects of solving this class of inverse problems, and their applications in biology and medicine. In particular, I will discuss the well-posedness of these problems and quantify the amount of displacement data necessary to obtain a unique property distribution. I will describe an efficient algorithm for solving the resulting inverse problem. I will also describe some recent developments based on Bayesian inference for estimating the variance in the estimates of material properties. I will conclude with the applications of these techniques in diagnosing breast cancer and in characterizing the mechanical properties of cells with sub-cellular resolution.
Modular Inverse Reinforcement Learning for Visuomotor Behavior
Rothkopf, Constantin A.; Ballard, Dana H.
2013-01-01
In a large variety of situations one would like to have an expressive and accurate model of observed animal or human behavior. While general purpose mathematical models may successfully capture properties of observed behavior, it is desirable to root models in biological facts. Because of ample empirical evidence for reward-based learning in visuomotor tasks, we use a computational model based on the assumption that the observed agent is balancing the costs and benefits of its behavior to meet its goals. This leads to using the framework of Reinforcement Learning, which additionally provides well-established algorithms for learning visuomotor task solutions. We propose to use inverse reinforcement learning, which quantifies the agent's goals as rewards implicit in the observed behavior. Based on the assumption of a modular cognitive architecture, we introduce a modular inverse reinforcement learning algorithm that estimates the relative reward contributions of the component tasks in navigation, consisting of following a path while avoiding obstacles and approaching targets. It is shown how to recover the component reward weights for individual tasks and that variability in observed trajectories can be explained succinctly through behavioral goals. It is demonstrated through simulations that good estimates can be obtained already with modest amounts of observation data, which in turn allows the prediction of behavior in novel configurations. PMID:23832417
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
NASA Astrophysics Data System (ADS)
Tao, Yi; Sen, Mrinal K.; Zhang, Rui; Spikes, Kyle T.
2013-06-01
Non-uniqueness presents challenges to seismic inverse problems, especially for time-lapse inversion, where multiple inversions are needed for different vintages of seismic data. For time-lapse applications, the focus typically is to detect relatively small changes in seismic attributes at limited locations and to relate these differences to changes in the underlying physical properties. We propose a robust inversion workflow in which the baseline inversion uses a starting model that combines a high-frequency fractal component and a low-frequency component from well log data. This starting model provides an estimate of the null space based on fractal statistics of well data. To further focus on the localized changes, the inverted elastic parameters from the baseline model and the difference between the two time-lapse data sets are summed together to produce virtual time-lapse seismic data. This is known as double-difference inversion, which focuses primarily on the areas where time-lapse changes occur. The misfit function uses both data and model norms so that the ill-posedness of the inverse problem can be regularized. We pre-process the seismic data using a local correlation-based warping algorithm to register the time-lapse datasets. Finally, very fast simulated annealing, a nonlinear global search method, is used to minimize the misfit function. We demonstrate the effectiveness of our method with synthetic data and field data from the Cranfield site used for CO2 sequestration studies.
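Very fast simulated annealing, the global search used in this workflow, can be sketched as follows. The quasi-Cauchy move generator and temperature schedule follow Ingber's general recipe, but the toy regularized misfit and all tuning constants are illustrative assumptions, not the authors' implementation:

```python
import math, random

def vfsa(objective, bounds, n_iter=4000, t0=1.0, c=0.5, seed=7):
    """Very fast simulated annealing: heavy-tailed (quasi-Cauchy) moves with
    temperature schedule T_k = T0 * exp(-c * k**(1/D)), D = #parameters."""
    rng = random.Random(seed)
    D = len(bounds)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = objective(x)
    best, fbest = list(x), fx
    for k in range(1, n_iter + 1):
        T = t0 * math.exp(-c * k ** (1.0 / D))
        cand = []
        for (lo, hi), xi in zip(bounds, x):
            u = rng.random()
            step = math.copysign(T * ((1 + 1 / T) ** abs(2 * u - 1) - 1), u - 0.5)
            cand.append(min(hi, max(lo, xi + step * (hi - lo))))
        fc = objective(cand)
        # Metropolis acceptance at the current temperature
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
    return best, fbest

# Toy misfit with both a data norm and a damping model norm
obs = [3.0, 5.0]
def misfit(m):
    pred = [m[0] + m[1], 2 * m[0] + m[1]]
    data = sum((p - o) ** 2 for p, o in zip(pred, obs))
    return data + 1e-4 * sum(mi ** 2 for mi in m)

best, fbest = vfsa(misfit, [(-10.0, 10.0), (-10.0, 10.0)])
```

The heavy-tailed move distribution keeps occasional large jumps alive even at low temperature, which is what lets VFSA escape local minima that trap gradient methods.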
Stress inversion assumptions review
NASA Astrophysics Data System (ADS)
Lejri, Mostfa; Maerten, Frantz; Maerten, Laurent; Joonnenkindt, Jean Pierre; Soliva, Roger
2014-05-01
Wallace (1951) and Bott (1959) were the first to introduce the idea that the slip on each fault surface has the same direction and sense as the maximum shear stress resolved on that surface. This hypothesis is based on the assumptions that (i) faults are planar, (ii) blocks are rigid, (iii) neither stress perturbations nor block rotations along fault surfaces occur and (iv) the applied stress state is uniform. However, this simplified hypothesis is questionable, since complex fault geometries, heterogeneous fault slip directions, evidence of stress perturbations in microstructures and block rotations along fault surfaces have been reported in the literature. Earlier numerical geomechanical models confirmed that the striation lines (slip vectors) are not necessarily parallel to the maximum shear stress vector but are consistent with local stress perturbations. This leads us to ask to what extent the Wallace and Bott simplifications are reliable as a basis for stress inversion. In this presentation, a geomechanical multi-parametric study using the 3D boundary element method (BEM), covering (i) fault geometries such as intersecting faults or corrugated fault surfaces, (ii) the full range of Andersonian states of stress, (iii) fault friction, (iv) the half-space effect and (v) rock properties, is performed in order to understand the effect of each parameter on the angular misfit between geomechanical slip vectors and the resolved shear stresses. It is shown that significant angular misfits can be found under specific configurations, and therefore we conclude that stress inversions based on the Wallace-Bott hypothesis might sometimes give results that should be interpreted with care. Major observations are that (i) applying optimum tectonic stress conditions on complex fault geometries can increase the angular misfit, (ii) elastic material properties, combined with the half-space effect, can enhance this effect, and (iii) an increase of the sliding friction leads to a
Inverse magnetorheological fluids.
Rodríguez-Arco, L; López-López, M T; Zubarev, A Y; Gdula, K; Durán, J D G
2014-09-01
We report a new kind of field-responsive fluid consisting of suspensions of diamagnetic (DM) and ferromagnetic (FM) microparticles in ferrofluids. We designate them as inverse magnetorheological (IMR) fluids for analogy with inverse ferrofluids (IFFs). Observations on the particle self-assembly in IMR fluids upon magnetic field application showed that DM and FM microparticles were assembled into alternating chains oriented along the field direction. We explain such assembly on the basis of the dipolar interaction energy between particles. We also present results on the rheological properties of IMR fluids and, for comparison, those of IFFs and bidispersed magnetorheological (MR) fluids. Interestingly, we found that upon magnetic field application, the rheological properties of IMR fluids were enhanced with respect to bidispersed MR fluids with the same FM particle concentration, by an amount greater than the sum of the isolated contribution of DM particles. Furthermore, the field-induced yield stress was moderately increased when up to 30% of the total FM particle content was replaced with DM particles. Beyond this point, the dependence of the yield stress on the DM content was non-monotonic, as expected for FM concentrations decreasing to zero. We explain these synergistic results by two separate phenomena: the formation of exclusion areas for FM particles due to the perturbation of the magnetic field by DM particles and the dipole-dipole interaction between DM and FM particles, which enhances the field-induced structures. Based on the second phenomenon, we present a theoretical model for the yield stress that semi-quantitatively predicts the experimental results. PMID:25022363
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing the computation burden in 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
NASA Technical Reports Server (NTRS)
Snow, W. L.
1972-01-01
Temperature profiles were measured for argon at atmospheric pressure by using absolute line and continuum intensities and were compared with Stark width and shift measurements. A detailed analysis of the engineering aspects of setting up for Abel inverting data photographically is presented. The merits of using photographic detection and of using continuum radiation for temperature profile analysis are discussed. The importance of empirically determining the optical depth is emphasized by discrepancies between measured (two-path) and calculated estimates.
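The Abel inversion step underlying such side-on profile measurements can be illustrated with the simple onion-peeling discretization, in which the source is modeled as concentric uniform rings and the resulting triangular system is solved directly. This is a generic numerical sketch, not the photographic procedure of the report:

```python
import numpy as np

def onion_peel_abel(F, R):
    """Invert chord-integrated projections F(y_i) to a radial emissivity f(r_j),
    assuming piecewise-constant emission in concentric rings (onion peeling)."""
    n = len(F)
    h = R / n
    r = np.arange(n + 1) * h          # ring edges r_0 .. r_n = R
    y = np.arange(n) * h              # chord heights, y_i = r_i
    A = np.zeros((n, n))              # A[i, j] = chord length of ring j at y_i
    for i in range(n):
        for j in range(i, n):
            inner = max(r[j], y[i])
            A[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - y[i] ** 2)
                             - np.sqrt(inner ** 2 - y[i] ** 2))
    return np.linalg.solve(A, F)      # upper-triangular system

# Synthetic check: a uniform disc f(r) = 1 projects to F(y) = 2*sqrt(R^2 - y^2)
R, n = 1.0, 50
y = np.arange(n) * (R / n)
F = 2.0 * np.sqrt(R ** 2 - y ** 2)
f = onion_peel_abel(F, R)
```

Because the chord-length matrix is exact for ring-wise constant emission, the uniform disc is recovered essentially exactly; real photographic data would first need smoothing, as noise is amplified by the inversion.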
Inverse problem for Bremsstrahlung radiation
Voss, K.E.; Fisch, N.J.
1991-10-01
For certain predominantly one-dimensional distribution functions, an analytic inversion has been found which yields the velocity distribution of superthermal electrons given their Bremsstrahlung radiation. 5 refs.
NASA Astrophysics Data System (ADS)
Sarazin, C.; Hogge, T.; Chatzikos, M.; Wik, D.; Giacintucci, S.; Clarke, T.; Wong, K.; Gitti, M.; Finoguenov, A.
2014-07-01
XMM-Newton and Chandra observations of remarkable dynamic structures in the X-ray gas and connected radio sources in three clusters are presented. Abell 2061 is a highly irregular, merging cluster in the Corona Borealis supercluster. X-ray observations show that there is a plume of very cool gas (˜1 keV) to the NE of the cluster, and a hot (7.6 keV) shock region just NE of the center. There is a very bright radio relic to the far SW of the cluster, and a central radio halo/relic with an extension to the NE. Comparison to SLAM simulations shows that this is an offset merger of a ˜5 × 10^{13} M⊙ subcluster with a ˜2.5 × 10^{14} M⊙ cluster seen after first core passage. The plume is the cool-core gas from the subcluster, which has been ``slingshot'' to the NE of the cluster. The plume gas is now falling back into the cluster center, and shocks when it hits the central gas. The model predicts a strong shock to the SW at the location of the bright radio relic, and another shock at the NE radio extension. Time permitting, the observations of Abell 2626 and Abell 3667 will also be presented.
Development of HF radar inversion algorithm for spectrum estimation (HIAS)
NASA Astrophysics Data System (ADS)
Hisaki, Yukiharu
2015-03-01
A method for estimating ocean wave directional spectra using an HF (high-frequency) ocean radar was developed. This method represents a development of work conducted in previous studies. In the present method, ocean wave directional spectra are estimated on polar coordinates whose center is the radar position, whereas in the previous method spectra were estimated on regular grids. The method can be applied to both single- and multiple-radar cases. The area for wave estimation is more flexible than that of the previous method. As the signal-to-noise (SN) ratios of Doppler spectra are critical for wave estimation, we develop a method to exclude Doppler spectra with low SN ratios. The validity of the method is demonstrated by comparing results with in situ wave observations that would be impossible to estimate by the methods of other groups.
The Estimation and Inversion of Magnetotelluric Data with Static Shift
NASA Astrophysics Data System (ADS)
Wang, X.; Zhou, J.; Zhang, J.; Min, G.; Xia, S.
2015-12-01
Introduction: In magnetotelluric sounding data processing, static shift correction is one of the most important steps. Due to the complexity of the distribution of near-surface inhomogeneous bodies, it is difficult to estimate the static shift of measured data. For this problem, we put forward a static shift estimation method based on the inversion model, and reconstruct the initial model from the original data for 2D or 3D inversion. Estimation and inversion method: The magnetotelluric impedance phase is not influenced by static shift in two-dimensional electrical structures. The objective function for static shift estimation can therefore be constructed from impedance phase data. On the basis of a normal inversion, utilizing a one-dimensional linear search algorithm combined with forward modeling, the MT static shift can be estimated. The estimated static shifts are used to translate the anomalous measured curves. From the 1-D inversion of these translated curves, the initial model for two-dimensional or three-dimensional inversion can be reconstructed. On this basis, we invert the original data, which not only effectively eliminates the influence of static shift on the deep structure of the inversion model, but also recovers the correct shallow electrical structure. Conclusion: The static shift estimate based on the impedance phase can be close to the true value. This estimate can be used to modify the initial model, which makes the deep electrical structure of the model more reasonable. On this basis, inversion of the original data ensures the correctness of the final inversion results (both shallow and deep). Acknowledgement: This paper is supported by National Natural Science Foundation (41274078) and National 863 High Technology Research and Development Program (2014AA06A612). Reference: [1] deGroot-Hedlin C. Removal of static shift in two dimensions by regularized inversion[J]. Geophysics, 1991, 56
X-ray cavities and temperature jumps in the environment of the strong cool core cluster Abell 2390
NASA Astrophysics Data System (ADS)
Sonkamble, S. S.; Vagshette, N. D.; Pawar, P. K.; Patil, M. K.
2015-10-01
We present results based on a systematic analysis of high-resolution 95 ks Chandra observations of the strong cool core cluster Abell 2390, at redshift z = 0.228, which hosts an energetic radio AGN. This analysis has enabled us to investigate five X-ray-deficient cavities in the atmosphere of Abell 2390 within the central 30''. The presence of these cavities has been confirmed through a variety of image processing techniques, such as surface brightness profiles, an unsharp-masked image, as well as a 2D elliptical model subtracted residual map. The temperature profile as well as the 2D temperature map revealed structures in the distribution of the ICM, in the sense that the ICM in the NW direction is cooler than that in the SE direction. A temperature jump in all directions is evident near 25'' (90.5 kpc), corresponding to an average Mach number of 1.44± 0.05, while another jump from 7.47 keV to 9.10 keV at 68'' (246 kpc) in the north-west direction corresponds to a Mach number of 1.22± 0.06; these jumps are associated with cold fronts. A tricolour map as well as a hardness ratio map detect cool gas clumps of temperature 4.45_{-0.10}^{+0.16} keV in the central 30 kpc region. The entropy profile derived from the X-ray analysis is found to fall systematically inward in a power-law fashion and exhibits a floor near 12.20± 2.54 keV cm2 in the central region. This flattening of the entropy profile in the core region confirms intermittent heating at the centre by the AGN. The diffuse radio emission map at 1.4 GHz using VLA L-band data exhibits a highly asymmetric morphology with an edge in the north-west direction coinciding with the X-ray edge seen in the unsharp-masked image. The mechanical power injected by the AGN in the form of X-ray cavities is found to be 5.94× 10^{45} erg s^{-1}, roughly an order of magnitude higher than the energy lost by the ICM in the form of X-ray emission, confirming that the AGN feedback is capable enough to quench the cooling flow in this cluster.
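Mach numbers like those quoted above are commonly obtained by inverting the Rankine-Hugoniot temperature jump for a monatomic gas. A minimal sketch, assuming this standard shock relation is what underlies the quoted values:

```python
import math

def mach_from_temperature_jump(t_ratio, gamma=5.0 / 3.0):
    """Solve the Rankine-Hugoniot temperature jump
    T2/T1 = (2*g*M^2 - (g-1)) * ((g-1)*M^2 + 2) / ((g+1)^2 * M^2)
    for the Mach number M by bisection (the ratio is monotone for M > 1)."""
    def ratio(M):
        g = gamma
        return (((2 * g * M**2 - (g - 1)) * ((g - 1) * M**2 + 2))
                / ((g + 1) ** 2 * M**2))
    lo, hi = 1.0, 20.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if ratio(mid) < t_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The reported 7.47 keV -> 9.10 keV jump in the NW direction:
M = mach_from_temperature_jump(9.10 / 7.47)
```

The recovered value is close to the paper's quoted Mach number of 1.22± 0.06 for that jump.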
NASA Astrophysics Data System (ADS)
van Weeren, R. J.; Röttgering, H. J. A.; Rafferty, D. A.; Pizzo, R.; Bonafede, A.; Brüggen, M.; Brunetti, G.; Ferrari, C.; Orrù, E.; Heald, G.; McKean, J. P.; Tasse, C.; de Gasperin, F.; Bîrzan, L.; van Zwieten, J. E.; van der Tol, S.; Shulevski, A.; Jackson, N.; Offringa, A. R.; Conway, J.; Intema, H. T.; Clarke, T. E.; van Bemmel, I.; Miley, G. K.; White, G. J.; Hoeft, M.; Cassano, R.; Macario, G.; Morganti, R.; Wise, M. W.; Horellou, C.; Valentijn, E. A.; Wucknitz, O.; Kuijken, K.; Enßlin, T. A.; Anderson, J.; Asgekar, A.; Avruch, I. M.; Beck, R.; Bell, M. E.; Bell, M. R.; Bentum, M. J.; Bernardi, G.; Best, P.; Boonstra, A.-J.; Brentjens, M.; van de Brink, R. H.; Broderick, J.; Brouw, W. N.; Butcher, H. R.; van Cappellen, W.; Ciardi, B.; Eislöffel, J.; Falcke, H.; Fender, R.; Garrett, M. A.; Gerbers, M.; Gunst, A.; van Haarlem, M. P.; Hamaker, J. P.; Hassall, T.; Hessels, J. W. T.; Koopmans, L. V. E.; Kuper, G.; van Leeuwen, J.; Maat, P.; Millenaar, R.; Munk, H.; Nijboer, R.; Noordam, J. E.; Pandey, V. N.; Pandey-Pommier, M.; Polatidis, A.; Reich, W.; Scaife, A. M. M.; Schoenmakers, A.; Sluman, J.; Stappers, B. W.; Steinmetz, M.; Swinbank, J.; Tagger, M.; Tang, Y.; Vermeulen, R.; de Vos, M.; van Haarlem, M. P.
2012-07-01
Abell 2256 is one of the best known examples of a galaxy cluster hosting large-scale diffuse radio emission that is unrelated to individual galaxies. It contains both a giant radio halo and a relic, as well as a number of head-tail sources and smaller diffuse steep-spectrum radio sources. The origin of radio halos and relics is still being debated, but over the last years it has become clear that the presence of these radio sources is closely related to galaxy cluster merger events. Here we present the results from the first LOFAR low band antenna (LBA) observations of Abell 2256 between 18 and 67 MHz. To our knowledge, the image presented in this paper at 63 MHz is the deepest ever obtained at frequencies below 100 MHz in general. Both the radio halo and the giant relic are detected in the image at 63 MHz, and the diffuse radio emission remains visible at frequencies as low as 20 MHz. The observations confirm the presence of a previously claimed ultra-steep spectrum source to the west of the cluster center with a spectral index of -2.3 ± 0.4 between 63 and 153 MHz. The steep spectrum suggests that this source is an old part of a head-tail radio source in the cluster. For the radio relic we find an integrated spectral index of -0.81 ± 0.03, after removing the flux contribution from the other sources. This is relatively flat which could indicate that the efficiency of particle acceleration at the shock substantially changed in the last ~0.1 Gyr due to an increase of the shock Mach number. In an alternative scenario, particles are re-accelerated by some mechanism in the downstream region of the shock, resulting in the relatively flat integrated radio spectrum. In the radio halo region we find indications of low-frequency spectral steepening which may suggest that relativistic particles are accelerated in a rather inhomogeneous turbulent region.
Robust inverse kinematics using damped least squares with dynamic weighting
NASA Technical Reports Server (NTRS)
Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.
1994-01-01
This paper presents a general method for calculating the inverse kinematics, with singularity and joint limit robustness, for both redundant and non-redundant serial-link manipulators. A damped least squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution, which selectively reduces specific joint differential vectors. The algorithm gives an exact solution away from the singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure was implemented for a six-d.o.f. teleoperator, and a well-behaved slave manipulator resulted under teleoperational control.
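The damped least squares update can be sketched for a planar two-link arm. The damping factor, the diagonal weighting matrix, and the arm geometry below are illustrative choices, not those of the paper:

```python
import numpy as np

def dls_step(jacobian, error, lam=0.05, weights=None):
    """One damped-least-squares update: dq = J^T (J J^T + lam^2 W)^{-1} e.
    `weights` stands in for the dynamic weighting matrix (diagonal here);
    the damping lam keeps the step bounded near singularities."""
    J = np.asarray(jacobian, dtype=float)
    W = np.eye(J.shape[0]) if weights is None else np.diag(weights)
    return J.T @ np.linalg.solve(J @ J.T + lam**2 * W, error)

# Planar 2-link arm: forward kinematics and Jacobian
def fk(q, l1=1.0, l2=1.0):
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Iterate toward a reachable target
target = np.array([1.2, 0.8])
q = np.array([0.3, 0.5])
for _ in range(200):
    q = q + dls_step(jac(q), target - fk(q))
```

Away from singularities the damping bias is negligible and the iteration converges to the exact solution; at a singularity the same update simply yields a small, well-conditioned step instead of blowing up.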
A Recursive Born Approach to Nonlinear Inverse Scattering
NASA Astrophysics Data System (ADS)
Kamilov, Ulugbek S.; Liu, Dehong; Mansour, Hassan; Boufounos, Petros T.
2016-08-01
The Iterative Born Approximation (IBA) is a well-known method for describing waves scattered by semi-transparent objects. In this paper, we present a novel nonlinear inverse scattering method that combines IBA with an edge-preserving total variation (TV) regularizer. The proposed method is obtained by relating iterations of IBA to layers of a feedforward neural network and developing a corresponding error backpropagation algorithm for efficiently estimating the permittivity of the object. Simulations illustrate that, by accounting for multiple scattering, the method successfully recovers the permittivity distribution where the traditional linear inverse scattering fails.
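The iterative Born series at the heart of the method can be sketched in discrete form, where the incident field is repeatedly rescattered by the potential: u_{k+1} = u_in + G diag(v) u_k. The random matrix standing in for the discretized Green's operator and the weak contrast are assumptions chosen so the series converges; the paper's TV regularization and error backpropagation are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
G = 0.1 * rng.standard_normal((n, n))   # stand-in discretized Green's operator
v = 0.5 * rng.random(n)                 # scattering potential (weak contrast)
u_in = np.ones(n)                       # incident field

# Iterative Born series: each pass adds one more order of multiple scattering
u = u_in.copy()
for _ in range(100):
    u = u_in + G @ (v * u)

# Reference: direct solve of the full scattering equation (I - G diag(v)) u = u_in
u_exact = np.linalg.solve(np.eye(n) - G @ np.diag(v), u_in)
```

When the spectral radius of G diag(v) is below one, the iteration converges to the exact multiply-scattered field, which is precisely the regime in which IBA improves on single-scattering (first Born) linearization.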
Spatial operator approach to flexible manipulator inverse and forward dynamics
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1990-01-01
This study extends to flexible multibody manipulators the recent results of the author on the use of spatially recursive filtering and smoothing techniques for robot arm dynamics. The configuration analyzed is that of a mechanical system of flexible bodies joined together by articulated joints. The inverse and forward dynamics problems are solved using the techniques of spatially recursive Kalman filtering and smoothing. The algorithms are easily developed using a set of identities associated with mass matrix factorization and inversion. The identities are easily derived using a spatial operator algebra developed by the author.
Modular theory of inverse systems
NASA Technical Reports Server (NTRS)
1979-01-01
The relationship between multivariable zeros and inverse systems is explored. A definition of the zero module is given in such a way that it is basis independent. The existence of essential right and left inverses is established. The way in which the abstract zero module captures previous definitions of multivariable zeros is explained, and examples are presented.
Inversion exercises inspired by mechanics
NASA Astrophysics Data System (ADS)
Groetsch, C. W.
2016-02-01
An elementary calculus transform, inspired by the centroid and gyration radius, is introduced as a prelude to the study of more advanced transforms. Analysis of the transform, including its inversion, makes use of several key concepts from basic calculus and exercises in the application and inversion of the transform provide practice in the use of technology in calculus.
Inverse Problems of Thermoelectricity
NASA Astrophysics Data System (ADS)
Anatychuk, L. I.; Luste, O. J.; Kuz, R. V.; Strutinsky, M. N.
2011-05-01
Classical thermoelectricity is based on the use of the Seebeck and Thomson effects that occur in the near-contact areas between n- and p-type materials. A conceptually different approach to thermoelectric power converter design that is based on the law of thermoelectric induction of currents is also known. The efficiency of this approach has already been demonstrated by its first applications. More than 10 basically new types of thermoelements were discovered with properties that cannot be achieved by thermocouple power converters. Therefore, further development of this concept is of practical interest. This paper provides a classification and theory for solving the inverse problems of thermoelectricity that form the basis for devising new thermoelement types. Computer methods for their solution for anisotropic and inhomogeneous media are elaborated. Regularities related to thermoelectric current excitation in anisotropic and inhomogeneous media are established. The possibility of obtaining eddy currents of a particular configuration through control of the temperature field and material parameters for the creation of new thermoelement types is demonstrated for three-dimensional (3D) models of anisotropic and inhomogeneous media.
Inverse problem in hydrogeology
NASA Astrophysics Data System (ADS)
Carrera, Jesús; Alcolea, Andrés; Medina, Agustín; Hidalgo, Juan; Slooten, Luit J.
2005-03-01
The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to deal with conceptual model uncertainty (notably spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as is sometimes done in other fields. Instead, it should be viewed as one step in the process of understanding aquifer behavior. In fact, it is shown that current parameter estimation methods do not differ from each other in essence, though they may differ in the computational details. It is argued that there is ample room for improvement in groundwater inversion: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and different types of data (temperature, occurrence and concentration of isotopes, age, etc.), proper accounting of uncertainty, etc. Despite this, even with existing codes, automatic calibration enormously facilitates the task of modeling. Therefore, it is contended that its use should become standard practice.
3D magnetic inversion by planting anomalous densities
NASA Astrophysics Data System (ADS)
Uieda, L.; Barbosa, V. C.
2013-05-01
We present a new 3D magnetic inversion algorithm based on the computationally efficient method of planting anomalous densities. The algorithm consists of an iterative growth of the anomalous bodies around prismatic elements called "seeds". These seeds are user-specified and have known magnetizations; thus, the seeds provide a way for the interpreter to specify the desired skeleton of the anomalous bodies. The inversion algorithm is computationally efficient due to various optimizations made possible by the iterative nature of the growth process. The control provided by the use of seeds allows one to test different hypotheses about the geometry and magnetization of targeted anomalous bodies. To demonstrate this capability, we applied our inversion method to the Morro do Engenho (ME) and A2 magnetic anomalies, central Brazil (Figure 1a). ME is an outcropping alkaline intrusion formed by dunites, peridotites and pyroxenites with known magnetization. A2 is a magnetic anomaly to the northeast of ME and is thought to be a similar intrusion that is not outcropping. Therefore, a plausible hypothesis is that A2 has the same magnetization as ME. We tested this hypothesis by performing an inversion using a single seed for each body, both seeds having the same magnetization. Figure 1b shows that the inversion produced residuals up to 2000 nT over A2 (i.e., a poor fit) and less than 400 nT over ME (i.e., an acceptable fit). Figure 1c shows that ME is a compact outcropping body with its bottom at approximately 5 km, which is in agreement with previous interpretations. However, the estimate produced by the inversion for A2 is outcropping and is not compact. In summary, the estimate for A2 provides a poor fit to the observations and is not in accordance with the geologic information. This leads to the conclusion that A2 does not have the same magnetization as ME. These results indicate the usefulness and capabilities of the proposed inversion method.
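The iterative growth scheme described in the abstract can be sketched in miniature: cells acquire a fixed, known magnetization the moment they are planted, growth starts from a user-chosen seed, and at each step the un-planted neighbour that most reduces the data misfit is added. This is an illustrative toy on a 1-D mesh with an invented kernel, not the authors' code:

```python
import numpy as np

# Toy "planting" growth: a compact body is grown outward from a seed cell,
# planting at each step the neighbouring cell that most reduces the misfit.
# The 1-D mesh, smooth kernel, and all values below are invented.
n_cells, n_data = 30, 40
x = np.linspace(0.0, 1.0, n_cells)            # cell centres
xs = np.linspace(0.0, 1.0, n_data)            # observation points
# smooth decaying kernel standing in for the magnetic sensitivity matrix
A = 1.0 / (1.0 + 50.0 * (xs[:, None] - x[None, :]) ** 2)

true = np.zeros(n_cells)
true[12:18] = 1.0                             # compact anomalous body
d_obs = A @ true

model = np.zeros(n_cells)
planted = {14}                                # seed cell, known magnetization
model[14] = 1.0

def misfit(m):
    return np.linalg.norm(d_obs - A @ m)

for _ in range(n_cells):
    # candidates: un-planted neighbours of the current body
    frontier = {c + s for c in planted for s in (-1, 1)
                if 0 <= c + s < n_cells and (c + s) not in planted}
    best, best_fit = None, misfit(model)
    for c in frontier:
        trial = model.copy()
        trial[c] = 1.0
        fit = misfit(trial)
        if fit < best_fit:
            best, best_fit = c, fit
    if best is None:                          # no candidate improves the fit
        break
    planted.add(best)
    model[best] = 1.0

recovered = np.flatnonzero(model)
```

Because no candidate cell improves the fit once the true body is recovered, the growth stops by itself, which is the feature that keeps the estimated bodies compact around the seeds.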
Polarization of inverse plasmon scattering
NASA Technical Reports Server (NTRS)
Windsor, R. A.; Kellogg, P. J.
1974-01-01
The scattering of electrostatic plasma waves by a flux of ultrarelativistic electrons passing through a plasma gives rise to a radiation spectrum which is similar to a synchrotron radiation spectrum. This mechanism, first considered by Gailitis and Tsytovich, is analogous to inverse Compton scattering, and we have named it inverse plasmon scattering. For a power-law electron flux, both inverse plasmon scattering and synchrotron radiation have the same spectral index. In an attempt to distinguish between these mechanisms, we have calculated the polarization level expected from inverse plasmon scattering. The polarization level found is similar to that obtained from a synchrotron radiation source. This means that the radiation produced by the two mechanisms, synchrotron radiation and inverse plasmon scattering, is indistinguishable, and this attempt to differentiate between them by polarization effects has been unsuccessful.
Computational methods for inverse problems in geophysics: inversion of travel time observations
Pereyra, V.; Keller, H.B.; Lee, W.H.K.
1980-01-01
General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.
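Component (b), the nonlinear least-squares step, can be illustrated with a Gauss-Newton iteration on a toy geometry: recover a source position x0 and slowness s from straight-ray travel times t_i = s * sqrt((x_i - x0)^2 + h^2) for a buried source at known depth h. The geometry and values are invented for illustration:

```python
import numpy as np

# Gauss-Newton for a two-parameter travel-time inversion (invented toy setup).
xr = np.array([0.0, 2.0, 5.0, 9.0, 13.0])     # receiver positions (km)
h = 2.0                                        # known source depth (km)
true_x0, true_s = 6.0, 0.25                    # source position (km), slowness (s/km)
t_obs = true_s * np.sqrt((xr - true_x0) ** 2 + h ** 2)

x0, s = 3.0, 0.4                               # crude starting model
for _ in range(50):
    dist = np.sqrt((xr - x0) ** 2 + h ** 2)
    r = t_obs - s * dist                       # travel-time residuals
    # Jacobian of the predicted times with respect to (x0, s)
    J = np.column_stack([s * (x0 - xr) / dist, dist])
    delta, *_ = np.linalg.lstsq(J, r, rcond=None)
    x0, s = x0 + delta[0], s + delta[1]        # Gauss-Newton update
```

In the paper the forward evaluation inside each iteration is the two-point ray-tracing problem of component (c); here it is just the analytic distance formula.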
Fast Gibbs sampling for high-dimensional Bayesian inversion
NASA Astrophysics Data System (ADS)
Lucka, Felix
2016-11-01
Solving ill-posed inverse problems by Bayesian inference has recently attracted considerable attention. Compared to deterministic approaches, the probabilistic representation of the solution by the posterior distribution can be exploited to explore and quantify its uncertainties. In applications where the inverse solution is subject to further analysis procedures, this can be a significant advantage. Alongside theoretical progress, various new computational techniques allow us to sample very high dimensional posterior distributions: in (Lucka 2012 Inverse Problems 28 125012), a Markov chain Monte Carlo posterior sampler was developed for linear inverse problems with ℓ1-type priors. In this article, we extend this single component (SC) Gibbs-type sampler to a wide range of priors used in Bayesian inversion, such as general ℓpq priors with additional hard constraints. In addition to a fast computation of the conditional, SC densities in an explicit, parameterized form, a fast, robust and exact sampling from these one-dimensional densities is key to obtaining an efficient algorithm. We demonstrate that a generalization of slice sampling can utilize their specific structure for this task and illustrate the performance of the resulting slice-within-Gibbs samplers by different computed examples. These new samplers allow us to perform sample-based Bayesian inference in high-dimensional scenarios with certain priors for the first time, including the inversion of computed tomography data with the popular isotropic total variation prior.
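The one-dimensional sampler at the heart of a slice-within-Gibbs sweep can be sketched as a standard single-variable slice sampler (stepping-out plus shrinkage, after Neal 2003). The target below is a plain standard normal; the single-component densities in the paper are more structured, so this only illustrates the mechanism:

```python
import numpy as np

# Minimal single-variable slice sampler, the kind of exact 1-D sampler used
# inside a single-component Gibbs sweep. Target: standard normal (illustration).
rng = np.random.default_rng(42)

def logp(x):
    return -0.5 * x * x                      # unnormalized log-density

def slice_step(x, w=1.0):
    logy = logp(x) + np.log(rng.uniform())   # auxiliary height under the density
    # step out an interval [l, r] that contains the slice
    l = x - w * rng.uniform()
    r = l + w
    while logp(l) > logy:
        l -= w
    while logp(r) > logy:
        r += w
    # shrink until a point inside the slice is found
    while True:
        x1 = rng.uniform(l, r)
        if logp(x1) > logy:
            return x1
        if x1 < x:
            l = x1
        else:
            r = x1

x, chain = 0.0, []
for _ in range(5000):
    x = slice_step(x)
    chain.append(x)
chain = np.asarray(chain)
```

Slice sampling needs no step-size tuning and accepts every move, which is why it suits conditional densities whose shape changes from component to component during the Gibbs sweep.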
3D magnetotelluric inversion with full distortion matrix
NASA Astrophysics Data System (ADS)
Gribenko, A. V.; Zhdanov, M. S.
2014-12-01
Distortion of regional electric fields by local structures represents one of the major problems facing three-dimensional magnetotelluric (MT) interpretation. The effect of 3D local inhomogeneities on MT data can be described by a real 2x2 distortion matrix. In this project we develop a method of simultaneous inversion of the full MT impedance data for the 3D conductivity distribution and for the distortion matrix. Tikhonov regularization is employed to solve the resulting inverse problem. An integral equation method is used to compute MT responses. Minimization of the cost functional is achieved via the conjugate gradient method. The inversion algorithm is tested on synthetic data from Dublin Secret Model II (DSM 2), for which multiple inversion solutions are available for comparison. Inclusion of the distortion matrix provides faster convergence and allows coarser discretization of the near-surface while achieving similar or better data fits than inversion for the conductivity only with finely discretized shallow regions. As a field data example we chose a subset of the EarthScope MT dataset covering the Great Basin and adjacent areas of the Western United States. The Great Basin data inversion identified several prominent conductive zones which correlate well with areas of tectonic and geothermal activity.
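The galvanic distortion model used here is multiplicative: the observed impedance is the regional impedance left-multiplied by a real, frequency-independent 2x2 matrix C. A toy demonstration (invented values) that knowing C lets one recover the regional impedance exactly:

```python
import numpy as np

# Galvanic distortion of an MT impedance tensor: Z_obs = C @ Z, with C real.
C = np.array([[1.2, 0.1],
              [-0.05, 0.9]])                 # local 2x2 distortion matrix
Z = np.array([[0.1 + 0.2j, 1.0 + 0.8j],
              [-0.9 - 0.7j, -0.05 - 0.1j]])  # regional impedance at one period
Z_obs = C @ Z                                # what the MT site actually records
Z_rec = np.linalg.solve(C, Z_obs)            # undo the distortion, given C
```

In practice C is unknown, which is why the paper inverts simultaneously for the conductivity model and the distortion matrix rather than assuming C as above.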
Unlocking the spatial inversion of large scanning magnetic microscopy datasets
NASA Astrophysics Data System (ADS)
Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.
2013-12-01
Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of
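The spatial-domain formulation described above reduces to a non-negative least-squares fit of one measured field component. A tiny sketch using SciPy's NNLS solver on a made-up smooth kernel (not the actual dipole geometry of the microscope):

```python
import numpy as np
from scipy.optimize import nnls

# Recover non-negative moment magnitudes m from one field component d = G @ m.
# The Green's-matrix entries below are an invented smooth kernel.
x_src = np.linspace(0.0, 1.0, 25)            # source (sample) grid
x_obs = np.linspace(0.0, 1.0, 40)            # sensor positions
h = 0.05                                      # sensor-to-sample distance
G = h / ((x_obs[:, None] - x_src[None, :]) ** 2 + h ** 2)

m_true = np.zeros(25)
m_true[[6, 7, 15]] = [2.0, 1.0, 3.0]          # sparse, non-negative moments
d = G @ m_true

m_nnls, rnorm = nnls(G, d)                    # non-negativity enforced by NNLS
```

Unlike a Fourier-domain deconvolution, the solution here can never go negative, which is the physical constraint the text emphasizes; the cost is the much heavier spatial-domain computation that motivates the TNT algorithm.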
Source Estimation by Full Wave Form Inversion
Sjögreen, Björn; Petersson, N. Anders
2013-08-07
Given time-dependent ground motion recordings at a number of receiver stations, we solve the inverse problem for estimating the parameters of the seismic source. The source is modeled as a point moment tensor source, characterized by its location, moment tensor components, the start time, and frequency parameter (rise time) of its source time function. In total, there are 11 unknown parameters. We use a non-linear conjugate gradient algorithm to minimize the full waveform misfit between observed and computed ground motions at the receiver stations. An important underlying assumption of the minimization problem is that the wave propagation is accurately described by the elastic wave equation in a heterogeneous isotropic material. We use a fourth order accurate finite difference method, developed in [12], to evolve the waves forwards in time. The adjoint wave equation corresponding to the discretized elastic wave equation is used to compute the gradient of the misfit, which is needed by the non-linear conjugate gradient minimization algorithm. A new discretization of the point moment tensor source is derived that guarantees that the Hessian of the misfit is a continuous function of the source location. An efficient approach for calculating the Hessian is also presented. We show how the Hessian can be used to scale the problem to improve the convergence of the non-linear conjugate gradient algorithm. Numerical experiments are presented for estimating the source parameters from synthetic data in a layer over half-space problem (LOH.1), illustrating rapid convergence of the proposed approach.
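The optimization loop itself can be sketched independently of the wave solver: a Polak-Ribiere nonlinear conjugate gradient iteration with Armijo backtracking, minimizing a waveform misfit over two toy source parameters (amplitude and start time of a Gaussian source time function). The "forward model" below is just the wavelet itself, an invented stand-in for the finite-difference propagation; values are made up:

```python
import numpy as np

# Nonlinear CG (PR+ with Armijo backtracking) on a two-parameter toy misfit.
t = np.linspace(0.0, 4.0, 200)
sig = 0.3                                     # fixed wavelet width

def misfit_and_grad(p):
    a, t0 = p
    w = np.exp(-0.5 * ((t - t0) / sig) ** 2)
    r = a * w - d_obs                         # waveform residual
    f = 0.5 * np.dot(r, r)
    grad = np.array([np.dot(r, w),                          # df/da
                     np.dot(r, a * w * (t - t0) / sig**2)]) # df/dt0
    return f, grad

d_obs = 2.0 * np.exp(-0.5 * ((t - 1.7) / sig) ** 2)  # synthetic "recording"

p = np.array([1.0, 1.4])                      # crude starting model
f, g = misfit_and_grad(p)
s = -g                                        # initial search direction
for _ in range(200):
    step = 1.0
    while misfit_and_grad(p + step * s)[0] > f + 1e-4 * step * np.dot(g, s):
        step *= 0.5                           # Armijo backtracking
        if step < 1e-12:
            break
    p = p + step * s
    f_new, g_new = misfit_and_grad(p)
    beta = max(0.0, np.dot(g_new, g_new - g) / np.dot(g, g))  # Polak-Ribiere+
    s = -g_new + beta * s
    f, g = f_new, g_new
    if np.linalg.norm(g) < 1e-10:
        break
```

In the paper the gradient comes from an adjoint wave-equation solve and the Hessian is used to rescale the parameters; here the gradient is analytic and no scaling is needed.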
Two and three dimensional magnetotelluric inversion
Booker, J.
1993-01-01
Electrical conductivity depends on properties such as the presence of ionic fluids in interconnected pores that are difficult to sense with other remote sensing techniques. Thus improved imaging of underground electrical structure has wide practical importance in exploring for groundwater, mineral and geothermal resources, and in assessing the diffusion of fluids in oil fields and waste sites. Because the electromagnetic inverse problem is fundamentally multi-dimensional, most imaging algorithms saturate available computer power long before they can deal with the complete data set. We have developed an algorithm to directly invert large multi-dimensional data sets that is orders of magnitude faster than competing methods. We have proven that a two-dimensional (2D) version of the algorithm is highly effective for real data and have made substantial progress towards a three-dimensional (3D) version. We propose to cure identified shortcomings and substantially expand the utility of the existing 2D program, overcome identified difficulties with extending our method to three dimensions, and embark on an investigation of related EM imaging techniques which may have the potential for even further increasing resolution.
NASA Astrophysics Data System (ADS)
Koehn, Daniel; Toussaint, Renaud; Ebner, Marcus; Gomez-Rivas, Enrique; Bons, Paul; Rood, Daisy
2014-05-01
Stylolites are localized dissolution seams that can be found in a variety of rocks, and can form due to sediment compaction or tectonic forces. Dissolution of the host-rock next to the stylolite is a function of the applied stress on the stylolite plane. Stylolite teeth indicate the direction of the main compressive stress. Recent advances have shown that the stylolite roughness also shows a stress scaling relation that can be used to calculate magnitudes of stress. Elastic and surface energies produce a different roughness, and the transition between the two is stress dependent and can be quantified. In order to measure the roughness, a two- or three-dimensional section of a stylolite plane is taken and transferred to a one-dimensional function. The cross-over in the roughness is then picked with the help of an FFT plot. Using this method the burial depth of sedimentary stylolites can be determined. Moreover, tectonic stylolites can be used to determine the full three-dimensional stress tensor if the paleodepth of the tectonic stylolite is known. Stylolites can also be used to find fault offsets and to understand when these faults were active and what the paleotopography looked like at the time the stylolites grew. However, uncertainties remain since Young's modulus, Poisson's ratio and surface energy may vary in rocks. In addition, the stylolites record only a snapshot in time, probably the moment when they closed and stopped dissolving. We show examples of the use of stress inversion for stylolite formation conditions in different tectonic settings, and discuss the potential of the method.
Microwave inverse Cerenkov accelerator
NASA Astrophysics Data System (ADS)
Zhang, T. B.; Marshall, T. C.; LaPointe, M. A.; Hirshfield, J. L.
1997-03-01
A Microwave Inverse Cerenkov Accelerator (MICA) is currently under construction at the Yale Beam Physics Laboratory. The accelerating structure in MICA consists of an axisymmetric dielectrically lined waveguide. For the injection of 6 MeV microbunches from a 2.856 GHz RF gun, and subsequent acceleration by the TM01 fields, particle simulation studies predict that an acceleration gradient of 6.3 MV/m can be achieved with a traveling-wave power of 15 MW applied to the structure. Synchronous injection into a narrow phase window is shown to allow trapping of all injected particles. The RF fields of the accelerating structure are shown to provide radial focusing, so that longitudinal and transverse emittance growth during acceleration is small, and that no external magnetic fields are required for focusing. For 0.16 nC, 5 psec microbunches, the normalized emittance of the accelerated beam is predicted to be less than 5π mm-mrad. Experiments on sample alumina tubes have been conducted that verify the theoretical dispersion relation for the TM01 mode over a two-to-one range in frequency. No excitation of axisymmetric or non-axisymmetric competing waveguide modes was observed. High power tests showed that tangential electric fields at the inner surface of an uncoated sample of alumina pipe could be sustained up to at least 8.4 MV/m without breakdown. These considerations suggest that a MICA test accelerator can be built to examine these predictions using an available RF power source, 6 MeV RF gun and associated beam line.
NASA Technical Reports Server (NTRS)
Thuan, Trinh X.; Puschell, Jeffery J.
1989-01-01
Eighty-four brightest cluster members (BCMs) in the complete sample of high Galactic latitude nearby Abell clusters of Hoessel, Gunn, and Thuan (HGT) are investigated. The stellar populations in BCMs using near-infrared and optical-near-infrared colors are studied. Brighter BCMs have redder (J-K) and (V-K) colors, suggesting a metallicity increase in brighter galaxies. The larger dispersion of their colors implies that BCMs possess more heterogeneous stellar populations than their lower luminosity counterparts, the normal elliptical galaxies. Special attention is paid to BCMs associated with cooling flows. BCMs with larger accretion rates have bluer (V-K) colors due to ultraviolet excesses and are brighter in the visual wavelength region, but not in the infrared. It is suggested that part of the X-ray emitting cooling gas is converted into high- and intermediate-mass stars emitting in the blue and visible, but not in the infrared. The properties of BCMs as standard candles in the near-infrared are examined and compared with those in the optical.
Zong, Jian-Fa; Peng, Yun-Ru; Bao, Guan-Hu; Hou, Ru-Yan; Wan, Xiao-Chun
2016-01-01
Two new oleanane-type saponins, named oleiferasaponins C₄ (1) and C₅ (2), were isolated from Camellia oleifera Abel. seed cake residue. Their respective structures were identified as 16α-hydroxy-22α-O-angeloyl-23α-aldehyde-28-dihydroxymethylene-olean-12-ene-3β-O-[β-d-galacto-pyranosyl-(1→2)]-[β-d-glucopyranosyl-(1→2)-β-d-galactopyranosy-(1→3)]-β-d-glucopyranosid-uronic acid methyl ester (1) and 16α-hydroxy-22α-O-angeloyl-23α-aldehyde-28-dihydroxy-methylene-olean-12-ene-3β-O-[β-d-galactopyranosyl-(1→2)]-[β-d-galactopyranosyl-(1→3)]-β-d-glucopyranosiduronic acid methyl ester (2) through 1D- and 2D-NMR, HR-ESI-MS, and GC-MS spectroscopic methods. The two compounds exhibited potent cytotoxic activities against five human tumor cell lines (BEL-7402, BGC-823, MCF-7, HL-60 and KB).
A multiwavelength view of cooling versus AGN heating in the X-ray luminous cool-core of Abell 3581
NASA Astrophysics Data System (ADS)
Canning, R. E. A.; Sun, M.; Sanders, J. S.; Clarke, T. E.; Fabian, A. C.; Giacintucci, S.; Lal, D. V.; Werner, N.; Allen, S. W.; Donahue, M.; Edge, A. C.; Johnstone, R. M.; Nulsen, P. E. J.; Salomé, P.; Sarazin, C. L.
2013-10-01
We report the results of a multiwavelength study of the nearby galaxy group, Abell 3581 (z = 0.0218). This system hosts the most luminous cool core of any nearby group and exhibits active radio mode feedback from the supermassive black hole in its brightest group galaxy, IC 4374. The brightest galaxy has suffered multiple active galactic nucleus outbursts, blowing bubbles into the surrounding hot gas, which have resulted in the uplift of cool ionized gas into the surrounding hot intragroup medium. High velocities, indicative of an outflow, are observed close to the nucleus and coincident with the radio jet. Thin dusty filaments accompany the uplifted, ionized gas. No extended star formation is observed; however, a young cluster is detected just north of the nucleus. The direction of rise of the bubbles has changed between outbursts. This directional change is likely due to sloshing motions of the intragroup medium. These sloshing motions also appear to be actively stripping the X-ray cool core, as indicated by a spiralling cold front of high-metallicity, low-temperature, low entropy gas.
NASA Astrophysics Data System (ADS)
Grainge, Keith; Jones, Michael E.; Pooley, Guy; Saunders, Richard; Edge, Alastair; Grainger, William F.; Kneissl, Rüdiger
2002-06-01
We describe our methods for measuring the Hubble constant from Ryle Telescope (RT) interferometric observations of the Sunyaev-Zel'dovich (SZ) effect from a galaxy cluster and observation of the cluster X-ray emission. We analyse the error budget in this method: as well as radio and X-ray random errors, we consider the effects of clumping and temperature differences in the cluster gas, of the kinetic SZ effect, of bremsstrahlung emission at radio wavelengths, of the gravitational lensing of background radio sources and of primary calibration error. Using RT, ASCA and ROSAT observations of Abell 1413, we find that random errors dominate over systematic ones, and estimate
NASA Astrophysics Data System (ADS)
Kachar, H.; Mobasheri, M. R.; Abkar, A. A.; Rahim Zadegan, M.
2015-12-01
An increase of temperature with height in the troposphere is called a temperature inversion, characterized by parameters such as strength and depth. Inversion strength is defined as the temperature difference between the surface and the top of the inversion, and the inversion depth is defined as the height of the inversion top above the surface. The common approach to determining these parameters is the use of radiosonde soundings, but these measurements are too sparse. The main objective of this study is the detection and modeling of temperature inversions using MODIS thermal infrared data. There are more than 180 days per year on which temperature inversion conditions are present in the city of Kermanshah, so the Kermanshah weather station was selected as the study area. 90 inversion days were selected from 2007 to 2008 for which the sky was clear and radiosonde data were available. The brightness temperature for all thermal infrared bands of MODIS was calculated for these days. The brightness temperature difference between each of the thermal infrared bands of MODIS and band 31 was found to be sensitive to the strength and depth of the temperature inversion. Correlation coefficients between these pairs and the inversion depth and strength, both calculated from radiosonde data, were then evaluated. The results showed poor linear correlation. This was found to be due to changes in the atmospheric water vapor content and the relatively weak temperature inversion strength and depth occurring in Kermanshah. Polynomial mathematical models and artificial intelligence algorithms were therefore deployed for detecting and modeling the temperature inversion, and a model with the fewest terms and the highest possible accuracy was obtained. The model was tested using 20 independent test data. Results indicate that the inversion strength can be estimated with an RMSE of 0.84 °C and R2 of 0.90, and the inversion depth with an RMSE of 54.56 m and R2 of 0.86.
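The strength and depth definitions above are simple to compute from a sounding: find the first level at which temperature stops increasing with height, then take the temperature difference to the surface and the height of that level. A sketch on a synthetic radiosonde-like profile (invented values):

```python
import numpy as np

# Strength and depth of a surface-based temperature inversion:
# strength = T(top) - T(surface), depth = height of the inversion top.
z = np.array([0., 100., 200., 300., 400., 600., 800., 1000.])   # height (m)
T = np.array([-2.0, 0.5, 2.0, 3.1, 2.6, 1.8, 0.9, -0.2])        # temp (deg C)

dT = np.diff(T)
top = np.argmax(dT <= 0)        # first level where temperature stops increasing
strength = T[top] - T[0]        # deg C
depth = z[top]                  # m
```

Note that this assumes a surface-based inversion is present; a real implementation would first check that T increases at the lowest levels before applying the formula.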
Temperature inversion in China seas
NASA Astrophysics Data System (ADS)
Hao, Jiajia; Chen, Yongli; Wang, Fan
2010-12-01
Temperature inversion was reported as a common phenomenon in the areas near the southeastern Chinese coast (region A), west and south of the Korean Peninsula (region B), and north and east of the Shandong Peninsula (region C) during October-May in the present study, based on hydrographic data archived from 1930 through 2001 (319,029 profiles). The inversion was found to be remarkable with obvious temporal and spatial variabilities in both magnitude and coverage, with higher probabilities in region A (up to about 60%) and region C (40%-50%) than in region B (15%-20%). The analysis shows that seasonal variation of the net air-sea heat flux is closely related to the occurrence time of the inversion in the three areas, while the Yangtze and Yellow river freshwater plumes in the surface layer and ocean origin saline water in the subsurface layer maintain stable stratification. It seems that the evaporation/excessive precipitation flux makes little contribution to maintaining the stable inversion. Advection of surface fresh water by the wind-driven coastal currents results in the expansion of inversion in regions A and C. The inversion lasts for the longest period in region A (October-May) sustained by the Taiwan Warm Current carrying the subsurface saline water, while evolution of the inversion in region B is mainly controlled by the Yellow Sea Warm Current.
Solving the structural inverse gravity problem by the modified gradient methods
NASA Astrophysics Data System (ADS)
Martyshko, P. S.; Akimova, E. N.; Misilov, V. E.
2016-09-01
New methods for solving the three-dimensional inverse gravity problem in the class of contact surfaces are described. Based on the approach previously suggested by the authors, new algorithms are developed; their application significantly reduces the number of iterations and the computing time compared to the previous ones. The algorithms have been numerically implemented on a multicore processor. An example of solving the structural inverse gravity problem for a model of a four-layer medium (with the use of gravity field measurements) is presented.
Givental Graphs and Inversion Symmetry
NASA Astrophysics Data System (ADS)
Dunin-Barkowski, Petr; Shadrin, Sergey; Spitz, Loek
2013-05-01
Inversion symmetry is a very non-trivial discrete symmetry of Frobenius manifolds. It was obtained by Dubrovin from one of the elementary Schlesinger transformations of a special ODE associated to a Frobenius manifold. In this paper, we review the Givental group action on Frobenius manifolds in terms of Feynman graphs and obtain an interpretation of the inversion symmetry in terms of the action of the Givental group. We also consider the implication of this interpretation of the inversion symmetry for the Schlesinger transformations and for the Hamiltonians of the associated principal hierarchy.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
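The multistart flavour of an MLSL-style search can be sketched in a few lines: run a local search from many random starting points and keep the distinct minimizers. Here the objective is f(x) = (x^2 - 1)^2, whose two solutions at x = -1 and x = +1 mimic a non-unique inverse problem; the local phase is plain gradient descent rather than MADS, and everything is an invented toy:

```python
import numpy as np

# Multistart global search: distinct local minima of f(x) = (x^2 - 1)^2.
rng = np.random.default_rng(3)

def grad(x):
    return 4.0 * x * (x * x - 1.0)            # derivative of (x^2 - 1)^2

solutions = []
for x0 in rng.uniform(-2.0, 2.0, size=20):
    x = x0
    for _ in range(500):
        x -= 0.05 * grad(x)                   # gradient-descent local phase
    # keep the minimizer only if it is genuinely new
    if not any(abs(x - s) < 1e-3 for s in solutions):
        solutions.append(x)
solutions.sort()
```

MLSL proper is cleverer than this: it avoids re-starting local searches inside the basin of an already-found minimum, which matters when each local search is an expensive transport calculation.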
Source Inversion Validation: Quantifying Uncertainties in Earthquake Source Inversions
NASA Astrophysics Data System (ADS)
Mai, P. M.; Page, M. T.; Schorlemmer, D.
2010-12-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Source inversion methods thus represent an important research tool in seismology to unravel the complexity of earthquake ruptures. Subsequently, source-inversion results are used to study earthquake mechanics, to develop spontaneous dynamic rupture models, to build models for generating rupture realizations for ground-motion simulations, and to perform Coulomb-stress modeling. In all these applications, the underlying finite-source rupture models are treated as “data” (input information), but the uncertainties in these data (i.e. source models obtained from solving an inherently ill-posed inverse problem) are hardly known, and almost always neglected. The Source Inversion Validation (SIV) project attempts to better understand the intra-event variability of earthquake rupture models. We plan to build a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion and to facilitate the development of robust approaches to quantifying rupture-model uncertainties. Our contribution reviews the current status of the SIV project and recent forward-modeling tests for point and extended sources in layered media, and discusses the strategy of the SIV project for the coming years.
A time domain sampling method for inverse acoustic scattering problems
NASA Astrophysics Data System (ADS)
Guo, Yukun; Hömberg, Dietmar; Hu, Guanghui; Li, Jingzhi; Liu, Hongyu
2016-06-01
This work concerns the inverse scattering problems of imaging unknown/inaccessible scatterers by transient acoustic near-field measurements. Based on the analysis of the migration method, we propose efficient and effective sampling schemes for imaging small and extended scatterers from knowledge of time-dependent scattered data due to incident impulsive point sources. Though the inverse scattering problems are known to be nonlinear and ill-posed, the proposed imaging algorithms are totally "direct" involving only integral calculations on the measurement surface. Theoretical justifications are presented and numerical experiments are conducted to demonstrate the effectiveness and robustness of our methods. In particular, the proposed static imaging functionals enhance the performance of the total focusing method (TFM) and the dynamic imaging functionals show analogous behavior to the time reversal inversion but without solving time-dependent wave equations.
A fast Stokes inversion technique based on quadratic regression
NASA Astrophysics Data System (ADS)
Teng, Fei; Deng, Yuan-Yong
2016-05-01
Stokes inversion calculation is a key process in resolving polarization information on radiation from the Sun and obtaining the associated vector magnetic fields. Even in the cases of simple local thermodynamic equilibrium (LTE) and where the Milne-Eddington approximation is valid, the inversion problem may not be easy to solve. The initial values for the iterations are important in handling the case with multiple minima. In this paper, we develop a fast inversion technique without iterations. The time taken for computation is only 1/100 the time that the iterative algorithm takes. In addition, it can provide available initial values even in cases with lower spectral resolutions. This strategy is useful for a filter-type Stokes spectrograph, such as SDO/HMI and the developed two-dimensional real-time spectrograph (2DS).
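The no-iteration idea can be shown in miniature: learn a regression from simulated observables to the model parameter offline, then "invert" each new measurement with a single polynomial evaluation. The forward model below (a saturating Stokes-V-like response, B mapped through tanh) is invented purely for illustration and is far simpler than a Milne-Eddington profile:

```python
import numpy as np

# Offline: fit a quadratic regression from observable V to parameter B.
# Online: invert new measurements with one polynomial evaluation, no iterations.
B_train = np.linspace(0.1, 1.2, 100)          # field strengths (arbitrary units)
V = np.tanh(B_train)                          # synthetic saturating observable

# quadratic regression B ~ c0 + c1*V + c2*V^2, fit once by least squares
X = np.column_stack([np.ones_like(V), V, V * V])
coef, *_ = np.linalg.lstsq(X, B_train, rcond=None)

def fast_invert(v):
    return coef[0] + coef[1] * v + coef[2] * v * v

B_est = fast_invert(np.tanh(0.8))             # single evaluation
```

As in the paper, the regression output can either be used directly or serve as a good initial value for a subsequent iterative refinement when higher accuracy is needed.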
Counting Magnetic Bipoles on the Sun by Polarity Inversion
NASA Technical Reports Server (NTRS)
Jones, Harrison P.
2004-01-01
This paper presents a simple and efficient algorithm for deriving images of polarity inversion from NSO/Kitt Peak magnetograms without use of contouring routines and shows by example how these maps depend upon the spatial scale used for filtering the raw data. Smaller filtering scales produce many localized closed contours in mixed polarity regions, while supergranular and larger filtering scales produce more global patterns. The apparent continuity of an inversion line depends on how the spatial filtering is accomplished, but its shape depends only on scale. The total length of the magnetic polarity inversion contours varies as a power law of the filter scale, with a fractal dimension of order 1.9. The amplitude, but not the exponent, of this power-law relation varies with solar activity. The results are compared to similar analyses of areal distributions of bipolar magnetic regions.
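Flagging inversion-line pixels without a contouring routine amounts to smoothing the magnetogram and marking pixels whose sign differs from a neighbour's. The sketch below uses a synthetic bipole plus mixed-polarity noise standing in for a Kitt Peak map (the map, scales, and pixel-count proxy for line length are all invented) and reproduces the qualitative scale dependence: a larger filter yields a shorter, more global inversion line:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Polarity-inversion pixels at two filter scales on a synthetic magnetogram.
rng = np.random.default_rng(7)
n = 128
yy, xx = np.mgrid[0:n, 0:n]
bmap = (np.exp(-((xx - 40) ** 2 + (yy - 64) ** 2) / 200.0)     # positive pole
        - np.exp(-((xx - 88) ** 2 + (yy - 64) ** 2) / 200.0)   # negative pole
        + 0.3 * rng.standard_normal((n, n)))                   # mixed polarity

def inversion_pixels(b, scale):
    s = gaussian_filter(b, scale)
    # a pixel lies on an inversion line if its sign differs from a neighbour's
    flips = (np.sign(s[:, 1:]) != np.sign(s[:, :-1]))[1:, :] | \
            (np.sign(s[1:, :]) != np.sign(s[:-1, :]))[:, 1:]
    return int(flips.sum())                   # crude proxy for total line length

small, large = inversion_pixels(bmap, 1.0), inversion_pixels(bmap, 8.0)
```

With the small filter the noise produces many localized closed contours, so the flip count is large; with the supergranular-scale filter only the single line between the two poles survives.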
An application of Ray + Born inversion on real data
Forgues, E.; Beukelaar, P. de; Coppens, F.; Richard, V.; Lambare, G.
1994-12-31
The authors present a linearized 2D acoustic and elastic multiparameter inversion of real marine seismic reflection data from the Gulf of Mexico. They solve the forward problem by a combination of ray theory and the Born approximation, which takes full advantage of the efficiency of ray tracing in terms of computing cost and physical comprehension. Lateral variations of background velocities can be introduced in the 2D ray tracing algorithm, and a 2.5D approximation is made in order to take 3D propagation into account. The multiparameter inversion method is based on minimization of a weighted cost function. This weighted cost function is estimated from parameters associated with ray and paraxial ray theory and is introduced in order to approximately diagonalize the Hessian. This form of the Hessian allows the study of spatial resolution and of the conditioning of the inversion.
Grindinger, C.M.
1992-05-01
This study uses Hawaiian Rainband Project (HaRP) data, from the summer of 1991, to show that a boundary layer wind profiler can be used to measure the trade wind inversion. An algorithm has been developed for the profiler that objectively measures the depth of the moist oceanic boundary layer. The Hilo inversion, measured by radiosonde, is highly correlated with the moist oceanic boundary layer measured by the profiler at Paradise Park. The inversion height on windward Hawaii is typically 2253 ± 514 m. The inversion height varies not only on a daily basis, but on a less than hourly basis. It has a diurnal, as well as a three to four day, cycle. There appears to be no consistent relationship between inversion height and precipitation. Currently, this profiler is capable of making high frequency (12 minute) measurements of the inversion base variation, as well as other features.
Gradient-based inverse extreme ultraviolet lithography.
Ma, Xu; Wang, Jie; Chen, Xuanbo; Li, Yanqiu; Arce, Gonzalo R
2015-08-20
Extreme ultraviolet (EUV) lithography is the most promising successor of current deep ultraviolet (DUV) lithography. The very short wavelength, reflective optics, and nontelecentric structure of EUV lithography systems introduce new imaging phenomena into the lithographic image synthesis problem. This paper develops a gradient-based inverse algorithm for EUV lithography systems to effectively improve the image fidelity by comprehensively compensating the optical proximity effect, flare, photoresist, and mask shadowing effects. A block-based method is applied to iteratively optimize the main features and subresolution assist features (SRAFs) of mask patterns, while simultaneously preserving the mask manufacturability. The mask shadowing effect may be compensated by a retargeting method based on a calibrated shadowing model. Illustrative simulations at 22 and 16 nm technology nodes are presented to validate the effectiveness of the proposed methods. PMID:26368764
Neural network fusion and inversion model for NDIR sensor measurement
NASA Astrophysics Data System (ADS)
Cieszczyk, Sławomir; Komada, Paweł
2015-12-01
This article presents the problem of the impact of environmental disturbances on the extraction of information from measurements. As an example, an NDIR sensor is studied, which can measure industrial or environmental gases of varying temperature. The problem of changing influence quantities arises in many industrial measurements, and developing algorithms that are robust to changing conditions is a key challenge. Additional input variables appear in the resulting mathematical model of the inverse problem. Because of the difficulty of describing the inverse model mathematically, neural networks have been applied; they do not require initial assumptions about the structure of the created model. They provide correction of sensor non-linearity as well as correction of the influence of interfering quantities. The analyzed problem requires an additional measurement of the disturbing quantity and its combination with the measurement of the primary quantity. Combining this information with the use of neural networks belongs to the class of sensor fusion algorithms.
NASA Astrophysics Data System (ADS)
Montahaei, Mansoure; Oskooi, Behrooz
2014-02-01
An extension of an artificial neural network (ANN) approach to solve the magnetotelluric (MT) inverse problem for azimuthally anisotropic resistivities is presented and applied for a real dataset. Three different model classes, containing general 1-D and 2-D azimuthally anisotropic features, have been considered. For each model class, characteristics of three-layer feed forward ANNs trained through an error back propagation algorithm have been adjusted to approximate the inverse modeling function. It appears that, at least for synthetic models, reasonable results would be obtained by applying the amplitudes of the complex impedance tensor elements as inputs. Furthermore, the Levenberg-Marquardt algorithm possesses optimal performance as a learning paradigm for this problem. The evaluation of applicability of the trained ANNs for unknown data sets excluded from the learning procedure reveals that the trained ANNs possess acceptable interpolation and extrapolation abilities to estimate model parameters accurately. This method was also successfully used for a field dataset wherein anisotropy had been previously recognized.
Uterine Inversion; A case report.
Bouchikhi, C; Saadi, H; Fakhir, B; Chaara, H; Bouguern, H; Banani, A; Melhouf, Ma
2008-01-01
The puerperal uterine inversion is a rare and severe complication occurring in the third stage of labour. The mechanisms are not completely known; however, extrinsic factors such as oxytocic arrest after a prolonged labour, umbilical cord traction, or abdominal expression have been implicated. Other intrinsic factors such as primiparity, uterine hypotonia, various placental localizations, fundic myoma, or a short umbilical cord have also been reported. The diagnosis of uterine inversion is mainly supported by clinical symptoms and is based on three elements: haemorrhage, shock, and strong pelvic pain. Immediate treatment of the uterine inversion is required; it is based on medical resuscitation associated with, firstly, manual reduction and then surgical treatment using various techniques. We report an observation of a 25-year-old grand multiparous patient with a subacute uterine inversion after delivery at home. PMID:21516244
Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Mahan, G. D.
2014-09-21
We calculate the binding energy of an electron bound to a donor in a semiconductor inverse opal. Inverse opals have two kinds of cavities, which we call octahedral and tetrahedral, according to their group symmetry. We put the donor in the center of each of these two cavities and obtain the binding energy. The binding energies become very large when the inverse opal is made from templates with small spheres. For spheres less than 50 nm in diameter, the donor binding can increase to several times its unconfined value. Then electrons become tightly bound to the donor and are unlikely to be thermally activated to the semiconductor conduction band. This conclusion suggests that inverse opals will be poor conductors.
Inversion layer MOS solar cells
NASA Technical Reports Server (NTRS)
Ho, Fat Duen
1986-01-01
Inversion layer (IL) Metal Oxide Semiconductor (MOS) solar cells were fabricated. The fabrication technique and problems are discussed. A plan for modeling IL cells is presented. Future work in this area is addressed.
Temperature Inversions Have Cold Bottoms.
ERIC Educational Resources Information Center
Bohren, Craig F.; Brown, Gail M.
1982-01-01
Uses discussion and illustrations of several demonstrations on air temperature differences and atmospheric stability to explain the phenomena of temperature inversions. Relates this to the smog in Los Angeles and discusses the implications. (DC)
Inverse polynomial reconstruction method in DCT domain
NASA Astrophysics Data System (ADS)
Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen
2012-12-01
The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article, we derive a framework for the inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on the minimum description length principle and cross-validation are devised to select the polynomial orders, as required by the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework achieve significant improvements over their wavelet counterparts for this class of signals.
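The core of the inverse polynomial reconstruction idea is that, for a signal that is exactly polynomial, the map from polynomial coefficients to DCT coefficients is linear, so a few DCT coefficients suffice to invert it. A minimal numpy sketch of that idea (the grid size, degree, and truncation level below are illustrative assumptions, not the article's parameters):

```python
import numpy as np

# Explicit (unnormalized) DCT-II matrix: c = C @ f,
# with C[k, n] = cos(pi * (n + 0.5) * k / N).
N, deg, M = 64, 4, 16                 # samples, polynomial degree, DCT coeffs kept
idx = np.arange(N)
C = np.cos(np.pi * np.outer(idx, idx + 0.5) / N)

x = np.linspace(-1.0, 1.0, N)
V = np.vander(x, deg + 1)             # polynomial basis evaluated on the grid

a_true = np.array([0.3, -1.0, 0.5, 2.0, -0.2])
f = V @ a_true                        # a genuinely polynomial signal
c = (C @ f)[:M]                       # keep only the first M DCT coefficients

# Invert the linear map (first M DCT rows composed with the basis)
# back to polynomial coefficients by least squares.
a_rec, *_ = np.linalg.lstsq(C[:M] @ V, c, rcond=None)
```

For a polynomial of degree at most deg the recovery is exact up to round-off, which mirrors the article's claim that an nth-order approximation is exact for polynomials of matching degree.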
3D stochastic inversion of magnetic data
NASA Astrophysics Data System (ADS)
Shamsipour, Pejman; Chouteau, Michel; Marcotte, Denis
2011-04-01
A stochastic inversion method based on a geostatistical approach is presented to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. Cokriging, the method which is used in this paper, is a method of estimation that minimizes the theoretical estimation error variance by using auto- and cross-correlations of several variables. The covariances for total field, susceptibility and total field-susceptibility are estimated using the observed data. Then, the susceptibility is cokriged or simulated as the primary variable. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. The algorithm assumes there is no remanent magnetization and the observation data represent only induced magnetization effects. The method is applied on different synthetic models to demonstrate its suitability for 3D inversion of magnetic data. A case study using ground measurements of total field at the Perseverance mine (Quebec, Canada) is presented. The recovered 3D susceptibility model provides beneficial information that can be used to analyze the geology of massive sulfide for the domain under study.
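Cokriging extends the ordinary kriging normal equations with cross-covariances between variables (here total field and susceptibility). A minimal 1D single-variable sketch of those normal equations, with an assumed Gaussian covariance model (not the covariances estimated in the paper):

```python
import numpy as np

def gauss_cov(h, sill=1.0, rng=2.0):
    # assumed Gaussian covariance model of lag distance h
    return sill * np.exp(-(h / rng) ** 2)

def simple_kriging(x_obs, y_obs, x0, mean=0.0):
    """Minimal simple-kriging estimate at x0 from 1D data.
    Cokriging extends exactly these normal equations with auto- and
    cross-covariances of several variables (and depth weighting,
    in the paper's magnetic case)."""
    C = gauss_cov(np.abs(x_obs[:, None] - x_obs[None, :]))  # data covariances
    c0 = gauss_cov(np.abs(x_obs - x0))                      # data-target covariances
    lam = np.linalg.solve(C, c0)                            # kriging weights
    return mean + lam @ (y_obs - mean)
```

With a positive-definite covariance and no nugget effect, the estimator is an exact interpolator: at an observed location it reproduces the observed value.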
Inverse Ising inference with correlated samples
NASA Astrophysics Data System (ADS)
Obermayer, Benedikt; Levine, Erel
2014-12-01
Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem.
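The "popular reweighting schemes" referred to above typically down-weight each sequence by the number of its near-duplicates in the alignment. A minimal sketch of that standard scheme (the similarity threshold is an illustrative assumption):

```python
import numpy as np

def sequence_weights(msa, theta=0.2):
    """Standard phylogenetic reweighting: each sequence gets weight
    1 / (number of sequences within normalized Hamming distance theta
    of it, including itself). msa: (N, L) integer-coded alignment."""
    N, _ = msa.shape
    neighbors = np.ones(N)                     # every sequence counts itself
    for i in range(N):
        for j in range(i + 1, N):
            if np.mean(msa[i] != msa[j]) < theta:
                neighbors[i] += 1
                neighbors[j] += 1
    return 1.0 / neighbors
```

Identical sequences split their weight while a unique sequence keeps weight 1; the abstract's point is that this heuristic removes only part of the phylogenetic bias.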
Inverse statistical mechanics, lattice packings, and glasses
NASA Astrophysics Data System (ADS)
Marcotte, Etienne
Computer simulation methods enable the investigation of systems and properties that are intractable by purely analytical or experimental approaches. Each chapter of this dissertation contains an application of simulation methods to solve complex physical problems consisting of interacting many-particle or many-spin systems. The problems studied in this dissertation can be divided up into the following two broad categories: inverse and forward problems. The inverse problems considered are those in which we construct an interaction potential such that the corresponding ground state is a targeted configuration. In Chapters 2 and 3, we devise convex pair-potential functions that result in low-coordinated ground states. Chapter 2 describes targeted ground states that are the square and honeycomb crystals, while in Chapter 3 the targeted ground state is the diamond crystal. Chapter 4 applies similar techniques to explicitly enumerate all unique ground states up to a given system size, for spin configurations that interact according to generalized isotropic Ising potentials with finite range. We also consider forward statistical-mechanical problems. In Chapter 5, we adapt a linear programming algorithm to find the densest lattice packings across Euclidean space dimensions. In Chapter 6, we demonstrate that for two different glass models a signature of the glass transition is apparent well before the transition temperature is reached. In both models, this signature appears as nonequilibrium length scales that grow upon supercooling.
Moebius inversion formula and inverting lattice sums
NASA Astrophysics Data System (ADS)
Millane, Rick P.
2000-11-01
The Möbius inversion formula is an interesting theorem from number theory that has application to a number of inverse problems, particularly lattice problems. Specific inverse problems, however, often require related Möbius inversion formulae that can be derived from the fundamental formula, but their derivation is not easy for the non-specialist. Examples of the kinds of inversion formulae that can be derived and their application to inverse lattice problems are described.
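For reference, the fundamental formula in question reads: if f(n) = Σ_{d|n} g(d), then g(n) = Σ_{d|n} μ(d) f(n/d), where μ is the Möbius function. A small self-contained sketch:

```python
def mobius(n):
    """Möbius function mu(n) by trial factorization: 0 if n has a
    squared prime factor, else (-1)^(number of prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:                     # one remaining prime factor
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def f_from_g(g, n):
    # forward direction: f(n) = sum of g over divisors of n
    return sum(g(d) for d in divisors(n))

def g_recovered(f, n):
    # Möbius inversion: g(n) = sum_{d|n} mu(d) * f(n // d)
    return sum(mobius(d) * f(n // d) for d in divisors(n))
```

Taking g(d) = d makes f the sum-of-divisors function σ(n), and the inversion recovers g exactly.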
Fast Linear Algebra Applications in Stochastic Inversion and Data Assimilation
NASA Astrophysics Data System (ADS)
Kitanidis, P. K.; Ambikasaran, S.; Saibaba, A.; Li, J. Y.; Darve, E. F.
2012-12-01
Inverse problems and data assimilation problems arise frequently in earth-science applications, such as hydraulic tomography, cross-well seismic travel-time tomography, electrical resistivity tomography, contaminant source identification, assimilation of weather data, etc. A common feature amongst inverse problems is that the parameters we are interested in estimating are hard to measure directly, and a crucial component of inverse modeling is using sparse data to evaluate many model parameters. To quantify uncertainty, stochastic methods such as the geostatistical approach to inverse problems and Kalman filtering are often used. The algorithms for the implementation of these methods were originally developed for small-size problems and their cost of implementation increases quickly with the size of the problem, which is usually defined by the number of observations and the number of unknowns. From a practical standpoint, it is critical to develop computational algorithms in linear algebra for which the computational effort, both in terms of storage and computational time, increases roughly linearly with the size of the problem. This is in contrast, for example, with matrix-vector products (resp. LU factorization) that scale quadratically (resp. cubically). This objective is achieved by tailoring methods to the structure of problems. We present an overview of the challenges and general approaches available for reducing computational cost and then present applications focusing on algorithms that use the hierarchical matrix approach. The hierarchical method reduces matrix vector products involving the dense covariance matrix from O(m^2) to O(m log m), where m is the number of unknowns. We illustrate the performance of our algorithm on a few applications, such as monitoring CO2 concentrations using crosswell seismic tomography.
Direct inversion methods for spectral amplitude modulation of femtosecond pulses.
Delgado-Aguillón, Jesús; Garduño-Mejía, Jesús; López-Téllez, Juan Manuel; Bruce, Neil C; Rosete-Aguilar, Martha; Román-Moreno, Carlos Jesús; Ortega-Martínez, Roberto
2014-04-01
In the present work, we applied an amplitude-spatial light modulator to shape the spectral amplitude of femtosecond pulses in a single step, without an iterative algorithm, by using an inversion method defined as the generalized retardance function. Additionally, we also present a single step method to shape the intensity profile defined as the influence matrix. Numerical and experimental results are presented for both methods.
3D stochastic inversion and joint inversion of potential fields for multi scale parameters
NASA Astrophysics Data System (ADS)
Shamsipour, Pejman
In this thesis we present the development of new techniques for the interpretation of potential field data (gravity and magnetic), which are the most widespread economic geophysical methods used for oil and mineral exploration. These new techniques help to address the long-standing issue with the interpretation of potential fields, namely the intrinsic non-uniqueness of the inversion of these types of data. The thesis takes the form of three papers (four including the Appendix), which have been published, or are soon to be published, in respected international journals. The purpose of the thesis is to introduce new methods based on 3D stochastic approaches for: 1) inversion of potential field data (magnetic), 2) multiscale inversion using surface and borehole data, and 3) joint inversion of geophysical potential field data. We first present a stochastic inversion method based on a geostatistical approach to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. We evaluate the uncertainty on the parameter model by using geostatistical unconditional simulations. The realizations are post-conditioned by cokriging to observation data. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. Then, we introduce an algorithm for multiscale inversion that is capable of inverting data on multiple supports. The method involves four main steps: i. upscaling of borehole parameters (density or susceptibility) to block parameters, ii. selection of blocks to use as constraints based on a threshold on kriging variance, iii. inversion of observation data with the selected block densities as constraints, and iv. downscaling of inverted parameters to small prisms. Two modes of application are presented: estimation and simulation. Finally, a novel
Efficiency of Pareto joint inversion of 2D geophysical data using global optimization methods
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2016-04-01
Pareto joint inversion of two or more sets of data is a promising new tool of modern geophysical exploration. In the first stage of our investigation we created software enabling execution of forward solvers of two geophysical methods (2D magnetotelluric and gravity) as well as inversion with the possibility of constraining the solution with seismic data. In the algorithm solving the MT forward problem, Helmholtz's equations, the finite element method, and Dirichlet boundary conditions were applied. The gravity forward solver was based on Talwani's algorithm. To limit the dimensionality of the solution space we decided to describe the model as sets of polygons, using the Sharp Boundary Interface (SBI) approach. The main inversion engine was created using a Particle Swarm Optimization (PSO) algorithm adapted to handle two or more target functions and to prevent acceptance of solutions that are unrealistic or incompatible with the Pareto scheme. Each inversion run generates a single Pareto solution, which can be added to the Pareto front. The PSO inversion engine was parallelized using the OpenMP standard, which enables the code to run on an essentially unlimited number of threads at once, significantly decreasing the computing time of the inversion process. Furthermore, computing efficiency increases with the number of PSO iterations. In this contribution we analyze the efficiency of the created software solution, taking into consideration the details of the chosen global optimization engine used as the main joint minimization engine. Additionally we study the scale of the possible decrease of computational time offered by different methods of parallelization applied to both the forward solvers and the inversion algorithm. All tests were done for 2D magnetotelluric and gravity data based on real geological media. The obtained results show that even for relatively simple mid-range computational infrastructure the proposed solution of the inversion problem can be applied in practice and used for real-life problems of geophysical inversion and interpretation.
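Rejecting solutions "incompatible with the Pareto scheme" amounts to a non-dominance test over the vector of target functions. A minimal sketch of the filter that decides which candidates belong on the Pareto front (for two minimized misfits, e.g. MT and gravity; the dominance rule itself is standard, not specific to this paper):

```python
import numpy as np

def pareto_front(costs):
    """Boolean mask of non-dominated rows (minimization).
    Row j dominates row i if it is no worse in every objective
    and strictly better in at least one."""
    n = len(costs)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i]):
                keep[i] = False
                break
    return keep
```

Each PSO run contributes one candidate point in misfit space; applying a filter like this to the accumulated candidates yields the Pareto front described in the abstract.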
A spatiotemporal dynamic distributed solution to the MEG inverse problem
Lamus, Camilo; Hämäläinen, Matti S.; Temereanca, Simona; Brown, Emery N.; Purdon, Patrick L.
2012-01-01
MEG/EEG are non-invasive imaging techniques that record brain activity with high temporal resolution. However, estimation of brain source currents from surface recordings requires solving an ill-conditioned inverse problem. Converging lines of evidence in neuroscience, from neuronal network models to resting-state imaging and neurophysiology, suggest that cortical activation is a distributed spatiotemporal dynamic process, supported by both local and long-distance neuroanatomic connections. Because spatiotemporal dynamics of this kind are central to brain physiology, inverse solutions could be improved by incorporating models of these dynamics. In this article, we present a model for cortical activity based on nearest-neighbor autoregression that incorporates local spatiotemporal interactions between distributed sources in a manner consistent with neurophysiology and neuroanatomy. We develop a dynamic Maximum a Posteriori Expectation-Maximization (dMAP-EM) source localization algorithm for estimation of cortical sources and model parameters based on the Kalman Filter, the Fixed Interval Smoother, and the EM algorithms. We apply the dMAP-EM algorithm to simulated experiments as well as to human experimental data. Furthermore, we derive expressions to relate our dynamic estimation formulas to those of standard static models, and show how dynamic methods optimally assimilate past and future data. Our results establish the feasibility of spatiotemporal dynamic estimation in large-scale distributed source spaces with several thousand source locations and hundreds of sensors, with resulting inverse solutions that provide substantial performance improvements over static methods. PMID:22155043
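The forward (filtering) pass at the heart of the dMAP-EM algorithm is the standard Kalman recursion. A minimal scalar-state sketch of that recursion (the fixed-interval smoother and EM parameter updates from the article are omitted, and all system matrices are scalars for clarity):

```python
import numpy as np

def kalman_filter(y, F, Q, H, R, x0, P0):
    """Minimal linear Kalman filter for a scalar state x_t:
    x_t = F x_{t-1} + w_t,  w_t ~ N(0, Q)
    y_t = H x_t + v_t,      v_t ~ N(0, R).
    Returns the filtered state estimates."""
    x, P = x0, P0
    xs = []
    for yt in y:
        # predict step
        x, P = F * x, F * P * F + Q
        # update step with Kalman gain K
        K = P * H / (H * P * H + R)
        x = x + K * (yt - H * x)
        P = (1 - K * H) * P
        xs.append(x)
    return np.array(xs)
```

In the article this recursion runs over a distributed source space with thousands of states, which is why the accompanying computational machinery matters.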
Goal Directed Model Inversion: A Study of Dynamic Behavior
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Compton, Michael; Raghavan, Bharathi; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Goal Directed Model Inversion (GDMI) is an algorithm designed to generalize supervised learning to the case where target outputs are not available to the learning system. The output of the learning system becomes the input to some external device or transformation, and only the output of this device or transformation can be compared to a desired target. The fundamental driving mechanism of GDMI is to learn from success. Given that a wrong outcome is achieved, one notes that the action that produced that outcome "would have been right if the outcome had been the desired one." The algorithm then proceeds as follows: (1) store the action that produced the wrong outcome as a "target"; (2) redefine the wrong outcome as a desired goal; (3) submit the new desired goal to the system; (4) compare the new action with the target action and modify the system by using a suitable algorithm for credit assignment (backpropagation in our example); (5) resubmit the original goal. Prior publications by our group in this area focused on demonstrating empirical results based on the inverse kinematic problem for a simulated robotic arm. In this paper we apply the inversion process to much simpler analytic functions in order to elucidate the dynamic behavior of the system and to determine the sensitivity of the learning process to various parameters. This understanding will be necessary for the acceptance of GDMI as a practical tool.
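A hypothetical one-dimensional sketch of steps (1)-(5): a one-weight linear learner proposes actions, and f(a) = 2a stands in for the external device (both are illustrative assumptions, not the paper's robotic-arm setup or network):

```python
def gdmi(goal, steps=500, lr=0.1):
    f = lambda a: 2.0 * a        # external device, unknown to the learner
    w = 0.1                      # learner: proposed action = w * goal
    for _ in range(steps):
        action = w * goal        # step (5): the original goal is resubmitted each pass
        outcome = f(action)      # usually not the desired goal...
        # steps (1)-(4): ...but (outcome -> action) is a correct
        # training pair, so take one gradient step on the squared
        # error toward it (standing in for backpropagation)
        pred = w * outcome
        w -= lr * (pred - action) * outcome
    return w * goal              # final action proposed for the goal
```

In this toy setting the learner converges to the inverse of the device: the returned action a satisfies f(a) ≈ goal, without the target action ever being supplied.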
Inverse problems in statistical mechanics and photonics
NASA Astrophysics Data System (ADS)
Rechtsman, Mikael C.
In an inverse problem, one seeks the nature of the components of a system with known (or targeted) resultant behavior---perhaps opposite to the traditional trajectory of problem solving in physical research. In this thesis, a number of inverse problems in two categories are considered. In the first, in many-body classical systems with isotropic two-body interactions, we target uncharacteristic, technologically relevant thermodynamic behavior. In the second, we consider two problems in electromagnetic scattering and photonics. Increasingly, experimentalists have been able to tailor isotropic interactions between micron-scale colloidal spheres, allowing for the possibility of targeted self-assembly of a desired crystal structure upon freezing. Self-assembly of certain structures, the diamond lattice in particular, has a great deal of technological potential in the fields of optoelectronics and photonics. We present here new computational algorithms that find isotropic interaction potentials that yield targeted ground state crystal structures. These algorithms are applied to find interaction potentials for the honeycomb lattice (which is the two-dimensional analog of diamond), the square lattice, the simple cubic lattice, the wurtzite as well as the diamond lattice. We also present an isotropic interaction potential that gives rise to negative thermal expansion, a macroscopic behavior that has previously been associated with a highly anisotropic microscopic mechanism. Furthermore, we show that systems with only isotropic interactions may exhibit a negative Poisson's ratio, as long as they are under tension. We derive linear constraints involving the derivatives of the pair potential that gives rise to this behavior. In a study of electromagnetic scattering in random dielectric two-component composites, we use a strong-contrast perturbation expansion to obtain analytic expressions for the effective dielectric tensor to arbitrary order in the dielectric contrast between
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm² reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
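The source-encoding trick can be illustrated on a linear toy problem: combining all sources with a random ±1 encoding vector yields one "supershot" whose gradient is an unbiased estimate of the full-data gradient, so each stochastic step costs as much as a single source. The linear operators below are illustrative stand-ins for the paper's wave-equation modeling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multi-source linear problem: each "source" i
# contributes data d_i = A_i @ m_true.
n_src, n_dat, n_par = 8, 20, 5
A = rng.normal(size=(n_src, n_dat, n_par))
m_true = rng.normal(size=n_par)
d = A @ m_true                                # shape (n_src, n_dat)

m = np.zeros(n_par)
for _ in range(2000):
    w = rng.choice([-1.0, 1.0], size=n_src)   # random encoding vector
    A_enc = np.tensordot(w, A, axes=1)        # one encoded "supershot"
    d_enc = np.tensordot(w, d, axes=1)        # encoded measurement data
    grad = A_enc.T @ (A_enc @ m - d_enc)      # cost of one source, not n_src
    m -= 0.002 * grad                         # stochastic gradient step
```

Because cross terms between sources average to zero over the random signs, the iterates converge to the same model a full (n_src-times more expensive) gradient descent would find.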
FBP Algorithms for Attenuated Fan-Beam Projections
You, Jiangsheng; Zeng, Gengsheng L.; Liang, Zhengrong
2005-01-01
A filtered backprojection (FBP) reconstruction algorithm for attenuated fan-beam projections has been derived based on Novikov’s inversion formula. The derivation uses a common transformation between parallel-beam and fan-beam coordinates. The filtering is shift-invariant. Numerical evaluation of the FBP algorithm is presented as well. As a special application, we also present a shift-invariant FBP algorithm for fan-beam SPECT reconstruction with uniform attenuation compensation. Several other fan-beam reconstruction algorithms are also discussed. In the attenuation-free case, our algorithm reduces to the conventional fan-beam FBP reconstruction algorithm. PMID:16570111
Kinematics and stellar populations of low-luminosity early-type galaxies in the Abell 496 cluster
NASA Astrophysics Data System (ADS)
Chilingarian, I. V.; Cayatte, V.; Durret, F.; Adami, C.; Balkowski, C.; Chemin, L.; Laganá, T. F.; Prugniel, P.
2008-07-01
Context: Studies of the morphology and stellar populations of low-luminosity early-type galaxies in clusters have until now been limited to a few relatively nearby clusters such as Virgo or Fornax. Scenarios for the formation and evolution of dwarf galaxies in clusters are therefore not well constrained. Aims: We investigate here the morphology and stellar populations of low-luminosity galaxies in the relaxed richness class 1 cluster Abell 496 (z = 0.0330). Methods: Deep multiband imaging obtained with the CFHT Megacam allowed us to select a sample of faint galaxies, defined here as objects with magnitudes 18 < r' < 22 mag within a 1.2 arcsec fibre (-18.8 < MB < -15.1 mag). We observed 118 galaxies spectroscopically with the ESO VLT FLAMES/Giraffe spectrograph with a resolving power R = 6300. We present structural analysis and colour maps for the 48 galaxies belonging to the cluster. We fit the spectra of 46 objects with PEGASE.HR synthetic spectra to estimate the ages, metallicities, and velocity dispersions. We estimated possible biases by similarly analysing spectra of ~1200 early-type galaxies from the Sloan Digital Sky Survey Data Release 6 (SDSS DR6). We computed values of α/Fe abundance ratios from the measurements of Lick indices. We briefly discuss effects of the fixed aperture size on the measurements. Results: For the first time, high-precision estimates of stellar population properties have been obtained for a large sample of faint galaxies in a cluster, allowing for the extension of relations between stellar populations and internal kinematics to the low-velocity dispersion regime. We have revealed a peculiar population of elliptical galaxies in the core of the cluster, resembling massive early-type galaxies in their stellar population properties and velocity dispersions, but having luminosities about 2 mag fainter. Conclusions: External mechanisms of gas removal (ram pressure stripping and gravitational harassment) are more likely to have occurred than
Kirkpatrick, C. C.; McNamara, B. R.; Kazemzadeh, F.; Cavagnolo, K. W.; Rafferty, D. A.; Bîrzan, L.; Nulsen, P. E. J.; Wise, M. W.; Gitti, M.
2009-05-20
The brightest cluster galaxy (BCG) in the Abell 1664 cluster is unusually blue and is forming stars at a rate of ~23 M_sun yr^-1. The BCG is located within 5 kpc of the X-ray peak, where the cooling time of 3.5 x 10^8 yr and entropy of 10.4 keV cm^2 are consistent with other star-forming BCGs in cooling flow clusters. The center of A1664 has an elongated, 'bar-like' X-ray structure whose mass is comparable to the mass of molecular hydrogen, ~10^10 M_sun, in the BCG. We show that this gas is unlikely to have been stripped from interloping galaxies. The cooling rate in this region is roughly consistent with the star formation rate, suggesting that the hot gas is condensing onto the BCG. We use the scaling relations of Bîrzan et al. to show that the active galactic nucleus (AGN) is underpowered compared to the central X-ray cooling luminosity by roughly a factor of three. We suggest that A1664 is experiencing rapid cooling and star formation during a low state of an AGN feedback cycle that regulates the rates of cooling and star formation. Modeling the emission as a single-temperature plasma, we find that the metallicity peaks 100 kpc from the X-ray center, resulting in a central metallicity dip. However, a multi-temperature cooling flow model improves the fit to the X-ray emission and is able to recover the expected, centrally peaked metallicity profile.
NASA Astrophysics Data System (ADS)
Terlevich, Roberto; Melnick, Jorge; Terlevich, Elena; Chávez, Ricardo; Telles, Eduardo; Bresolin, Fabio; Plionis, Manolis; Basilakos, Spyros; Fernández Arenas, David; González Morán, Ana Luisa; Díaz, Ángeles I.; Aretxaga, Itziar
2016-08-01
ID11 is an actively star-forming, extremely compact galaxy and Lyα emitter at z = 3.117 that is gravitationally magnified by a factor of ~17 by the cluster of galaxies Hubble Frontier Fields AS1063. The observed properties of this galaxy resemble those of low luminosity HII galaxies or giant HII regions such as 30 Doradus in the Large Magellanic Cloud. Using the tight correlation between the Balmer-line luminosities and the width of the emission lines (typically L(Hβ) - σ(Hβ)), which is valid for HII galaxies and giant HII regions, to estimate their total luminosity, we are able to measure the lensing amplification of ID11. We obtain an amplification of 23 ± 11 that is consistent within errors with the value of ~17 predicted by the best lensing models of the massive cluster Abell S1063. We also compiled, from the literature, luminosities and velocity dispersions for a set of lensed compact star-forming regions. There is more scatter in the L-σ correlation for these lensed systems, but on the whole the results tend to support the lensing model estimates of the magnification. Our result indicates that the amplification can be independently measured using the L - σ relation in lensed giant HII regions or HII galaxies. It also supports the suggestion, even if lensing is model dependent, that the L - σ relation is valid for low luminosity high-z objects. Ad hoc observations of lensed star-forming systems are required to determine the lensing amplification accurately.
Global inversion for anisotropy during full-waveform inversion
NASA Astrophysics Data System (ADS)
Debens, H. A.; Warner, M.; Umpleby, A.
2015-12-01
Full-waveform inversion (FWI) is a powerful tool for quantitative estimation of high-resolution, high-fidelity models of subsurface seismic parameters, typically P-wave velocity. The solution to FWI's nonlinear inverse problem is obtained via an iterative series of linearized local updates to a start model, assuming this model lies within the basin of attraction of the global minimum. Thanks to many successful published applications to three-dimensional (3D) field datasets, its advance has been rapid and driven in large part by the oil and gas industry. The consideration of seismic anisotropy during FWI is of vital importance, as it influences both the kinematics and dynamics of seismic waveforms. If not appropriately taken into account, inadequacies in the anisotropy model are likely to manifest as significant error in the recovered velocity model. Conventionally, anisotropic FWI employs either an a priori anisotropy model, held fixed during FWI, or it uses a multi-parameter local inversion scheme to recover the anisotropy as part of the FWI; both of these methods can be problematic. Constructing an anisotropy model prior to FWI often involves intensive (and hence expensive) iterative procedures, such as travel-time tomography or moveout velocity analysis. On the other hand, introducing multiple parameters to FWI itself increases the complexity of what is already an underdetermined inverse problem. We propose that global rather than local FWI can be used to recover the long-wavelength acoustic anisotropy model, and that this can then be followed by more-conventional local FWI to recover the detailed model. We validate this approach using a full 3D field dataset, demonstrating that it avoids problems associated with crosstalk that can bedevil local inversion schemes, and reconciles well with in situ borehole measurements. Although our approach includes a global inversion for anisotropy, it is nonetheless affordable and practical for 3D field data.
Pareto joint inversion of 2D magnetotelluric and gravity data
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were used to set constraints for the inversion. A Sharp Boundary Interface (SBI) approach, with the model described by a set of polygons, was used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO) algorithm, adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a stochastic global optimization method, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed approach to joint inversion has several advantages. First, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Second, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the dimensionality of the problem but also makes constraining the solution easier. At this stage of the work, the approach was tested using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where
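The Pareto-dominance bookkeeping that such a scheme relies on can be sketched in a few lines. The following is a minimal illustrative toy, not the project's code: two quadratic misfits stand in for the MT and gravity target functions, and a bare-bones swarm maintains an external archive of non-dominated models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the MT and gravity misfit functions (assumptions
# for the demo, not the project's actual forward solvers).
def misfit_mt(m):
    return float(np.sum((m - 1.0) ** 2))

def misfit_grav(m):
    return float(np.sum((m + 1.0) ** 2))

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_pso(n_particles=40, n_dim=2, n_iter=100):
    x = rng.uniform(-3, 3, (n_particles, n_dim))   # particle positions (models)
    v = np.zeros_like(x)                           # particle velocities
    archive = []                                   # external non-dominated archive
    for _ in range(n_iter):
        f = [(misfit_mt(p), misfit_grav(p)) for p in x]
        # Update the archive: keep only mutually non-dominated proposals.
        for p, fp in zip(x, f):
            if not any(dominates(fa, fp) for _, fa in archive):
                archive = [(pa, fa) for pa, fa in archive if not dominates(fp, fa)]
                archive.append((p.copy(), fp))
        # Pull each particle toward a randomly chosen archive member.
        for i in range(n_particles):
            guide = archive[rng.integers(len(archive))][0]
            v[i] = 0.5 * v[i] + rng.random() * (guide - x[i]) + 0.1 * rng.normal(size=n_dim)
            x[i] += v[i]
    return archive

front = pareto_pso()   # list of (model, (mt_misfit, grav_misfit)) pairs
```

The archive returned after the run approximates the Pareto front; picking a final model from it is exactly the "choice of the final solution" step that the abstract describes.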
Inverse lithography source optimization via compressive sensing.
Song, Zhiyang; Ma, Xu; Gao, Jie; Wang, Jie; Li, Yanqiu; Arce, Gonzalo R
2014-06-16
Source optimization (SO) has emerged as a key technique for improving lithographic imaging over a range of process variations. Current SO approaches are pixel-based, where the source pattern is designed by solving a quadratic optimization problem using gradient-based algorithms or by solving a linear programming problem. Most of these methods, however, are either computationally intensive or yield a process window (PW) that could be extended further. This paper applies the rich theory of compressive sensing (CS) to develop an efficient and robust SO method. In order to accelerate the SO design, the source optimization is formulated as an underdetermined linear problem, where the number of equations can be much smaller than the number of source variables. Assuming the source pattern is sparse on a certain basis, the SO problem is transformed into an l1-norm image reconstruction problem based on CS theory. The linearized Bregman algorithm is applied to synthesize the sparse optimal source pattern on a representation basis, which effectively improves the source manufacturability. It is shown that the proposed linear SO formulation is more effective for improving the contrast of the aerial image than the traditional quadratic formulation. The proposed SO method shows that sparse regularization in inverse lithography can indeed extend the PW of lithography systems. A set of simulations and analysis demonstrates the superiority of the proposed SO method over the traditional approaches.
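The l1-norm reconstruction step can be illustrated with the textbook linearized Bregman iteration on a generic underdetermined system. Everything below (the random Gaussian matrix, the sparsity level, the parameters mu and delta) is an assumption for the demo and is unrelated to the actual lithographic imaging operator.

```python
import numpy as np

rng = np.random.default_rng(1)

def shrink(v, mu):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=50.0, delta=None, n_iter=5000):
    """Approximately solve min ||x||_1 s.t. Ax = b via linearized Bregman."""
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size for stability
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)        # dual ascent on the residual
        x = delta * shrink(v, mu)     # sparse primal update
    return x

# Sparse ground truth and underdetermined measurements (m < n).
n, m, k = 200, 60, 5
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.normal(0.0, 2.0, k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true

x_rec = linearized_bregman(A, b)
```

With a Gaussian sensing matrix and only 5 nonzeros out of 200 unknowns, the iteration recovers a sparse solution consistent with the measurements, which is the mechanism the abstract exploits to make the source pattern manufacturable.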
Inversion strategies for visco-acoustic waveform inversion
NASA Astrophysics Data System (ADS)
Kamei, R.; Pratt, R. G.
2013-08-01
Visco-acoustic waveform inversion can potentially yield quantitative images of the distribution of both velocity and attenuation parameters from seismic data. Intrinsic P-wave attenuation has been of particular interest, but has also proven challenging. Frequency-domain inversion allows attenuation and velocity relations to be easily incorporated, and allows a natural multiscale approach. The Laplace-Fourier approach extends this to allow the natural damping of waveforms to enhance early arrivals. Nevertheless, simultaneous inversion of velocity and attenuation leads to significant `cross-talk' between the resulting images, reflecting a lack of parameter resolution and indicating the need for pre-conditioning and regularization of the inverse problem. We analyse the cross-talk issue by partitioning the inversion parameters into two classes: the velocity parameter class and the attenuation parameter class. Both parameters are defined at a reference frequency, and a dispersion relation is assumed that describes these parameters at any other frequency. We formulate the model gradients at a forward modelling frequency, and convert them to the reference frequency by employing the Jacobian of the coordinate change represented by the dispersion relation. We show that at a given modelling frequency, the Fréchet derivatives corresponding to these two parameter classes differ only by a 90° phase shift, meaning that the magnitudes of the resulting model updates will be unscaled, and will not reflect the expected magnitudes in realistic (Q-1 ≪ 1) media. Due to the lack of scaling, cross-talk will be enhanced by poor subsurface illumination, by errors in kinematics, and by data noise. To solve these issues, we introduce an attenuation scaling term (the inverse of a penalty term) that is used to pre-condition the gradient by controlling the magnitudes of the updates to the attenuation parameters. Initial results from a suite of synthetic cross-hole tests using a three
A hierarchical Bayesian-MAP approach to inverse problems in imaging
NASA Astrophysics Data System (ADS)
Raj, Raghu G.
2016-07-01
We present a novel approach to inverse problems in imaging based on a hierarchical Bayesian-MAP (HB-MAP) formulation. In this paper we specifically focus on the difficult and basic inverse problem of multi-sensor (tomographic) imaging wherein the source object of interest is viewed from multiple directions by independent sensors. Given the measurements recorded by these sensors, the problem is to reconstruct the image (of the object) with a high degree of fidelity. We employ a probabilistic graphical modeling extension of the compound Gaussian distribution as a global image prior within a hierarchical Bayesian inference procedure. Since the prior employed by our HB-MAP algorithm is general enough to subsume a wide class of priors, including those typically employed in compressive sensing (CS) algorithms, the HB-MAP algorithm offers a vehicle to extend the capabilities of current CS algorithms to include truly global priors. After rigorously deriving the regression algorithm for solving our inverse problem from first principles, we demonstrate the performance of the HB-MAP algorithm on Monte Carlo trials and on real empirical data (natural scenes). In all cases we find that our algorithm outperforms previous approaches in the literature, including filtered back-projection and a variety of state-of-the-art CS algorithms. We conclude with directions of future research emanating from this work.
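Stripped of its hierarchy, a single MAP step under a fixed Gaussian prior reduces to ridge-regularized least squares with a closed-form solution. The sketch below is a much-reduced toy under that assumption; the random linear operator stands in for the multi-sensor projection model, which the paper treats far more generally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined linear forward model (an assumption for
# illustration; the paper's tomographic operator is more complex).
n, m = 50, 30
A = rng.normal(size=(m, n))
x_true = np.convolve(rng.normal(size=n), np.ones(5) / 5, mode="same")  # smooth object
b = A @ x_true + 0.05 * rng.normal(size=m)                             # noisy data

def map_estimate(A, b, lam):
    """Posterior mode under a Gaussian likelihood and zero-mean Gaussian
    prior: argmin ||Ax - b||^2 + lam * ||x||^2, solved in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

x_map = map_estimate(A, b, lam=1.0)
```

In the hierarchical setting the prior precision (here the scalar `lam`) is itself given a prior and inferred from the data, which is what lets the reconstruction localize features instead of smoothing uniformly.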
Transitionless driving on adiabatic search algorithm
Oh, Sangchul; Kais, Sabre
2014-12-14
We study the quantum dynamics of the adiabatic search algorithm via its equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find that the non-adiabatic transition probability crosses over from exponential decay at short running times to inverse-square decay at asymptotically long running times. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can change the non-adiabatic transition probability from inverse-square decay to inverse-fourth-power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.
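The underlying adiabatic search dynamics can be sketched numerically, assuming the standard interpolated projector Hamiltonian H(s) = (1-s)(I - |psi0><psi0|) + s(I - |m><m|) with a linear schedule; the search-space size N, the marked index, and the two run times below are arbitrary demo choices.

```python
import numpy as np
from scipy.linalg import expm

N = 16                                   # search space size (demo choice)
marked = 3                               # index of the marked item
psi0 = np.ones(N) / np.sqrt(N)           # uniform superposition (start state)
m = np.zeros(N); m[marked] = 1.0

H0 = np.eye(N) - np.outer(psi0, psi0)    # initial Hamiltonian
H1 = np.eye(N) - np.outer(m, m)          # final (problem) Hamiltonian

def run(T, steps=2000):
    """Integrate the Schrodinger equation with a linear schedule s = t/T,
    using a piecewise-constant Hamiltonian per time step."""
    dt = T / steps
    state = psi0.astype(complex)
    for k in range(steps):
        s = (k + 0.5) / steps
        H = (1 - s) * H0 + s * H1
        state = expm(-1j * H * dt) @ state
    return abs(state[marked]) ** 2       # success probability

p_slow = run(T=200.0)   # well inside the adiabatic regime (min gap ~ 1/sqrt(N))
p_fast = run(T=1.0)     # far too fast: large non-adiabatic transitions
```

Running slowly relative to the inverse square of the minimum gap leaves the state in the instantaneous ground state, so the marked item is found with high probability; a fast sweep leaves most of the amplitude behind, which is the non-adiabatic transition the abstract quantifies.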
Thermoelectric properties of inverse opals
NASA Astrophysics Data System (ADS)
Mahan, G. D.; Poilvert, N.; Crespi, V. H.
2016-02-01
Rayleigh's method [Philos. Mag. Ser. 5 34, 481 (1892)] is used to solve the classical thermoelectric equations in inverse opals. His theory predicts that in an inverse opal with periodic holes, the Seebeck coefficient and the figure of merit are identical to those of the bulk material. We also provide a major revision to Rayleigh's method by using the electrochemical potential, instead of the electrostatic potential, as the key variable. We also show that in some cases the thermal boundary resistance is important in the effective thermal conductivity.
Darwin's "strange inversion of reasoning".
Dennett, Daniel
2009-06-16
Darwin's theory of evolution by natural selection unifies the world of physics with the world of meaning and purpose by proposing a deeply counterintuitive "inversion of reasoning" (according to a 19th century critic): "to make a perfect and beautiful machine, it is not requisite to know how to make it" [MacKenzie RB (1868) (Nisbet & Co., London)]. Turing proposed a similar inversion: to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is. Together, these ideas help to explain how we human intelligences came to be able to discern the reasons for all of the adaptations of life, including our own.
Population inversion by chirped pulses
Lu Tianshi
2011-09-15
In this paper, we analyze the condition for complete population inversion by a chirped pulse over a finite duration. The nonadiabatic transition probability is mapped in the two-dimensional parameter space of coupling strength and detuning amplitude. Asymptotic forms of the probability are derived by the interference of nonadiabatic transitions for sinusoidal and triangular pulses. The qualitative difference between the maps for the two types of pulses is accounted for. The map is used for the design of stable inversion pulses under specific accuracy thresholds.
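As a much-simplified cousin of the sinusoidal and triangular pulses analyzed above, a linear-chirp (Landau-Zener) sweep already shows how the coupling strength controls the completeness of the inversion. All parameter values below are demo assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

sz = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli z
sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli x

def inversion_probability(omega, alpha=1.0, T=40.0, steps=8000):
    """Evolve a two-level system through a linear chirp
    H(t) = (alpha*t/2)*sz + omega*sx from t = -T to t = +T and return
    the population transferred out of the initial diabatic state."""
    dt = 2 * T / steps
    state = np.array([1.0, 0.0], dtype=complex)   # start in diabatic |0>
    for k in range(steps):
        t = -T + (k + 0.5) * dt
        H = 0.5 * alpha * t * sz + omega * sx
        state = expm(-1j * H * dt) @ state
    return abs(state[1]) ** 2

p_strong = inversion_probability(omega=2.0)   # adiabatic: near-complete inversion
p_weak = inversion_probability(omega=0.1)     # nearly diabatic: little transfer
```

Strong coupling relative to the chirp rate makes the sweep adiabatic and the inversion essentially complete, while weak coupling leaves the population behind; mapping this transfer over coupling strength and detuning amplitude is the two-dimensional map the abstract constructs for finite-duration pulses.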
Multiphase inverse modeling: An Overview
Finsterle, S.
1998-03-01
Inverse modeling is a technique to derive model-related parameters from a variety of observations made on hydrogeologic systems, from small-scale laboratory experiments to field tests to long-term geothermal reservoir responses. If properly chosen, these observations contain information about the system behavior that is relevant to the performance of a geothermal field. Estimating model-related parameters and reducing their uncertainty is an important step in model development, because errors in the parameters constitute a major source of prediction errors. This paper contains an overview of inverse modeling applications using the ITOUGH2 code, demonstrating the possibilities and limitations of a formalized approach to the parameter estimation problem.
Sparsity in Bayesian inversion of parametric operator equations
NASA Astrophysics Data System (ADS)
Schillings, Cl; Schwab, Ch
2014-06-01
We establish posterior sparsity in Bayesian inversion for systems governed by operator equations with distributed parameter uncertainty subject to noisy observation data δ. We generalize the results and algorithms introduced in C Schillings and C Schwab (2013 Inverse Problems 29 065011) for the particular case of scalar diffusion problems with random coefficients to broad classes of forward problems, including general elliptic and parabolic operators with uncertain coefficients, and in random domains. For countably parametric, deterministic representations of uncertain parameters in the forward problem, which belong to a specified sparsity class, we quantify analytic regularity of the likewise countably parametric, deterministic Bayesian posterior density with respect to a uniform prior on the uncertain parameter sequences and prove that the parametric, deterministic density of the Bayesian posterior belongs to the same sparsity class. Generalizing C Schillings and C Schwab (2013 Inverse Problems 29 065011) and C Schwab and A M Stuart (2012 Inverse Problems 28 045003) the forward problems are converted to countably parametric, deterministic operator equations. Computational Bayesian inversion amounts to numerically evaluating expectations of quantities of interest (QoIs) under the Bayesian posterior, conditional on noisy observation data. Our results imply, on the one hand, sparsity of Legendre (generalized) polynomial chaos expansions of the density of the Bayesian posterior with respect to uniform prior and, on the other hand, convergence rates for data-adaptive Smolyak integration algorithms for computational Bayesian estimation, which are independent of the dimension of the parameter space. We prove, mathematically and computationally, that for uncertain inputs with sufficient sparsity convergence rates are, in particular, superior to Markov chain Monte-Carlo sampling of the posterior, in terms of the number N of instances of the parametric forward problem to
VLSI architectures for computing multiplications and inverses in GF(2^m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.
1983-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
VLSI architectures for computing multiplications and inverses in GF(2^m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.
1985-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
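The role of cheap squarings in such an inversion architecture can be illustrated in software. The sketch below uses a polynomial-basis representation of GF(2^8) with the AES reduction polynomial, an illustrative choice only (the architectures above use a normal basis, in which each squaring is just a cyclic shift of the coordinate vector), and computes inverses via Fermat's identity a^(2^m - 2) = a^(-1).

```python
# GF(2^8) with reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B, the
# AES polynomial -- an illustrative choice of field representation).

def gf_mul(a, b, poly=0x11B, m=8):
    """Carry-less multiply with reduction modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_inv(a, m=8):
    """Invert via Fermat: a^(2^m - 2) = a^(-1), by square-and-multiply.
    In a normal basis each squaring is a cyclic shift of the coordinate
    vector, which is what makes the pipelined inverter cheap."""
    result = 1
    base = a
    e = (1 << m) - 2
    while e:
        if e & 1:
            result = gf_mul(result, base)
        base = gf_mul(base, base)   # the cheap step in normal basis
        e >>= 1
    return result

assert gf_mul(0x53, gf_inv(0x53)) == 1   # every nonzero element has an inverse
```

The exponentiation uses m - 1 squarings plus a handful of multiplications, so a representation with nearly free squaring (the normal basis) directly shrinks the inversion pipeline.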
Andersson, Karl E.; Peterson, J.R.; Madejski, G.M.; /SLAC /KIPAC, Menlo Park
2007-04-17
We propose a new Monte Carlo method to study extended X-ray sources with the European Photon Imaging Camera (EPIC) aboard XMM-Newton. The Smoothed Particle Inference (SPI) technique, described in a companion paper, is applied here to the EPIC data for the clusters of galaxies Abell 1689, Centaurus and RXJ 0658-55 (the 'bullet cluster'). We aim to show the advantages of this method of simultaneous spectral-spatial modeling over traditional X-ray spectral analysis. In Abell 1689 we confirm our earlier findings about structure in the temperature distribution and produce a high resolution temperature map. We also confirm our findings about velocity structure within the gas. In the bullet cluster, RXJ 0658-55, we produce the highest resolution te