Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM]
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps
Adam Brandt
2015-11-15
This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential for nitrate concentrations to exceed regulatory thresholds. However, these maps are obtained from only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables, which do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in southern Spain, and the advantages of the compositional cokriging approach were demonstrated.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
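As an illustration of the estimators named in this abstract, the following Python sketch (not the authors' R-based WPS tool) contrasts a non-parametric kernel density estimate of landslide area with a maximum-likelihood inverse-gamma fit; the synthetic areas and all settings are placeholders.

```python
# Python sketch (not the authors' R/OGC WPS tool): compare a non-parametric KDE of
# landslide area with a maximum-likelihood inverse-gamma fit on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
areas = rng.pareto(1.4, size=500) * 1e3 + 100.0        # synthetic landslide areas (m^2)

# Non-parametric estimate: Gaussian KDE on log10(area), convenient for heavy-tailed data.
log_area = np.log10(areas)
kde = stats.gaussian_kde(log_area)
log_grid = np.linspace(log_area.min(), log_area.max(), 200)
pdf_kde = kde(log_grid)                                 # density per unit log10(area)

# Parametric estimate: maximum-likelihood fit of an inverse-gamma model for the areas.
shape, loc, scale = stats.invgamma.fit(areas, floc=0.0)
area_grid = np.logspace(log_area.min(), log_area.max(), 200)
pdf_mle = stats.invgamma.pdf(area_grid, shape, loc=loc, scale=scale)

print(f"inverse-gamma MLE: shape={shape:.2f}, scale={scale:.1f}")
```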
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
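The two steps described above can be sketched as follows; this Python fragment uses Gaussian kernel density estimates for two illustrative data sets (past vents and fault points) and combines them with hypothetical expert weights, none of which come from the study itself.

```python
# Hedged sketch of (i) Gaussian kernel density maps per data set and (ii) their weighted
# linear combination; data, bandwidths and weights are illustrative only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
vents  = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(40, 2)).T   # (2, N) past vent coordinates, km
faults = rng.normal(loc=[1.0, 0.5], scale=2.0, size=(80, 2)).T   # (2, M) points sampled along faults

xx, yy = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
grid = np.vstack([xx.ravel(), yy.ravel()])

density_vents  = gaussian_kde(vents)(grid).reshape(xx.shape)
density_faults = gaussian_kde(faults)(grid).reshape(xx.shape)

weights = {"vents": 0.7, "faults": 0.3}                 # hypothetical expert weights
combined = weights["vents"] * density_vents + weights["faults"] * density_faults
combined /= combined.sum() * (xx[0, 1] - xx[0, 0]) ** 2  # renormalize to a spatial PDF
```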
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
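The "stretched exponential" behaviour referred to above can be written generically as below; the constant c and the exponent α are problem-dependent and are not values taken from this paper (α = 2 would recover the Gaussian case).

```latex
% Generic stretched-exponential tail for the concentration-fluctuation PDF; c and alpha
% are problem-dependent constants, not values reported in the paper.
P(\theta) \;\propto\; \exp\!\left(-c\,|\theta|^{\alpha}\right), \qquad 0 < \alpha < 2 .
```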
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
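A minimal, hypothetical sketch of the optimization idea described in this abstract is shown below. It is not the SISA program: the reflection list, amplitudes and genetic-algorithm settings are invented, and the map is built on a coarse synthetic grid purely to show skewness being used as the target function.

```python
# Hypothetical sketch: optimize the phases of a few strong reflections with a small genetic
# algorithm, scoring each candidate by the skewness of the resulting density map.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)
shape = (32, 32, 32)
hkl = [tuple(rng.integers(0, 8, size=3)) for _ in range(20)]   # indices of "strong" reflections
amplitudes = rng.gamma(2.0, 50.0, size=len(hkl))               # synthetic |F| for those reflections

def density_skewness(phases):
    """Fill a reciprocal-space grid with F = |F|*exp(i*phi), FFT to real space, return skewness."""
    F = np.zeros(shape, dtype=complex)
    for (h, k, l), amp, phi in zip(hkl, amplitudes, phases):
        F[h, k, l] = amp * np.exp(1j * phi)
    rho = np.fft.ifftn(F).real
    return skew(rho.ravel())

# Very small genetic algorithm: truncation selection, uniform crossover, sparse mutation.
pop = rng.uniform(0, 2 * np.pi, size=(60, len(hkl)))
for generation in range(50):
    scores = np.array([density_skewness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # keep the most skewed maps
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, len(parents), size=2)]
        child = np.where(rng.random(len(hkl)) < 0.5, a, b)     # uniform crossover
        child += rng.normal(0, 0.2, size=len(hkl)) * (rng.random(len(hkl)) < 0.1)  # mutation
        children.append(child % (2 * np.pi))
    pop = np.array(children)

print("best skewness:", max(density_skewness(ind) for ind in pop))
```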
Improving experimental phases for strong reflections prior to density modification
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...
2013-09-20
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
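For the classical (matrix) half of this result, a natural Sinkhorn/Fortet-type fixed-point iteration can be written directly from the two constraints (column stochasticity and mapping p to q); the sketch below is derived from those constraints, not copied from the paper.

```python
# Hedged numerical sketch: scale a positive matrix A with diagonal factors so that the result
# is column stochastic and maps a given probability vector p to another probability vector q.
import numpy as np

def scale_to_bridge(A, p, q, iters=500):
    u = np.ones(len(q))
    for _ in range(iters):
        v = 1.0 / (A.T @ u)          # enforce column sums equal to 1
        u = q / (A @ (v * p))        # enforce B p = q
    return np.diag(u) @ A @ np.diag(v)

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, size=(4, 4))           # strictly positive matrix
p = rng.dirichlet(np.ones(4))
q = rng.dirichlet(np.ones(4))
B = scale_to_bridge(A, p, q)
print(np.allclose(B.sum(axis=0), 1.0), np.allclose(B @ p, q))   # both should be True
```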
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
An integrated volcanological-probabilistic approach has been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma-Vesuvius areas. On the basis of an analysis of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and an evaluation of the probability of each type of event, matrices of input parameters for numerical simulation have been prepared. The multi-dimensional input matrices include the main parameters controlling pyroclast transport, deposition and dispersion, as well as the set of possible eruptive vents used in the simulation program. The probabilistic hazard maps provide, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events per thousand years are typical of most areas within a range of about 10 km around the volcanoes, including Naples. The results provide constraints for emergency plans in the Neapolitan area.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de
A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Evolution of probability densities in stochastic coupled map lattices
NASA Astrophysics Data System (ADS)
Losson, Jérôme; Mackey, Michael C.
1995-08-01
This paper describes the statistical properties of coupled map lattices subjected to the influence of stochastic perturbations. The stochastic analog of the Perron-Frobenius operator is derived for various types of noise. When the local dynamics satisfy rather mild conditions, this equation is shown to possess either stable, steady state solutions (i.e., a stable invariant density) or density limit cycles. Convergence of the phase space densities to these limit cycle solutions explains the nonstationary behavior of statistical quantifiers at equilibrium. Numerical experiments performed on various lattices of tent, logistic, and shift maps with diffusivelike interelement couplings are examined in light of these theoretical results.
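A small simulation in the spirit of this setting is sketched below: a ring of diffusively coupled tent maps with additive noise, with the single-site density tracked by a histogram. The map, coupling strength and noise level are arbitrary illustrative choices, not the paper's cases.

```python
# Illustrative stochastic coupled map lattice: tent maps on a ring with diffusive coupling
# and additive noise; the evolving single-site density is monitored with a histogram.
import numpy as np

def tent(x):
    return np.where(x < 0.5, 2.0 * x, 2.0 * (1.0 - x))

rng = np.random.default_rng(4)
n_sites, eps, noise = 256, 0.1, 0.01
x = rng.uniform(0, 1, n_sites)

for t in range(200):
    y = tent(x)
    x = (1 - eps) * y + (eps / 2.0) * (np.roll(y, 1) + np.roll(y, -1))  # diffusive coupling
    x = np.clip(x + rng.normal(0, noise, n_sites), 0.0, 1.0)            # additive perturbation
    if t % 50 == 0:
        hist, _ = np.histogram(x, bins=20, range=(0, 1), density=True)
        print(t, np.round(hist, 2))
```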
Distinguishability notion based on Wootters statistical distance: Application to discrete maps
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.
2017-08-01
We study the distinguishability notion given by Wootters for states represented by probability density functions. This presents the particularity that it can also be used for defining a statistical distance in chaotic unidimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄ we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. Also, we give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in the phase space. We illustrate the results in the case of the logistic and the circle maps numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the associated metric space to arbitrary probability distributions (not necessarily invariant densities) is given along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
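For reference, Wootters' statistical distance between two probability distributions is the Bhattacharyya angle, quoted here in its standard form rather than taken from the paper:

```latex
% Wootters' statistical distance (the Bhattacharyya angle) between densities p and q:
d_W(p, q) \;=\; \arccos\!\left( \int \sqrt{p(x)\, q(x)} \, dx \right).
```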
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
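The fusion step described in the Methods can be sketched as follows, assuming (purely for illustration) that both the intensity-based and the atlas/location-based conditional densities are Gaussian; the numbers are invented and are not the study's fitted distributions.

```python
# Hedged sketch: fuse an intensity-based and an atlas/location-based estimate of electron
# density (both modeled here as Gaussians) into a single posterior and take its mean.
import numpy as np

def fuse_gaussians(mu_intensity, var_intensity, mu_atlas, var_atlas):
    """Posterior mean/variance of the (unnormalized) product of two Gaussian densities."""
    w1, w2 = 1.0 / var_intensity, 1.0 / var_atlas
    mu = (w1 * mu_intensity + w2 * mu_atlas) / (w1 + w2)
    return mu, 1.0 / (w1 + w2)

# One voxel: the T1/T2 intensities suggest soft tissue, the atlas location suggests bone.
mu_hu, var_hu = fuse_gaussians(mu_intensity=60.0, var_intensity=80.0**2,
                               mu_atlas=400.0, var_atlas=150.0**2)
print(f"estimated HU: {mu_hu:.0f} +/- {np.sqrt(var_hu):.0f}")
```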
Sekiguchi, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi
2016-01-01
Coherent X-ray diffraction imaging (CXDI) is one of the techniques used to visualize structures of non-crystalline particles of micrometer to submicrometer size in materials and biological science. In the structural analysis of CXDI, the electron density map of a sample particle can theoretically be reconstructed from a diffraction pattern by using phase-retrieval (PR) algorithms. However, in practice, the reconstruction is difficult because diffraction patterns are affected by Poisson noise and missing data in small-angle regions due to the beam stop and the saturation of detector pixels. In contrast to X-ray protein crystallography, in which the phases of diffracted waves are experimentally estimated, phase retrieval in CXDI relies entirely on the computational procedure driven by the PR algorithms. Thus, objective criteria and methods to assess the accuracy of retrieved electron density maps are necessary in addition to conventional parameters monitoring the convergence of PR calculations. Here, a data analysis scheme, named ASURA, is proposed which selects the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a diffraction pattern. Each electron density map composed of J pixels is expressed as a point in a J-dimensional space. Principal component analysis is applied to describe characteristics in the distribution of the maps in the J-dimensional space. When the distribution is characterized by a small number of principal components, the distribution is classified using the k-means clustering method. The classified maps are evaluated by several parameters to assess the quality of the maps. Using the proposed scheme, structure analysis of a diffraction pattern from a non-crystalline particle is conducted in two stages: estimation of the overall shape and determination of the fine structure inside the support shape. In each stage, the most accurate and probable density maps are objectively selected. The validity of the proposed scheme is examined by application to diffraction data that were obtained from an aggregate of metal particles and a biological specimen at the XFEL facility SACLA using custom-made diffraction apparatus.
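The selection idea can be sketched as below (this is not the ASURA code): each retrieved map is a point in a J-dimensional space, PCA reduces the dimensionality, and k-means isolates the dominant cluster of mutually consistent reconstructions. The synthetic "maps" stand in for the 1000 phase-retrieval results.

```python
# Hedged sketch of map selection via PCA + k-means; the synthetic data mimic a majority of
# consistent reconstructions plus a minority of failed, uncorrelated ones.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
J = 64 * 64
good = rng.normal(0.0, 1.0, size=J)
maps = np.vstack([good + rng.normal(0, 0.2, J) for _ in range(700)] +     # consistent solutions
                 [rng.normal(0, 1.0, size=(300, J))])                     # failed reconstructions

scores = PCA(n_components=5).fit_transform(maps)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
largest = np.bincount(labels).argmax()
selected = maps[labels == largest]        # candidate "most probable" maps for further checks
print(selected.shape)
```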
The response analysis of fractional-order stochastic system via generalized cell mapping method.
Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei
2018-01-01
This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundary, saddle, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of probability density functions of the response. The fractional-order ϕ^6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to give the implementations of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.
A fully traits-based approach to modeling global vegetation distribution.
van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M
2014-09-23
Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
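A hedged sketch of the classification step: each vegetation type is characterized by a Gaussian mixture density over the three traits, and each grid cell is assigned the type with the highest density at that cell's trait values. The training data and trait values below are synthetic placeholders, not the study's observations.

```python
# Illustrative Gaussian-mixture classification of vegetation types from trait values
# (leaf mass per area, stem-specific density, seed mass); all numbers are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
vegetation_types = ["tropical forest", "boreal forest", "grassland"]
models = {}
for i, vt in enumerate(vegetation_types):
    traits_obs = rng.normal(loc=[1.8 + i, 2.5 - 0.4 * i, 0.5 * i], scale=0.3, size=(200, 3))
    models[vt] = GaussianMixture(n_components=2, random_state=0).fit(traits_obs)

# Trait values for a few grid cells (log-scaled, illustrative only).
trait_map = rng.normal(loc=[2.0, 2.3, 0.4], scale=0.5, size=(5, 3))
log_dens = np.column_stack([models[vt].score_samples(trait_map) for vt in vegetation_types])
predicted = [vegetation_types[k] for k in log_dens.argmax(axis=1)]
print(predicted)
```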
NASA Astrophysics Data System (ADS)
Dietterich, H. R.; Stelten, M. E.; Downs, D. T.; Champion, D. E.
2017-12-01
Harrat Rahat is a predominantly mafic, 20,000 km2 volcanic field in western Saudi Arabia with an elongate volcanic axis extending 310 km north-south. Prior mapping suggests that the youngest eruptions were concentrated in northernmost Harrat Rahat, where our new geologic mapping and geochronology reveal >300 eruptive vents with ages ranging from 1.2 Ma to a historic eruption in 1256 CE. Eruption compositions and styles vary spatially and temporally within the volcanic field, where extensive alkali basaltic lavas dominate, but more evolved compositions erupted episodically as clusters of trachytic domes and small-volume pyroclastic flows. Analysis of vent locations, compositions, and eruption styles shows the evolution of the volcanic field and allows assessment of the spatio-temporal probabilities of vent opening and eruption styles. We link individual vents and fissures to eruptions and their deposits using field relations, petrography, geochemistry, paleomagnetism, and 40Ar/39Ar and 36Cl geochronology. Eruption volumes and deposit extents are derived from geologic mapping and topographic analysis. Spatial density analysis with kernel density estimation captures vent densities of up to 0.2 %/km2 along the north-south running volcanic axis, decaying quickly away to the east but reaching a second, lower high along a secondary axis to the west. Temporal trends show slight younging of mafic eruption ages to the north in the past 300 ka, as well as clustered eruptions of trachytes over the past 150 ka. Vent locations, timing, and composition are integrated through spatial probability weighted by eruption age for each compositional range to produce spatio-temporal models of vent opening probability. These show that the next mafic eruption is most probable within the north end of the main (eastern) volcanic axis, whereas more evolved compositions are most likely to erupt within the trachytic centers further to the south. These vent opening probabilities, combined with corresponding eruption properties, can be used as the basis for lava flow and tephra fall hazard maps.
Sekiguchi, Yuki; Hashimoto, Saki; Kobayashi, Amane; Oroguchi, Tomotaka; Nakasako, Masayoshi
2017-09-01
Coherent X-ray diffraction imaging (CXDI) is a technique for visualizing the structures of non-crystalline particles with size in the submicrometer to micrometer range in material sciences and biology. In the structural analysis of CXDI, the electron density map of a specimen particle projected along the direction of the incident X-rays can be reconstructed only from the diffraction pattern by using phase-retrieval (PR) algorithms. However, in practice, the reconstruction, relying entirely on the computational procedure, sometimes fails because diffraction patterns miss the data in small-angle regions owing to the beam stop and saturation of the detector pixels, and are modified by Poisson noise in X-ray detection. To date, X-ray free-electron lasers have allowed us to collect a large number of diffraction patterns within a short period of time. Therefore, the reconstruction of correct electron density maps is the bottleneck for efficiently conducting structure analyses of non-crystalline particles. To automatically address the correctness of retrieved electron density maps, a data analysis protocol to extract the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a single diffraction pattern is proposed. Through monitoring the variations of the phase values during PR calculations, the tendency for the PR calculations to succeed when the retrieved phase sets converged on a certain value was found. On the other hand, if the phase set was in persistent variation, the PR calculation tended to fail to yield the correct electron density map. To quantify this tendency, here a figure of merit for the variation of the phase values during PR calculation is introduced. In addition, a PR protocol to evaluate the similarity between a map of the highest figure of merit and other independently reconstructed maps is proposed. The protocol is implemented and practically examined in the structure analyses for diffraction patterns from aggregates of gold colloidal particles. Furthermore, the feasibility of the protocol in the structure analysis of organelles from biological cells is examined.
NASA Astrophysics Data System (ADS)
Neri, Augusto; Bevilacqua, Andrea; Esposti Ongaro, Tomaso; Isaia, Roberto; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Iannuzzi, Enrico; Orsucci, Simone; Pistolesi, Marco; Rosi, Mauro; Vitale, Stefano
2015-04-01
Campi Flegrei (CF) is an example of an active caldera containing densely populated settlements at very high risk of pyroclastic density currents (PDCs). We present here an innovative method for assessing background spatial PDC hazard in a caldera setting with probabilistic invasion maps conditional on the occurrence of an explosive event. The method encompasses the probabilistic assessment of potential vent opening positions, derived in the companion paper, combined with inferences about the spatial density distribution of PDC invasion areas from a simplified flow model, informed by reconstruction of deposits from eruptions in the last 15 ka. The flow model describes the PDC kinematics and accounts for main effects of topography on flow propagation. Structured expert elicitation is used to incorporate certain sources of epistemic uncertainty, and a Monte Carlo approach is adopted to produce a set of probabilistic hazard maps for the whole CF area. Our findings show that, in case of eruption, almost the entire caldera is exposed to invasion with a mean probability of at least 5%, with peaks greater than 50% in some central areas. Some areas outside the caldera are also exposed to this danger, with mean probabilities of invasion of the order of 5-10%. Our analysis suggests that these probability estimates have location-specific uncertainties which can be substantial. The results prove to be robust with respect to alternative elicitation models and allow the influence on hazard mapping of different sources of uncertainty, and of theoretical and numerical assumptions, to be quantified.
NASA Astrophysics Data System (ADS)
Bevilacqua, Andrea; Neri, Augusto; Esposti Ongaro, Tomaso; Isaia, Roberto; Flandoli, Franco; Bisson, Marina
2016-04-01
Today, hundreds of thousands of people live inside the Campi Flegrei caldera (Italy) and in the adjacent part of the city of Naples, making a future eruption of this volcano an event with huge consequences. Very high risks are associated with the occurrence of pyroclastic density currents (PDCs). Mapping the background or long-term PDC hazard in the area is a great challenge due to the unknown eruption time, scale and vent location of the next event, as well as the complex dynamics of the flow over the caldera topography. This is additionally complicated by the remarkable epistemic uncertainty on the eruptive record, affecting the time of past events, the location of vents and the estimates of PDC areal extent. First probability maps of PDC invasion were produced combining a vent-opening probability map, statistical estimates concerning the eruptive scales and a Cox-type temporal model including self-excitement effects, based on the eruptive record of the last 15 kyr. Maps were produced by using a Monte Carlo approach and adopting a simplified inundation model based on the "box model" integral approximation tested with 2D transient numerical simulations of flow dynamics. In this presentation we illustrate the independent effects of eruption scale, vent location and time of forecast of the next event. Specific focus was given to the remarkable differences between the eastern and western sectors of the caldera and their effects on the hazard maps. The analysis allowed us to identify areas with elevated probabilities of flow invasion as a function of the different assumptions made. By quantifying some sources of uncertainty in the system, we were also able to provide mean and percentile maps of PDC hazard levels.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo
2014-07-01
A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions for dense and fatty tissues on digital mammograms, which was an independent subset used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution and followed the energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.
ERIC Educational Resources Information Center
McKean, Cristina; Letts, Carolyn; Howard, David
2013-01-01
Neighbourhood Density (ND) and Phonotactic Probability (PP) influence word learning in children. This influence appears to change over development but the separate developmental trajectories of influence of PP and ND on word learning have not previously been mapped. This study examined the cross-sectional developmental trajectories of influence of…
Averaged kick maps: less noise, more signal…and probably less bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor
2009-09-01
Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σA) weighted. Analysis shows that they are comparable and correspond better to the final model than σA and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
Mapping Wildfire Ignition Probability Using Sentinel 2 and LiDAR (Jerte Valley, Cáceres, Spain)
Sánchez Sánchez, Yolanda; Mateos Picado, Marina
2018-01-01
Wildfire is a major threat to the environment, and this threat is aggravated by different climatic and socioeconomic factors. The availability of detailed, reliable mapping and periodic and immediate updates makes wildfire prevention and extinction work more effective. An analyst protocol has been generated that allows the precise updating of high-resolution thematic maps. For this protocol, images obtained through the Sentinel 2A satellite, with a return time of five days, have been merged with Light Detection and Ranging (LiDAR) data with a density of 0.5 points/m2 in order to obtain vegetation mapping with an accuracy of 88% (kappa = 0.86), which is then extrapolated to fuel model mapping through a decision tree. This process, which is fast and reliable, serves as a cartographic base for the later calculation of ignition-probability mapping. The generated cartography is a fundamental tool to be used in the decision making involved in the planning of preventive silvicultural treatments, extinguishing media distribution, infrastructure construction, etc.
Mapping Wildfire Ignition Probability Using Sentinel 2 and LiDAR (Jerte Valley, Cáceres, Spain).
Sánchez Sánchez, Yolanda; Martínez-Graña, Antonio; Santos Francés, Fernando; Mateos Picado, Marina
2018-03-09
Wildfire is a major threat to the environment, and this threat is aggravated by different climatic and socioeconomic factors. The availability of detailed, reliable mapping and periodic and immediate updates makes wildfire prevention and extinction work more effective. An analyst protocol has been generated that allows the precise updating of high-resolution thematic maps. For this protocol, images obtained through the Sentinel 2A satellite, with a return time of five days, have been merged with Light Detection and Ranging (LiDAR) data with a density of 0.5 points/m² in order to obtain vegetation mapping with an accuracy of 88% (kappa = 0.86), which is then extrapolated to fuel model mapping through a decision tree. This process, which is fast and reliable, serves as a cartographic base for the later calculation of ignition-probability mapping. The generated cartography is a fundamental tool to be used in the decision making involved in the planning of preventive silvicultural treatments, extinguishing media distribution, infrastructure construction, etc.
NASA Astrophysics Data System (ADS)
Bevilacqua, Andrea; Neri, Augusto; Bisson, Marina; Esposti Ongaro, Tomaso; Flandoli, Franco; Isaia, Roberto; Rosi, Mauro; Vitale, Stefano
2017-09-01
This study presents a new method for producing long-term hazard maps for pyroclastic density currents (PDC) originating at Campi Flegrei caldera. The method is based on a doubly stochastic approach and is able to combine the uncertainty assessments on the spatial location of the volcanic vent, the size of the flow and the expected time of such an event. The results are obtained by using a Monte Carlo approach and adopting a simplified invasion model based on the box model integral approximation. Temporal assessments are modelled through a Cox-type process including self-excitement effects, based on the eruptive record of the last 15 kyr. Mean and percentile maps of PDC invasion probability are produced, exploring their sensitivity to some sources of uncertainty and to the effects of the dependence between PDC scales and the caldera sector where they originated. Conditional maps representative of PDC originating inside limited zones of the caldera, or of PDC with a limited range of scales, are also produced. Finally, the effect of assuming different time windows for the hazard estimates is explored, including the potential occurrence of a sequence of multiple events. Assuming that the last eruption of Monte Nuovo (A.D. 1538) marked the beginning of a new epoch of activity similar to the previous ones, results of the statistical analysis indicate a mean probability of PDC invasion above 5% in the next 50 years over almost the entire caldera (with a probability peak of 25% in the central part of the caldera). In contrast, probability values reduce by a factor of about 3 if the entire eruptive record over the last 15 kyr is considered, i.e. including both eruptive epochs and quiescent periods.
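The Monte Carlo construction can be sketched schematically as follows, with a circular-runout stand-in replacing the box-model integral approximation and with invented vent-opening and scale distributions; it only illustrates how per-cell invasion probabilities emerge from repeated sampling.

```python
# Schematic Monte Carlo invasion-probability map: sample a vent location from a vent-opening
# density and a flow runout from a scale distribution, mark invaded cells, and average.
# The circular runout is a stand-in, not the box-model integral approximation itself.
import numpy as np

rng = np.random.default_rng(7)
nx = ny = 100                                     # 20 km x 20 km grid, 200 m cells
xx, yy = np.meshgrid(np.linspace(-10, 10, nx), np.linspace(-10, 10, ny))

n_samples = 2000
invasion_count = np.zeros((ny, nx))
for _ in range(n_samples):
    vent = rng.normal(loc=[0.0, 0.0], scale=2.5, size=2)     # vent-opening density (km)
    runout = rng.lognormal(mean=1.0, sigma=0.5)              # flow scale -> runout distance (km)
    invaded = (xx - vent[0]) ** 2 + (yy - vent[1]) ** 2 <= runout ** 2
    invasion_count += invaded

invasion_probability = invasion_count / n_samples            # per-cell probability of invasion
print(invasion_probability.max(), invasion_probability.mean())
```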
The North Alabama Lightning Warning Product
NASA Technical Reports Server (NTRS)
Buechler, Dennis E.; Blakeslee, R. J.; Stano, G. T.
2009-01-01
The North Alabama Lightning Mapping Array (NALMA) has been collecting total lightning data on storms in the Tennessee Valley region since 2001. Forecasters from nearby National Weather Service (NWS) offices have been ingesting these data for display with other AWIPS products. The current lightning product used by the offices is the lightning source density plot. The new product provides a probabilistic, short-term, graphical forecast of the probability of lightning activity occurring at 5 min intervals over the next 30 minutes. One of the uses of the current lightning source density product by the Huntsville National Weather Service Office is to identify areas of potential for cloud-to-ground flashes based on where LMA total lightning is occurring. This product quantifies that observation. The Lightning Warning Product is derived from total lightning observations from the Washington, D.C. (DCLMA) and North Alabama Lightning Mapping Arrays and cloud-to-ground lightning flashes detected by the National Lightning Detection Network (NLDN). Probability predictions are provided for both intracloud and cloud-to-ground flashes. The gridded product can be displayed on AWIPS workstations in a manner similar to that of the lightning source density product.
Positive contraction mappings for classical and quantum Schrödinger systems
NASA Astrophysics Data System (ADS)
Georgiou, Tryphon T.; Pavon, Michele
2015-03-01
The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior. Jamison proved that the new law is obtained through a multiplicative functional transformation of the prior. This transformation is characterised by an automorphism on the space of endpoints probability measures, which has been studied by Fortet, Beurling, and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins by establishing solutions to Schrödinger systems for Markov chains. Our approach is based on the Hilbert metric and shows that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach, in a similar manner, the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map for the cases where the marginals are either uniform or pure states. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
Probabilistic mapping of flood-induced backscatter changes in SAR time series
NASA Astrophysics Data System (ADS)
Schlaffer, Stefan; Chini, Marco; Giustarini, Laura; Matgen, Patrick
2017-04-01
The information content of flood extent maps can be increased considerably by including information on the uncertainty of the flood area delineation. This additional information can be of benefit in flood forecasting and monitoring. Furthermore, flood probability maps can be converted to binary maps showing flooded and non-flooded areas by applying a threshold probability value pF = 0.5. In this study, a probabilistic change detection approach for flood mapping based on synthetic aperture radar (SAR) time series is proposed. For this purpose, conditional probability density functions (PDFs) for land and open water surfaces were estimated from ENVISAT ASAR Wide Swath (WS) time series containing >600 images using a reference mask of permanent water bodies. A pixel-wise harmonic model was used to account for seasonality in backscatter from land areas caused by soil moisture and vegetation dynamics. The approach was evaluated for a large-scale flood event along the River Severn, United Kingdom. The retrieved flood probability maps were compared to a reference flood mask derived from high-resolution aerial imagery by means of reliability diagrams. The obtained performance measures indicate both high reliability and confidence although there was a slight under-estimation of the flood extent, which may in part be attributed to topographically induced radar shadows along the edges of the floodplain. Furthermore, the results highlight the importance of local incidence angle for the separability between flooded and non-flooded areas as specular reflection properties of open water surfaces increase with a more oblique viewing geometry.
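The per-pixel decision described above can be sketched as follows, assuming Gaussian conditional PDFs for water and land backscatter and a fitted annual harmonic for the land mean; all parameters here are invented, whereas the study estimated them from the ENVISAT ASAR time series.

```python
# Hedged sketch: Bayes' rule combines conditional Gaussian PDFs for water and land
# backscatter (the land mean follows an annual harmonic) into a flood probability per pixel.
import numpy as np
from scipy.stats import norm

def land_mean_db(day_of_year, mean_db=-8.0, amp_db=1.5, phase=0.3):
    """Pixel-wise harmonic model for seasonal land backscatter (dB); parameters invented."""
    return mean_db + amp_db * np.cos(2 * np.pi * day_of_year / 365.25 - phase)

def flood_probability(sigma0_db, day_of_year, prior_flood=0.5,
                      water_mean_db=-18.0, water_sd=2.0, land_sd=1.5):
    p_water = norm.pdf(sigma0_db, water_mean_db, water_sd) * prior_flood
    p_land = norm.pdf(sigma0_db, land_mean_db(day_of_year), land_sd) * (1 - prior_flood)
    return p_water / (p_water + p_land)

print(flood_probability(sigma0_db=-16.0, day_of_year=30))   # dark pixel in winter -> high p_F
print(flood_probability(sigma0_db=-7.0, day_of_year=30))    # bright pixel -> low p_F
```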
Huang, Guangzao; Yuan, Mingshun; Chen, Moliang; Li, Lei; You, Wenjie; Li, Hanjie; Cai, James J; Ji, Guoli
2017-10-07
The application of machine learning in cancer diagnostics has shown great promise and is of importance in clinic settings. Here we consider applying machine learning methods to transcriptomic data derived from tumor-educated platelets (TEPs) from individuals with different types of cancer. We aim to define a reliability measure for diagnostic purposes to increase the potential for facilitating personalized treatments. To this end, we present a novel classification method called MFRB (for Multiple Fitting Regression and Bayes decision), which integrates the process of multiple fitting regression (MFR) with Bayes decision theory. MFR is first used to map multidimensional features of the transcriptomic data into a one-dimensional feature. The probability density function of each class in the mapped space is then adjusted using the Gaussian probability density function. Finally, the Bayes decision theory is used to build a probabilistic classifier with the estimated probability density functions. The output of MFRB can be used to determine which class a sample belongs to, as well as to assign a reliability measure for a given class. The classical support vector machine (SVM) and probabilistic SVM (PSVM) are used to evaluate the performance of the proposed method with simulated and real TEP datasets. Our results indicate that the proposed MFRB method achieves the best performance compared to SVM and PSVM, mainly due to its strong generalization ability for limited, imbalanced, and noisy data.
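A loose sketch of the pipeline, not the authors' MFRB implementation: a plain linear regression stands in for the multiple-fitting-regression step that maps features to one dimension, class-conditional Gaussians are fitted on the mapped score, and Bayes' rule yields both the decision and a per-class reliability.

```python
# Loose stand-in for the MFRB idea: regression to a 1-D score, Gaussian class densities on
# that score, Bayes' rule for decision and reliability. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import norm

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 1, size=(100, 20)), rng.normal(0.7, 1, size=(100, 20))])
y = np.array([0] * 100 + [1] * 100)

score = LinearRegression().fit(X, y).predict(X)            # 1-D mapped feature
params = {c: (score[y == c].mean(), score[y == c].std()) for c in (0, 1)}
priors = {c: np.mean(y == c) for c in (0, 1)}

def posterior(s):
    lik = {c: norm.pdf(s, *params[c]) * priors[c] for c in (0, 1)}
    z = lik[0] + lik[1]
    return {c: lik[c] / z for c in (0, 1)}                  # reliability measure per class

print(posterior(score[0]), posterior(score[150]))
```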
A search for radio emission from flare stars in the Pleiades
NASA Technical Reports Server (NTRS)
Bastian, T. S.; Dulk, G. A.; Slee, O. B.
1988-01-01
The VLA has been used to search for radio emission from flare stars in the Pleiades. Two observational strategies were employed. First, about 1/2 sq deg of the cluster, containing about 40 known flare stars, was mapped at 1.4 GHz at two epochs. More than 120 sources with flux densities greater than 0.3 mJy exist on the maps. Detailed analysis shows that all but two of these sources are probably extragalactic. The two sources identified as stellar are probably not Pleiades members as judged by their proper motions; rather, based on their colors and magnitudes, they seem to be foreground G stars. One is a known X-ray source. The second observational strategy, in which five rapidly rotating flare stars were observed at three frequencies, yielded no detections. The 0.3 mJy flux-density limit of this survey is such that only the most intense outbursts of flare stars in the solar neighborhood could have been detected if those stars were at the distance of the Pleiades.
NASA Astrophysics Data System (ADS)
Reimberg, Paulo; Bernardeau, Francis
2018-01-01
We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to arbitrary functionals of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (Map), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we can extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced shear approximation (replacing the reduced shear with the shear itself). We were able to quantify precisely how this latter approximation affects the Map statistical properties. In particular, we derive the corrective term for the skewness of the Map and reconstruct its one-point PDF.
Minimal entropy approximation for cellular automata
NASA Astrophysics Data System (ADS)
Fukś, Henryk
2014-02-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Estimating the Probability of Elevated Nitrate Concentrations in Ground Water in Washington State
Frans, Lonna M.
2008-01-01
Logistic regression was used to relate anthropogenic (manmade) and natural variables to the occurrence of elevated nitrate concentrations in ground water in Washington State. Variables that were analyzed included well depth, ground-water recharge rate, precipitation, population density, fertilizer application amounts, soil characteristics, hydrogeomorphic regions, and land-use types. Two models were developed: one with and one without the hydrogeomorphic regions variable. The variables in both models that best explained the occurrence of elevated nitrate concentrations (defined as concentrations of nitrite plus nitrate as nitrogen greater than 2 milligrams per liter) were the percentage of agricultural land use in a 4-kilometer radius of a well, population density, precipitation, soil drainage class, and well depth. Based on the relations between these variables and measured nitrate concentrations, logistic regression models were developed to estimate the probability of nitrate concentrations in ground water exceeding 2 milligrams per liter. Maps of Washington State were produced that illustrate these estimated probabilities for wells drilled to 145 feet below land surface (median well depth) and the estimated depth to which wells would need to be drilled to have a 90-percent probability of drawing water with a nitrate concentration less than 2 milligrams per liter. Maps showing the estimated probability of elevated nitrate concentrations indicated that the agricultural regions are most at risk followed by urban areas. The estimated depths to which wells would need to be drilled to have a 90-percent probability of obtaining water with nitrate concentrations less than 2 milligrams per liter exceeded 1,000 feet in the agricultural regions; whereas, wells in urban areas generally would need to be drilled to depths in excess of 400 feet.
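As a rough illustration of the kind of model described above, the sketch below fits a logistic regression to a hypothetical predictor matrix and returns per-well exceedance probabilities. The predictors and data are invented stand-ins for the variables named in the abstract (agricultural land use, population density, precipitation, soil drainage class, well depth), not the actual survey data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: five hypothetical (standardized) predictors per well and a
# synthetic binary response indicating whether nitrate exceeds 2 mg/L.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                                   # predictor matrix
y = (X @ [1.2, 0.8, 0.5, -0.4, -1.0] + rng.normal(size=500)) > 0

model = LogisticRegression().fit(X, y)
p_exceed = model.predict_proba(X)[:, 1]    # estimated probability nitrate > 2 mg/L
print(p_exceed[:5])
```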
Failure Maps for Rectangular 17-4PH Stainless Steel Sandwiched Foam Panels
NASA Technical Reports Server (NTRS)
Raj, S. V.; Ghosn, L. J.
2007-01-01
A new and innovative concept is proposed for designing lightweight fan blades for aircraft engines using commercially available 17-4PH precipitation-hardened stainless steel. Rotating fan blades in aircraft engines experience a complex loading state consisting of combinations of centrifugal, distributed pressure, and torsional loads. Theoretical plastic collapse failure maps, showing plots of the foam relative density versus face sheet thickness, t, normalized by the fan blade span length, L, have been generated for rectangular 17-4PH sandwiched foam panels under these three loading modes, assuming three plastic collapse failure modes. These maps show that the 17-4PH sandwiched foam panels can fail by yielding of the face sheets, yielding of the foam core, or wrinkling of the face sheets, depending on the foam relative density, the magnitude of t/L, and the loading mode. The design envelope of a generic fan blade is superimposed on the maps to provide valuable insights into the probable failure modes in a sandwiched foam fan blade.
Spatial vent opening probability map of El Hierro Island (Canary Islands, Spain)
NASA Astrophysics Data System (ADS)
Becerril, Laura; Cappello, Annalisa; Galindo, Inés; Neri, Marco; Del Negro, Ciro
2013-04-01
The assessment of the probable spatial distribution of new eruptions is useful for managing and reducing volcanic risk. It can be achieved in different ways, but it becomes especially hard when dealing with volcanic areas that are less studied, poorly monitored and characterized by infrequent activity, as is the case for El Hierro. Even though it is the youngest of the Canary Islands, before the 2011 eruption in the "Las Calmas Sea", El Hierro had been the least studied volcanic island of the Canaries, with more attention historically devoted to La Palma, Tenerife and Lanzarote. We propose a probabilistic method to build the susceptibility map of El Hierro, i.e. the spatial distribution of vent opening for future eruptions, based on the mathematical analysis of volcano-structural data collected mostly on the island and, secondarily, on the submerged part of the volcano, up to a distance of ~10-20 km from the coast. The volcano-structural data were collected through new fieldwork measurements, bathymetric information, and analysis of geological maps, orthophotos and aerial photographs. They were divided into different datasets and converted into separate, weighted probability density functions, which were then included in a non-homogeneous Poisson process to produce the volcanic susceptibility map. Future eruptive activity on El Hierro is expected to concentrate mainly on the rift zones, extending also beyond the shoreline. The highest probabilities of hosting new eruptions are located on the distal parts of the South and West rifts, with the maximum reached in the south-western area of the West rift. High probabilities are also observed in the Northeast and South rifts, and in the submarine parts of the rifts. This map represents the first effort to deal with volcanic hazard at El Hierro and can be a support tool for decision makers in land planning, emergency plans and civil defence actions.
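The per-dataset probability density functions combined into the susceptibility map are not specified in detail above. As a hedged sketch of the general approach, a weighted Gaussian kernel density estimate over hypothetical vent and fissure coordinates can stand in for one such density layer, normalised over the grid so that each cell carries a relative vent-opening probability; the coordinates, weights and grid are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical vent/fissure coordinates (x, y in km) and dataset weights.
vents = np.array([[2.0, 1.0], [2.5, 1.2], [7.0, 4.0], [6.5, 3.8], [3.0, 1.5]])
weights = np.array([1.0, 1.0, 0.5, 0.5, 1.0])

kde = gaussian_kde(vents.T, weights=weights / weights.sum())
gx, gy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 6, 60))
intensity = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
susceptibility = intensity / intensity.sum()   # relative vent-opening probability per cell
print(susceptibility.max(), susceptibility.sum())
```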
NASA Astrophysics Data System (ADS)
Hale, Stephen Roy
Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof of concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to the nearest Fir Sapling cover type. Solving the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. The topographically normalized imagery features Normalized Difference Vegetation Index and Band 1 (blue) were used to estimate dominant vegetation height using multiple linear regression, and a Digital Elevation Model was used to estimate elevation. Distance to the nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation, using Bicknell's Thrush data not used in model calibration, revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60. Probabilities from 0.40 to <0.50 and greater than 0.60 significantly underestimated and overestimated presence, respectively. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, which is accompanied by a concomitant decrease in land area. The greater land area of lower-density habitat may therefore account for more total individuals and reproductive output than the smaller area of higher-density habitat. Efforts to conserve the areas of highest individual density, under the assumption that density reflects habitat quality, could thus target only a small fraction of the total population.
Active contours on statistical manifolds and texture segmentation
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2005-01-01
A new approach to active contours on statistical manifolds is presented. The statistical manifolds are 2-dimensional Riemannian manifolds that are statistically defined by maps that transform a parameter domain onto a set of probability density functions. In this novel framework, color or texture features are measured at each image point and their statistical...
Large scale IRAM 30 m CO-observations in the giant molecular cloud complex W43
NASA Astrophysics Data System (ADS)
Carlhoff, P.; Nguyen Luong, Q.; Schilke, P.; Motte, F.; Schneider, N.; Beuther, H.; Bontemps, S.; Heitsch, F.; Hill, T.; Kramer, C.; Ossenkopf, V.; Schuller, F.; Simon, R.; Wyrowski, F.
2013-12-01
We aim to fully describe the distribution and location of dense molecular clouds in the giant molecular cloud complex W43. It was previously identified as one of the most massive star-forming regions in our Galaxy. To trace the moderately dense molecular clouds in the W43 region, we initiated W43-HERO, a large program using the IRAM 30 m telescope, which covers a wide dynamic range of scales from 0.3 to 140 pc. We obtained on-the-fly maps in 13CO (2-1) and C18O (2-1) with a high spectral resolution of 0.1 km s⁻¹ and a spatial resolution of 12''. These maps cover an area of ~1.5 square degrees and include the two main clouds of W43 and the lower density gas surrounding them. A comparison to Galactic models and previous distance calculations confirms the location of W43 near the tangential point of the Scutum arm at approximately 6 kpc from the Sun. The resulting intensity cubes of the observed region are separated into subcubes, which are centered on single clouds and then analyzed in detail. The optical depth, excitation temperature, and H2 column density maps are derived from the 13CO and C18O data. These results are then compared to those derived from Herschel dust maps. The mass of a typical cloud is several 10⁴ M⊙, while the total mass in the dense molecular gas (>10² cm⁻³) in W43 is found to be ~1.9 × 10⁶ M⊙. Probability distribution functions obtained from column density maps derived from molecular line data and Herschel imaging show a log-normal distribution for low column densities and a power-law tail for high densities. A flatter slope for the molecular line data probability distribution function may imply that those data selectively trace the gravitationally collapsing gas. Appendices are available in electronic form at http://www.aanda.org. The final datacubes (13CO and C18O) for the entire survey are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A24
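A minimal sketch of the column-density PDF analysis mentioned above is given below, assuming a hypothetical column-density map: the PDF is built on η = ln(N/⟨N⟩), and a straight line fitted to the logarithmic histogram above an assumed transition point gives the slope of the power-law tail. The data, bin count and threshold are invented for illustration; in a real analysis the histogram would come from the observed map and the transition point from inspection of the PDF.

```python
import numpy as np

# Hypothetical column-density values (cm^-2) standing in for an observed map.
N = np.random.default_rng(1).lognormal(mean=np.log(1e22), sigma=0.5, size=100_000)
eta = np.log(N / N.mean())                       # logarithmic column density
hist, edges = np.histogram(eta, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

tail = (centers > 1.0) & (hist > 0)              # assumed start of the power-law tail
slope, intercept = np.polyfit(centers[tail], np.log(hist[tail]), 1)
print(f"fitted slope of the high-density tail of the eta-PDF: {slope:.2f}")
```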
Quantitative Risk Mapping of Urban Gas Pipeline Networks Using GIS
NASA Astrophysics Data System (ADS)
Azari, P.; Karimi, M.
2017-09-01
Natural gas is considered an important source of energy in the world. With the increasing growth of urbanization, urban gas pipelines, which transmit natural gas from transmission pipelines to consumers, are becoming a dense network. The increase in the density of urban pipelines raises the probability of severe accidents in urban areas. These accidents have a catastrophic effect on people and their property. Within the next few years, risk mapping will become an important component in urban planning and the management of large cities in order to decrease the probability of accidents and to control them. Therefore, it is important to assess risk values and determine their locations on an urban map using an appropriate method. In the history of risk analysis of urban natural gas pipeline networks, pipelines have always been considered one by one and their density in the urban area has not been taken into account. The aim of this study is to determine the effect of several pipelines on the risk value at a specific grid point. This paper outlines a quantitative risk assessment method for analysing the risk of urban natural gas pipeline networks. It consists of two main parts: failure rate calculation, where the EGIG historical data are used, and fatal length calculation, which involves calculation of the gas release and the fatality rate of the consequences. We consider jet fire, fireball and explosion when investigating the consequences of gas pipeline failure. The outcome of this method is an individual risk value, which is shown as a risk map.
Crock, J.G.; Severson, R.C.; Gough, L.P.
1992-01-01
Recent investigations on the Kenai Peninsula had two major objectives: (1) to establish elemental baseline concentration ranges for native vegetation and soils; and (2) to determine the sampling density required for preparing stable regional geochemical maps for various elements in native plants and soils. These objectives were accomplished using an unbalanced, nested analysis-of-variance (ANOVA) barbell sampling design. Hylocomium splendens (Hedw.) BSG (feather moss, whole plant), Picea glauca (Moench) Voss (white spruce, twigs and needles), and soil horizons (02 and C) were collected and analyzed for major and trace total element concentrations. Using geometric means and geometric deviations, expected baseline ranges for elements were calculated. Results of the ANOVA show that intensive soil or plant sampling is needed to reliably map the geochemistry of the area, due to large local variability. For example, producing reliable element maps of feather moss using a 50 km cell (at 95% probability) would require sampling densities of from 4 samples per cell for Al, Co, Fe, La, Li, and V, to more than 15 samples per cell for Cu, Pb, Se, and Zn.
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
Long-term multi-hazard assessment for El Misti volcano (Peru)
NASA Astrophysics Data System (ADS)
Sandri, Laura; Thouret, Jean-Claude; Constantinescu, Robert; Biass, Sébastien; Tonini, Roberto
2014-02-01
We propose a long-term probabilistic multi-hazard assessment for El Misti Volcano, a composite cone located <20 km from Arequipa. The second largest Peruvian city is a rapidly expanding economic centre and is classified by UNESCO as World Heritage. We apply the Bayesian Event Tree code for Volcanic Hazard (BET_VH) to produce probabilistic hazard maps for the predominant volcanic phenomena that may affect c.900,000 people living around the volcano. The methodology accounts for the natural variability displayed by volcanoes in their eruptive behaviour, such as different types/sizes of eruptions and possible vent locations. For this purpose, we treat probabilistically several model runs for some of the main hazardous phenomena (lahars, pyroclastic density currents (PDCs), tephra fall and ballistic ejecta) and data from past eruptions at El Misti (tephra fall, PDCs and lahars) and at other volcanoes (PDCs). The hazard maps, although neglecting possible interactions among phenomena or cascade effects, have been produced with a homogeneous method and refer to a common time window of 1 year. The probability maps reveal that only the north and east suburbs of Arequipa are exposed to all volcanic threats except for ballistic ejecta, which are limited to the uninhabited but touristic summit cone. The probability for pyroclastic density currents reaching recently expanding urban areas and the city along ravines is around 0.05 %/year, similar to the probability obtained for roof-critical tephra loading during the rainy season. Lahars represent by far the most probable threat (around 10 %/year) because at least four radial drainage channels can convey them approximately 20 km away from the volcano across the entire city area in heavy rain episodes, even without eruption. The Río Chili Valley represents the major concern to city safety owing to the probable cascading effect of combined threats: PDCs and rockslides, dammed lake break-outs and subsequent lahars or floods. Although this study does not intend to replace the current El Misti hazard map, the quantitative results of this probabilistic multi-hazard assessment can be incorporated into a multi-risk analysis, to support decision makers in any future improvement of the current hazard evaluation, such as further land-use planning and possible emergency management.
Understanding star formation in molecular clouds. II. Signatures of gravitational collapse of IRDCs
NASA Astrophysics Data System (ADS)
Schneider, N.; Csengeri, T.; Klessen, R. S.; Tremblin, P.; Ossenkopf, V.; Peretto, N.; Simon, R.; Bontemps, S.; Federrath, C.
2015-06-01
We analyse column density and temperature maps derived from Herschel dust continuum observations of a sample of prominent, massive infrared dark clouds (IRDCs), i.e. G11.11-0.12, G18.82-0.28, G28.37+0.07, and G28.53-0.25. We disentangle the velocity structure of the clouds using 13CO 1→0 and 12CO 3→2 data, showing that these IRDCs are the densest regions in massive giant molecular clouds (GMCs) and not isolated features. The probability distribution function (PDF) of column densities for all clouds has a power-law distribution over all (high) column densities, regardless of the evolutionary stage of the cloud: G11.11-0.12, G18.82-0.28, and G28.37+0.07 contain (proto)-stars, while G28.53-0.25 shows no signs of star formation. This is in contrast to the purely log-normal PDFs reported for near- and/or mid-IR extinction maps. We only find a log-normal distribution for lower column densities if we construct PDFs of the column density maps of the whole GMC in which the IRDCs are embedded. By comparing the PDF slope and the radial column density profile of three of our clouds, we attribute the power law to the effect of large-scale gravitational collapse and to local free-fall collapse of pre- and protostellar cores for the highest column densities. A significant impact on the cloud properties from radiative feedback is unlikely because the clouds are mostly devoid of star formation. Independently of the PDF analysis, we find infall signatures in the spectral profiles of 12CO for G28.37+0.07 and G11.11-0.12, supporting the scenario of gravitational collapse. Our results are in line with earlier interpretations that see massive IRDCs as the densest regions within GMCs, which may be the progenitors of massive stars or clusters. At least some of the IRDCs are probably the same features as ridges (high column density regions with N > 10²³ cm⁻² over small areas), which were defined for nearby IR-bright GMCs. Because IRDCs are confined to only the densest (gravity-dominated) cloud regions, the PDF constructed from this kind of clipped image does not represent the (turbulence-dominated) low column density regime of the cloud. The column density maps (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A29
NASA Astrophysics Data System (ADS)
Akgün, Aykut; Türk, Necdet
2011-09-01
Erosion is one of the most important natural hazard phenomena in the world, and it poses a significant threat to Turkey in terms of land degradation and desertification. To cope with this problem, we must determine which areas are erosion-prone. Many studies have been carried out and different models and methods have been used to this end. In this study, we used logistic regression to prepare an erosion susceptibility map for the Ayvalık region in Balıkesir (NW Turkey). Our assessment parameters were the weathering grades of rocks, slope gradient, structural lineament density, drainage density, land cover, stream power index (SPI) and profile curvature. These were processed using Idrisi Kilimanjaro GIS software. We used logistic regression analysis to relate the predictor variables to the occurrence or non-occurrence of gully erosion sites within geographic cells, and we then used this relationship to produce a probability map for future erosion sites. The results indicate that lineament density, weathering grades of rocks and drainage density are the most important variables governing erosion susceptibility. Other variables, such as land cover and slope gradient, emerged as variables of secondary importance. Highly weathered basalt, andesite, basaltic andesite and lacustrine sediments were the units most susceptible to erosion. In order to calculate the prediction accuracy of the erosion susceptibility map generated, we compared it with the map showing the gully erosion areas. On the basis of this comparison, the area under the curve (AUC) value was found to be 0.81. This result suggests that the erosion susceptibility map we generated is accurate.
Liu, Chan; Zeng, Liangbin; Zhu, Siyuan; Wu, Lingqing; Wang, Yanzhou; Tang, Shouwei; Wang, Hongwu; Zheng, Xia; Zhao, Jian; Chen, Xiaorong; Dai, Qiuzhong; Liu, Touming
2017-11-15
Plentiful bast fiber, a high crude protein content, and vigorous vegetative growth make ramie a popular fiber and forage crop. Here, we report the draft genome of ramie, along with a genomic comparison and evolutionary analysis. The draft genome contained a sequence of approximately 335.6 Mb with 42,463 predicted genes. A high-density genetic map with 4,338 single nucleotide polymorphisms (SNPs) was developed and used to anchor the genome sequence, thus, creating an integrated genetic and physical map containing a 58.2-Mb genome sequence and 4,304 molecular markers. A genomic comparison identified 1,075 unique gene families in ramie, containing 4,082 genes. Among these unique genes, five were cellulose synthase genes that were specifically expressed in stem bark, and 3 encoded a WAT1-related protein, suggesting that they are probably related to high bast fiber yield. An evolutionary analysis detected 106 positively selected genes, 22 of which were related to nitrogen metabolism, indicating that they are probably responsible for the crude protein content and vegetative growth of domesticated varieties. This study is the first to characterize the genome and develop a high-density genetic map of ramie and provides a basis for the genetic and molecular study of this crop. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
Argañaraz, J P; Radeloff, V C; Bar-Massada, A; Gavier-Pizarro, G I; Scavuzzo, C M; Bellis, L M
2017-07-01
Wildfires are a major threat to people and property in Wildland Urban Interface (WUI) communities worldwide, but while the patterns of the WUI in North America, Europe and Oceania have been studied before, this is not the case in Latin America. Our goals were to a) map WUI areas in central Argentina, and b) assess wildfire exposure for WUI communities in relation to historic fires, with special emphasis on large fires and estimated burn probability based on an empirical model. We mapped the WUI in the mountains of central Argentina (810,000 ha), after digitizing the location of 276,700 buildings and deriving vegetation maps from satellite imagery. The areas where houses and wildland vegetation intermingle were classified as Intermix WUI (housing density > 6.17 hu/km² and wildland vegetation cover > 50%), and the areas where wildland vegetation abuts settlements were classified as Interface WUI (housing density > 6.17 hu/km², wildland vegetation cover < 50%, but within 600 m of a vegetated patch larger than 5 km²). We generated burn probability maps based on historical fire data from 1999 to 2011, as well as from an empirical model of fire frequency. WUI areas occupied 15% of our study area and contained 144,000 buildings (52%). Most WUI area was Intermix WUI, but most WUI buildings were in the Interface WUI. Our findings suggest that central Argentina has a WUI fire problem. WUI areas included most of the buildings exposed to wildfires and most of the buildings located in areas of higher burn probability. Our findings can help focus fire management activities in areas of higher risk, and ultimately provide support for landscape management and planning aimed at reducing wildfire risk in WUI communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
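The classification thresholds quoted above translate directly into a per-cell rule. The sketch below encodes that rule for a single cell, with the 600 m distance criterion passed in as a precomputed value; this is a simplification of the mapping workflow, which operates on building and vegetation rasters rather than single cells.

```python
def classify_wui(housing_density, veg_cover, dist_to_large_patch_m):
    """Toy per-cell WUI classification following the thresholds in the abstract.

    housing_density       : housing units per km^2
    veg_cover             : fraction of wildland vegetation cover (0-1)
    dist_to_large_patch_m : distance (m) to the nearest vegetated patch > 5 km^2
    Returns 'intermix', 'interface' or 'non-WUI'.
    """
    if housing_density > 6.17 and veg_cover > 0.5:
        return "intermix"
    if housing_density > 6.17 and veg_cover <= 0.5 and dist_to_large_patch_m <= 600:
        return "interface"
    return "non-WUI"

print(classify_wui(12.0, 0.7, 900))   # -> intermix
```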
Mahalingam, Rajasekaran; Peng, Hung-Pin; Yang, An-Suei
2014-08-01
Protein-fatty acid interaction is vital for many cellular processes and understanding this interaction is important for functional annotation as well as drug discovery. In this work, we present a method for predicting fatty acid (FA)-binding residues by using three-dimensional probability density distributions of interacting atoms of FAs on protein surfaces, which are derived from known protein-FA complex structures. A machine learning algorithm was established to learn the characteristic patterns of the probability density maps specific to the FA-binding sites. The predictor was trained with five-fold cross validation on a non-redundant training set and then evaluated with an independent test set as well as on a holo-apo pairs dataset. The results showed good accuracy in predicting the FA-binding residues. Furthermore, the predictor developed in this study is implemented as an online server that is freely accessible at the following website: http://ismblab.genomics.sinica.edu.tw/. Copyright © 2014 Elsevier B.V. All rights reserved.
Chen, Ching-Tai; Peng, Hung-Pin; Jian, Jhih-Wei; Tsai, Keng-Chang; Chang, Jeng-Yih; Yang, Ei-Wen; Chen, Jun-Bo; Ho, Shinn-Ying; Hsu, Wen-Lian; Yang, An-Suei
2012-01-01
Protein-protein interactions are key to many biological processes. Computational methodologies devised to predict protein-protein interaction (PPI) sites on protein surfaces are important tools in providing insights into the biological functions of proteins and in developing therapeutics targeting the protein-protein interaction sites. One of the general features of PPI sites is that the core regions from the two interacting protein surfaces are complementary to each other, similar to the interior of proteins in packing density and in the physicochemical nature of the amino acid composition. In this work, we simulated the physicochemical complementarities by constructing three-dimensional probability density maps of non-covalent interacting atoms on the protein surfaces. The interacting probabilities were derived from the interior of known structures. Machine learning algorithms were applied to learn the characteristic patterns of the probability density maps specific to the PPI sites. The trained predictors for PPI sites were cross-validated with the training cases (consisting of 432 proteins) and were tested on an independent dataset (consisting of 142 proteins). The residue-based Matthews correlation coefficient for the independent test set was 0.423; the accuracy, precision, sensitivity, specificity were 0.753, 0.519, 0.677, and 0.779 respectively. The benchmark results indicate that the optimized machine learning models are among the best predictors in identifying PPI sites on protein surfaces. In particular, the PPI site prediction accuracy increases with increasing size of the PPI site and with increasing hydrophobicity in amino acid composition of the PPI interface; the core interface regions are more likely to be recognized with high prediction confidence. The results indicate that the physicochemical complementarity patterns on protein surfaces are important determinants in PPIs, and a substantial portion of the PPI sites can be predicted correctly with the physicochemical complementarity features based on the non-covalent interaction data derived from protein interiors. PMID:22701576
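For readers unfamiliar with the residue-level statistics quoted above, the short sketch below computes them from a confusion matrix. The counts in the example call are hypothetical and are not taken from the paper.

```python
import numpy as np

def residue_metrics(tp, fp, tn, fn):
    """Residue-level metrics of the kind reported in the abstract."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Matthews correlation coefficient
    mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, precision, sensitivity, specificity, mcc

print(residue_metrics(tp=70, fp=60, tn=400, fn=35))   # hypothetical counts
```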
Predicting the spatial extent of liquefaction from geospatial and earthquake specific parameters
Zhu, Jing; Baise, Laurie G.; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Deodatis, George; Ellingwood, Bruce R.; Frangopol, Dan M.
2014-01-01
The spatially extensive damage from the 2010-2011 Christchurch, New Zealand earthquake events is a reminder of the need for liquefaction hazard maps for anticipating damage from future earthquakes. Liquefaction hazard mapping has traditionally relied on detailed geologic mapping and expensive site studies. These traditional techniques are difficult to apply globally for rapid response or loss estimation. We have developed a logistic regression model to predict the probability of liquefaction occurrence in coastal sedimentary areas as a function of simple and globally available geospatial features (e.g., derived from digital elevation models) and standard earthquake-specific intensity data (e.g., peak ground acceleration). Some of the geospatial explanatory variables that we consider are taken from the hydrology community, which has a long tradition of using remotely sensed data as proxies for subsurface parameters. As a result of using high resolution, remotely sensed, and spatially continuous data as proxies for important subsurface parameters such as soil density and soil saturation, and by using a probabilistic modeling framework, our liquefaction model inherently includes the natural spatial variability of liquefaction occurrence and provides an estimate of the spatial extent of liquefaction for a given earthquake. To provide a quantitative check on how the predicted probabilities relate to the spatial extent of liquefaction, we report the frequency of observed liquefaction features within ranges of predicted probabilities. The percentage of liquefaction is the areal extent of observed liquefaction within a given probability contour. The regional model and the results show that there is a strong relationship between the predicted probability and the observed percentage of liquefaction. Visual inspection of the probability contours for each event also indicates that the pattern of liquefaction is well represented by the model.
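The quantitative check described above, the frequency of observed liquefaction within ranges of predicted probability, is essentially a calibration curve. A minimal sketch with synthetic, hypothetical data is given below; the bin edges and the simulated observations are illustrative assumptions.

```python
import numpy as np

def liquefaction_calibration(p_pred, observed, bins=np.linspace(0, 1, 6)):
    """Fraction of cells with observed liquefaction inside each predicted-probability bin.

    p_pred   : predicted liquefaction probability per cell (0-1)
    observed : boolean array, True where liquefaction features were mapped
    """
    idx = np.digitize(p_pred, bins) - 1
    return [observed[idx == b].mean() if np.any(idx == b) else np.nan
            for b in range(len(bins) - 1)]

# Hypothetical example: a well-calibrated model tracks the observed fraction.
rng = np.random.default_rng(2)
p = rng.uniform(0, 1, 10_000)
obs = rng.uniform(0, 1, 10_000) < p
print(liquefaction_calibration(p, obs))
```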
Modeling shape and topology of low-resolution density maps of biological macromolecules.
De-Alarcón, Pedro A; Pascual-Montano, Alberto; Gupta, Amarnath; Carazo, Jose M
2002-01-01
In the present work we develop an efficient way of representing the geometry and topology of volumetric datasets of biological structures from medium to low resolution, aiming at storing and querying them in a database framework. We make use of a new vector quantization algorithm to select the points within the macromolecule that best approximate the probability density function of the original volume data. Connectivity among points is obtained with the use of the alpha shapes theory. This novel data representation has a number of interesting characteristics, such as 1) it allows us to automatically segment and quantify a number of important structural features from low-resolution maps, such as cavities and channels, opening the possibility of querying large collections of maps on the basis of these quantitative structural features; 2) it provides a compact representation in terms of size; 3) it contains a subset of three-dimensional points that optimally quantify the densities of medium resolution data; and 4) a general model of the geometry and topology of the macromolecule (as opposed to a spatially unrelated bunch of voxels) is easily obtained by the use of the alpha shapes theory. PMID:12124252
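A rough sketch of the point-selection step is given below, using k-means as a generic stand-in for the paper's vector quantization algorithm: voxel coordinates are sampled in proportion to density so that the selected points approximate the map's probability density function. The alpha-shapes connectivity step is not reproduced, and the demo volume is hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_density_map(volume, n_points=500, n_samples=50_000, seed=0):
    """Pick representative 3-D points that approximate a density map (sketch only)."""
    rng = np.random.default_rng(seed)
    coords = np.argwhere(volume > 0)                     # voxel indices with density
    p = volume[volume > 0].astype(float)
    p /= p.sum()                                         # sampling probability per voxel
    idx = rng.choice(len(coords), size=n_samples, p=p)
    pts = coords[idx] + rng.uniform(-0.5, 0.5, size=(n_samples, 3))  # jitter within voxels
    return KMeans(n_clusters=n_points, n_init=4, random_state=seed).fit(pts).cluster_centers_

demo = np.random.default_rng(0).random((32, 32, 32))     # hypothetical density map
points = quantize_density_map(demo, n_points=100, n_samples=10_000)
print(points.shape)
```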
Measurement Model Nonlinearity in Estimation of Dynamical Systems
NASA Astrophysics Data System (ADS)
Majji, Manoranjan; Junkins, J. L.; Turner, J. D.
2012-06-01
The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for making the choice of coordinates for estimation of dynamic systems.
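The exact transformation of a probability density under a smooth invertible mapping, which underlies the analysis described above, is the standard change-of-variables rule. Writing y = g(x) for the coordinate transformation, it reads:

```latex
p_{\mathbf{y}}(\mathbf{y}) \;=\; p_{\mathbf{x}}\!\bigl(\mathbf{g}^{-1}(\mathbf{y})\bigr)\,
\left|\det\frac{\partial\,\mathbf{g}^{-1}(\mathbf{y})}{\partial\,\mathbf{y}}\right|,
\qquad \mathbf{y} = \mathbf{g}(\mathbf{x}).
```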
Making Sense of 'Big Data' in Provenance Studies
NASA Astrophysics Data System (ADS)
Vermeesch, P.
2014-12-01
Huge online databases can be 'mined' to reveal previously hidden trends and relationships in society. One could argue that sedimentary geology has entered a similar era of 'Big Data', as modern provenance studies routinely apply multiple proxies to dozens of samples. Just like the Internet, sedimentary geology now requires specialised statistical tools to interpret such large datasets. These can be organised on three levels of progressively higher order: A single sample: The most effective way to reveal the provenance information contained in a representative sample of detrital zircon U-Pb ages are probability density estimators such as histograms and kernel density estimates. The widely popular 'probability density plots' implemented in IsoPlot and AgeDisplay compound analytical uncertainty with geological scatter and are therefore invalid. Several samples: Multi-panel diagrams comprising many detrital age distributions or compositional pie charts quickly become unwieldy and uninterpretable. For example, if there are N samples in a study, then the number of pairwise comparisons between samples increases quadratically as N(N-1)/2. This is simply too much information for the human eye to process. To solve this problem, it is necessary to (a) express the 'distance' between two samples as a simple scalar and (b) combine all N(N-1)/2 such values in a single two-dimensional 'map', grouping similar and pulling apart dissimilar samples. This can be easily achieved using simple statistics-based dissimilarity measures and a standard statistical method called Multidimensional Scaling (MDS). Several methods: Suppose that we use four provenance proxies: bulk petrography, chemistry, heavy minerals and detrital geochronology. This will result in four MDS maps, each of which likely show slightly different trends and patterns. To deal with such cases, it may be useful to use a related technique called 'three way multidimensional scaling'. This results in two graphical outputs: an MDS map, and a map with 'weights' showing to what extent the different provenance proxies influence the horizontal and vertical axis of the MDS map. Thus, detrital data can not only inform the user about the provenance of sediments, but also about the causal relationships between the mineralogy, geochronology and chemistry.
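As a concrete sketch of the multi-sample level described above, the code below builds a pairwise dissimilarity matrix between hypothetical detrital age samples using the Kolmogorov-Smirnov statistic and projects it onto a two-dimensional map with metric MDS. The sample data and the choice of the KS distance are illustrative assumptions, not the specific measures used in any particular study.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.manifold import MDS

# Hypothetical detrital zircon U-Pb age samples (Ma), one array per sample.
rng = np.random.default_rng(3)
samples = [rng.normal(loc, 50, size=100) for loc in (300, 320, 900, 950, 600)]

n = len(samples)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = ks_2samp(samples[i], samples[j]).statistic  # KS distance

coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(coords)   # 2-D 'map': similar samples plot close together
```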
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic, and subjective-error-free determination of the number of pits on large corroded specimens. The VI LCIA synchronously controls the digital microscope image acquisition and its analysis, finally producing a map file containing the coordinates of the detected probable pit-containing zones on the investigated specimen. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) "true/false" SVET check of only the probable zones, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved "true" could then determine the corrosion rate in any of the zones.
Singer, Donald A.; Kouda, Ryoichi
1991-01-01
The FINDER system employs geometric probability, Bayesian statistics, and the normal probability density function to integrate spatial and frequency information to produce a map of probabilities of target centers. Target centers can be mineral deposits, alteration associated with mineral deposits, or any other target that can be represented by a regular shape on a two dimensional map. The size, shape, mean, and standard deviation for each variable are characterized in a control area and the results applied by means of FINDER to the study area. The Kushikino deposit consists of groups of quartz-calcite-adularia veins that produced 55 tonnes of gold and 456 tonnes of silver since 1660. Part of a 6 by 10 km area near Kushikino served as a control area. Within the control area, data plotting, contouring, and cluster analysis were used to identify the barren and mineralized populations. Sodium was found to be depleted in an elliptically shaped area 3.1 by 1.6 km, potassium was both depleted and enriched locally in an elliptically shaped area 3.0 by 1.3 km, and sulfur was enriched in an elliptically shaped area 5.8 by 1.6 km. The potassium, sodium, and sulfur contents from 233 surface rock samples were each used in FINDER to produce probability maps for the 12 by 30 km study area which includes Kushikino. High probability areas for each of the individual variables lie over, and are offset up to 4 km eastward from, the main Kushikino veins. In general, high probability areas identified by FINDER are displaced from the main veins and cover not only the host andesite and the dacite-andesite that is about the same age as the Kushikino mineralization, but also younger sedimentary rocks, andesite, and tuff units east and northeast of Kushikino. The maps also display the same patterns observed near Kushikino, but with somewhat lower probabilities, about 1.5 km east of the old gold prospect, Hajima, and in a broad zone 2.5 km east-west and 1 km north-south, centered 2 km west of the old gold prospect, Yaeyama.
NASA Astrophysics Data System (ADS)
Brus, Dick J.; van den Akker, Jan J. H.
2018-02-01
Although soil compaction is widely recognized as a threat to soil resources, reliable estimates of the acreage of overcompacted soil and of the level of soil compaction parameters are not available. In the Netherlands, data on subsoil compaction were collected at 128 locations selected by stratified random sampling. A map showing the risk of subsoil compaction in five classes was used for stratification. Measurements of bulk density, porosity, clay content and organic matter content were used to compute the relative bulk density and relative porosity, both expressed as a fraction of a threshold value. A subsoil was classified as overcompacted if either the relative bulk density exceeded 1 or the relative porosity was below 1. The sample data were used to estimate the means of the two subsoil compaction parameters and the overcompacted areal fraction. The estimated global means of relative bulk density and relative porosity were 0.946 and 1.090, respectively. The estimated areal fraction of the Netherlands with overcompacted subsoils was 43 %. The estimates per risk map unit showed two groups of map units: a low-risk group (units 1 and 2, covering only 4.6 % of the total area) and a high-risk group (units 3, 4 and 5). The estimated areal fraction of overcompacted subsoil was 0 % in the low-risk group and 47 % in the high-risk group. The map thus contains no information about where overcompacted subsoils occur; this is caused by the poor association of risk map units 3, 4 and 5 with the subsoil compaction parameters and with subsoil overcompaction. This can be explained by the lack of time for recuperation.
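The classification rule and the design-based estimate described above can be written compactly. The sketch below is a hypothetical illustration of classifying sampled locations as overcompacted and combining per-stratum sample means with the areal weights of the risk-map units; the example values are invented.

```python
import numpy as np

def overcompacted(rel_bulk_density, rel_porosity):
    """Rule from the abstract: overcompacted if relative bulk density exceeds 1
    or relative porosity falls below 1 (element-wise for arrays)."""
    return (np.asarray(rel_bulk_density) > 1.0) | (np.asarray(rel_porosity) < 1.0)

def stratified_areal_fraction(indicator, stratum, area_weights):
    """Design-based estimate of an areal fraction from a stratified random sample.

    indicator    : boolean per sampled location (e.g. overcompacted yes/no)
    stratum      : stratum (risk-map unit) label per sampled location
    area_weights : dict mapping stratum label -> areal fraction of that unit
    """
    stratum = np.asarray(stratum)
    return sum(w * indicator[stratum == s].mean() for s, w in area_weights.items())

# Hypothetical example with two strata covering 20 % and 80 % of the area.
flag = overcompacted([1.02, 0.95, 1.10, 0.90], [0.97, 1.05, 0.92, 1.20])
print(stratified_areal_fraction(flag, ["low", "low", "high", "high"],
                                {"low": 0.2, "high": 0.8}))
```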
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashiwagi, Toshiya; Suto, Yasushi; Taruya, Atsushi
The most widely used Galactic extinction map is constructed assuming that the observed far-infrared (FIR) fluxes come entirely from Galactic dust. Following the earlier suggestion by Yahata et al., we consider how the FIR emission of galaxies affects the SFD map. We first compute the surface number density of Sloan Digital Sky Survey (SDSS) DR7 galaxies as a function of the r-band extinction, A_r,SFD. We confirm that the surface densities of those galaxies positively correlate with A_r,SFD for A_r,SFD < 0.1, as first discovered by Yahata et al. for SDSS DR4 galaxies. Next we construct an analytical model to compute the surface density of galaxies, taking into account the contamination of their FIR emission. We adopt a log-normal probability distribution for the ratio of 100 μm and r-band luminosities of each galaxy, y ≡ (νL)_100μm/(νL)_r. Then we search for the mean and rms values of y that fit the observed anomaly, using the analytical model. The values required to reproduce the anomaly are roughly consistent with those measured from the stacking analysis of SDSS galaxies. Due to the limitation of our statistical modeling, we are not yet able to remove the FIR contamination of galaxies from the extinction map. Nevertheless, the agreement with the model prediction suggests that the FIR emission of galaxies is mainly responsible for the observed anomaly. Whereas the corresponding systematic error in the Galactic extinction map is 0.1-1 mmag, it is directly correlated with galaxy clustering and thus needs to be carefully examined in precision cosmology.
A Riemannian framework for orientation distribution function computing.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2009-01-01
Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of fiber directions. The Fisher information metric has been constructed for probability density families in information geometry and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on information geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H_1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on ODF fields is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODFs, our framework is model-free. The estimation of the parameters, i.e. Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
Landslide susceptibility map: from research to application
NASA Astrophysics Data System (ADS)
Fiorucci, Federica; Reichenbach, Paola; Ardizzone, Francesca; Rossi, Mauro; Felicioni, Giulia; Antonini, Guendalina
2014-05-01
A susceptibility map is an important and essential tool in environmental planning, used to evaluate landslide hazard and risk and to support correct and responsible management of the territory. Landslide susceptibility is the likelihood of a landslide occurring in an area on the basis of local terrain conditions. It can be expressed as the probability that any given region will be affected by landslides, i.e. an estimate of "where" landslides are likely to occur. In this work we present two examples of landslide susceptibility maps prepared for the Umbria Region and for the Perugia Municipality. These two maps were produced following official requests from the regional and municipal governments to the Research Institute for the Hydrogeological Protection (CNR-IRPI). The susceptibility map prepared for the Umbria Region represents the development of previous agreements focused on preparing: i) a landslide inventory map that was included in the Urban Territorial Planning (PUT), and ii) a series of maps for the Regional Plan for Multi-risk Prevention. The activities carried out for the Umbria Region were focused on defining and applying methods and techniques for landslide susceptibility zonation. Susceptibility maps were prepared exploiting a multivariate statistical model (linear discriminant analysis) for the five Civil Protection Alert Zones defined in the regional territory. The five resulting maps were tested and validated using the spatial distribution of recent landslide events that occurred in the region. The susceptibility map for the Perugia Municipality was prepared to be integrated as one of the cartographic products in the Municipal development plan (PRG - Piano Regolatore Generale), as required by the existing legislation. At the strategic level, one of the main objectives of the PRG is to establish a framework of knowledge and legal aspects for the management of geo-hydrological risk. At the national level, most of the susceptibility maps prepared for PRGs were, and still are, obtained by qualitatively classifying the territory according to slope classes. For the Perugia Municipality, the susceptibility map was obtained by combining the results of statistical multivariate models with a landslide density map. In particular, in the first phase a susceptibility zonation was prepared using different single and combined multivariate statistical techniques. The zonation was then combined and compared with the landslide density map in order to reclassify the false negatives (portions of the territory classified as stable by the model but affected by slope failures). The resulting semi-quantitative map was classified into five susceptibility classes, and for each class a set of technical regulations was established to manage the territory.
You are lost without a map: Navigating the sea of protein structures.
Lamb, Audrey L; Kappock, T Joseph; Silvaggi, Nicholas R
2015-04-01
X-ray crystal structures propel biochemistry research like no other experimental method, since they answer many questions directly and inspire new hypotheses. Unfortunately, many users of crystallographic models mistake them for actual experimental data. Crystallographic models are interpretations, several steps removed from the experimental measurements, making it difficult for nonspecialists to assess the quality of the underlying data. Crystallographers mainly rely on "global" measures of data and model quality to build models. Robust validation procedures based on global measures now largely ensure that structures in the Protein Data Bank (PDB) are correct. However, global measures do not allow users of crystallographic models to judge the reliability of "local" features in a region of interest. Refinement of a model to fit an electron density map requires interpretation of the data to produce a single "best" overall model. This process requires the inclusion of the most probable conformations in areas of poor density. Users who misunderstand this can be misled, especially in regions of the structure that are mobile, including active sites, surface residues, and especially ligands. This article aims to equip users of macromolecular models with tools to critically assess local model quality. Structure users should always check the agreement of the electron density map and the derived model in all areas of interest, even if the global statistics are good. We provide illustrated examples of interpreted electron density as a guide for those unaccustomed to viewing electron density. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Holland, Katharina; van Gils, Carla H.; Wanders, Johanna OP; Mann, Ritse M.; Karssemeijer, Nico
2016-03-01
The sensitivity of mammograms is low for women with dense breasts, since cancers may be masked by dense tissue. In this study, we investigated methods to identify women with density patterns associated with a high masking risk. Risk measures are derived from volumetric breast density maps. We used the last negative screening mammograms of 93 women who subsequently presented with an interval cancer (IC), and, as controls, 930 randomly selected normal screening exams from women without cancer. Volumetric breast density maps were computed from the mammograms, which provide the dense tissue thickness at each location. These were used to compute absolute and percentage glandular tissue volume. We modeled the masking risk for each pixel location using the absolute and percentage dense tissue thickness and we investigated the effect of taking the cancer location probability distribution (CLPD) into account. For each method, we selected cases with the highest masking measure (by thresholding) and computed the fraction of ICs as a function of the fraction of controls selected. The latter can be interpreted as the negative supplemental screening rate (NSSR). Between the models, when incorporating CLPD, no significant differences were found. In general, the methods performed better when CLPD was included. At higher NSSRs some of the investigated masking measures had a significantly higher performance than volumetric breast density. These measures may therefore serve as an alternative to identify women with a high risk for a masked cancer.
Comparison of SOM point densities based on different criteria.
Kohonen, T
1999-11-15
Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
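For readers who want to reproduce the qualitative behaviour discussed above, a minimal one-dimensional SOM is sketched below. The grid length, neighbourhood schedule and input density are arbitrary choices; the point is only that the trained codebook vectors end up spaced more closely where the input probability density is higher.

```python
import numpy as np

def train_1d_som(data, n_units=20, n_iter=5000, sigma0=3.0, lr0=0.5, seed=0):
    """Minimal 1-D SOM for scalar inputs (sketch); returns the trained codebook."""
    rng = np.random.default_rng(seed)
    codebook = np.sort(rng.choice(data, n_units))        # initialise from the data
    grid = np.arange(n_units)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        winner = np.argmin(np.abs(codebook - x))          # best-matching unit
        frac = t / n_iter
        sigma = sigma0 * (1 - frac) + 0.5 * frac          # shrinking neighbourhood
        lr = lr0 * (1 - frac) + 0.01 * frac               # decaying learning rate
        h = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))
        codebook += lr * h * (x - codebook)
    return codebook

codebook = train_1d_som(np.random.default_rng(1).beta(2, 5, 20_000))
print(np.round(np.sort(codebook), 3))   # denser where the input density is higher
```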
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2016-04-01
There is a large amount of functional genetic data available, which can be used to inform fine-mapping association studies (in diseases with well-characterised disease pathways). Single nucleotide polymorphism (SNP) prioritization via Bayes factors is attractive because prior information can inform the effect size or the prior probability of causal association. This approach requires the specification of the effect size. If the information needed to estimate a priori the probability density for the effect sizes of causal SNPs in a genomic region is not consistent or is not available, then specifying a prior variance for the effect sizes is challenging. We propose both an empirical method to estimate this prior variance and a coherent approach to using SNP-level functional data to inform the prior probability of causal association. Through simulation we show that, when ranking SNPs by our empirical Bayes factor in a fine-mapping study, the causal SNP rank is generally as high as or higher than the rank obtained using Bayes factors with other plausible values of the prior variance. Importantly, we also show that assigning SNP-specific prior probabilities of association based on expert prior functional knowledge of the disease mechanism can lead to improved causal SNP ranks compared to ranking with identical prior probabilities of association. We demonstrate the use of our methods by applying them to the fine mapping of the CASP8 region of chromosome 2 using genotype data from the Collaborative Oncological Gene-Environment Study (COGS) Consortium. The data we analysed included approximately 46,000 breast cancer case and 43,000 healthy control samples. © 2016 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
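The Bayes factors referred to above depend on the prior variance W of the causal effect size. One common asymptotic form (a Wakefield-style approximate Bayes factor) is sketched below, together with the conversion to a posterior probability given a SNP-specific prior probability of causality; the formula and the numbers in the example call are illustrative and are not claimed to be the exact quantities used by the authors.

```python
import numpy as np

def approx_bayes_factor(beta_hat, se, prior_var):
    """Asymptotic Bayes factor (alternative vs. null) for one SNP.

    beta_hat  : estimated log-odds ratio from the association analysis
    se        : its standard error
    prior_var : prior variance W of the causal effect size (the quantity the
                abstract proposes to estimate empirically)
    """
    V, W = se ** 2, prior_var
    z = beta_hat / se
    return np.sqrt(V / (V + W)) * np.exp(0.5 * z ** 2 * W / (V + W))

def posterior_prob(bf, prior_prob):
    """Combine the Bayes factor with a SNP-specific prior probability of causality."""
    odds = bf * prior_prob / (1.0 - prior_prob)
    return odds / (1.0 + odds)

# Hypothetical SNP: beta = 0.15, SE = 0.04, W = 10 * SE^2, prior probability 1 %.
print(posterior_prob(approx_bayes_factor(0.15, 0.04, 0.04 ** 2 * 10), 0.01))
```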
Olea, R.A.; Houseknecht, D.W.; Garrity, C.P.; Cook, T.A.
2011-01-01
Shale gas is a form of continuous unconventional hydrocarbon accumulation whose resource estimation is unfeasible through the inference of pore volume. Under these circumstances, the usual approach is to base the assessment on well productivity through estimated ultimate recovery (EUR). Unconventional resource assessments that consider uncertainty are typically done by applying analytical procedures based on classical statistics theory that ignores geographical location, does not take into account spatial correlation, and assumes independence of EUR from other variables that may enter into the modeling. We formulate a new, more comprehensive approach based on sequential simulation to test methodologies known to be capable of more fully utilizing the data and overcoming unrealistic simplifications. Theoretical requirements demand modeling of EUR as areal density instead of well EUR. The new experimental methodology is illustrated by evaluating a gas play in the Woodford Shale in the Arkoma Basin of Oklahoma. Differently from previous assessments, we used net thickness and vitrinite reflectance as secondary variables correlated to cell EUR. In addition to the traditional probability distribution for undiscovered resources, the new methodology provides maps of EUR density and maps with probabilities to reach any given cell EUR, which are useful to visualize geographical variations in prospectivity.
Kobayashi, Amane; Sekiguchi, Yuki; Takayama, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi
2014-11-17
Coherent X-ray diffraction imaging (CXDI) is a lensless imaging technique that is suitable for visualizing the structures of non-crystalline particles with micrometer to sub-micrometer dimensions from material science and biology. One of the difficulties inherent to CXDI structural analyses is the reconstruction of electron density maps of specimen particles from diffraction patterns because saturated detector pixels and a beam stopper result in missing data in small-angle regions. To overcome this difficulty, the dark-field phase-retrieval (DFPR) method has been proposed. The DFPR method reconstructs electron density maps from diffraction data, which are modified by multiplying Gaussian masks with an observed diffraction pattern in the high-angle regions. In this paper, we incorporated Friedel centrosymmetry for diffraction patterns into the DFPR method to provide a constraint for the phase-retrieval calculation. A set of model simulations demonstrated that this constraint dramatically improved the probability of reconstructing correct electron density maps from diffraction patterns that were missing data in the small-angle region. In addition, the DFPR method with the constraint was applied successfully to experimentally obtained diffraction patterns with significant quantities of missing data. We also discuss this method's limitations with respect to the level of Poisson noise in X-ray detection.
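A minimal sketch of the Friedel-symmetry constraint itself (not of the full dark-field phase-retrieval iteration): on a diffraction pattern sampled on a grid centred on the beam axis (odd dimensions assumed, so point reflection is an exact grid operation), pixels flagged as missing can be filled from their Friedel mates and the remaining pixels symmetrized. The array conventions are assumptions for illustration.

```python
import numpy as np

def friedel_fill(intensity):
    """Enforce Friedel centrosymmetry I(q) = I(-q) on a beam-centred pattern.
    Missing pixels are NaN; they are filled from their Friedel mates where
    available, and pixels measured on both sides are averaged."""
    mate = np.flip(intensity)                     # point reflection through the centre
    both = ~np.isnan(intensity) & ~np.isnan(mate)
    out = np.where(both, 0.5 * (intensity + mate), intensity)
    only_mate = np.isnan(intensity) & ~np.isnan(mate)
    out[only_mate] = mate[only_mate]              # fill missing data from the mate
    return out
```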
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
NASA Astrophysics Data System (ADS)
Rezaei Kh., S.; Bailer-Jones, C. A. L.; Hanson, R. J.; Fouesneau, M.
2017-02-01
We present a non-parametric model for inferring the three-dimensional (3D) distribution of dust density in the Milky Way. Our approach uses the extinction measured towards stars at different locations in the Galaxy at approximately known distances. Each extinction measurement is proportional to the integrated dust density along its line of sight (LoS). Making simple assumptions about the spatial correlation of the dust density, we can infer the most probable 3D distribution of dust across the entire observed region, including along sight lines which were not observed. This is possible because our model employs a Gaussian process to connect all LoS. We demonstrate the capability of our model to capture detailed dust density variations using mock data and simulated data from the Gaia Universe Model Snapshot. We then apply our method to a sample of giant stars observed by APOGEE and Kepler to construct a 3D dust map over a small region of the Galaxy. Owing to our smoothness constraint and its isotropy, we provide one of the first maps which does not show the "fingers of God" effect.
Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S
2008-04-11
A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes.
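The IUR approximation described above rests on the exponential map from the Lie algebras so(3) and se(2) to the groups; a minimal sketch of those two maps (Rodrigues' formula for SO(3) and the closed-form SE(2) exponential), not of the IUR matrices themselves, is given below.

```python
import numpy as np

def so3_exp(omega):
    """Exponential map from so(3) to SO(3) via Rodrigues' formula;
    `omega` is a 3-vector whose 'hat' is the skew-symmetric algebra element."""
    theta = np.linalg.norm(omega)
    K = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    if theta < 1e-12:
        return np.eye(3) + K      # first-order approximation near the identity
    return (np.eye(3)
            + np.sin(theta) / theta * K
            + (1.0 - np.cos(theta)) / theta ** 2 * (K @ K))

def se2_exp(v1, v2, omega):
    """Exponential map from se(2) to SE(2), returned as a 3x3 homogeneous matrix."""
    if abs(omega) < 1e-12:
        R, t = np.eye(2), np.array([v1, v2])
    else:
        R = np.array([[np.cos(omega), -np.sin(omega)],
                      [np.sin(omega),  np.cos(omega)]])
        V = np.array([[np.sin(omega), -(1.0 - np.cos(omega))],
                      [1.0 - np.cos(omega), np.sin(omega)]]) / omega
        t = V @ np.array([v1, v2])
    g = np.eye(3)
    g[:2, :2] = R
    g[:2, 2] = t
    return g
```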
Analysis of interstellar cloud structure based on IRAS images
NASA Technical Reports Server (NTRS)
Scalo, John M.
1992-01-01
The goal of this project was to develop new tools for the analysis of the structure of densely sampled maps of interstellar star-forming regions. A particular emphasis was on the recognition and characterization of nested hierarchical structure and fractal irregularity, and their relation to the level of star formation activity. The panoramic IRAS images provided data with the required range in spatial scale, greater than a factor of 100, and in column density, greater than a factor of 50. In order to construct densely sampled column density maps of star-forming clouds, column density images of four nearby cloud complexes were constructed from IRAS data. The regions have various degrees of star formation activity, and most of them have probably not been affected much by the disruptive effects of young massive stars. The largest region, the Scorpius-Ophiuchus cloud complex, covers about 1000 square degrees (it was subdivided into a few smaller regions for analysis). Much of the work during the early part of the project focused on an 80 square degree region in the core of the Taurus complex, a well-studied region of low-mass star formation.
Weak lensing shear and aperture mass from linear to non-linear scales
NASA Astrophysics Data System (ADS)
Munshi, Dipak; Valageas, Patrick; Barber, Andrew J.
2004-05-01
We describe the predictions for the smoothed weak lensing shear, γs, and aperture mass, Map, of two simple analytical models of the density field: the minimal tree model and the stellar model. Both models give identical results for the statistics of the three-dimensional density contrast smoothed over spherical cells and only differ by the detailed angular dependence of the many-body density correlations. We have shown in previous work that they also yield almost identical results for the probability distribution function (PDF) of the smoothed convergence, κs. We find that the two models give rather close results for both the shear and the positive tail of the aperture mass. However, we note that at small angular scales (θs ≲ 2 arcmin) the tail of the PDF for negative Map shows a strong variation between the two models, and the stellar model actually breaks down for θs ≲ 0.4 arcmin and Map < 0. This shows that the statistics of the aperture mass provides a very precise probe of the detailed structure of the density field, as it is sensitive to both the amplitude and the detailed angular behaviour of the many-body correlations. On the other hand, the minimal tree model shows good agreement with numerical simulations over all the scales and redshifts of interest, while both models provide a good description of the PDF of the smoothed shear components. Therefore, the shear and the aperture mass provide robust and complementary tools to measure the cosmological parameters as well as the detailed statistical properties of the density field.
A Tool for Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.; Guzzetti, Fausto
2014-05-01
Triggers such as earthquakes or heavy rainfall can result in hundreds to thousands of landslides occurring across a region within a short space of time. These landslides can in turn result in blockages across the road network, impacting how people move about a region. Here, we show the development and application of a semi-stochastic model to simulate how landslides intersect with road networks during a triggered landslide event. This was performed by creating 'synthetic' triggered landslide inventory maps and overlaying these with a road network map to identify where road blockages occur. Our landslide-road model has been applied to two regions: (i) the Collazzone basin (79 km2) in Central Italy, where 422 landslides were triggered by rapid snowmelt in January 1997, and (ii) the Oat Mountain quadrangle (155 km2) in California, USA, where 1,350 landslides were triggered by the Northridge Earthquake (M = 6.7) in January 1994. For both regions, detailed landslide inventory maps for the triggered events were available, in addition to maps of landslide susceptibility and road networks of primary, secondary and tertiary roads. To create 'synthetic' landslide inventory maps, landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL. The number of landslide areas selected was based on the observed density of landslides (number of landslides km-2) in the triggered event inventories. Landslide shapes were approximated as ellipses, where the ratio of the major and minor axes varies with AL. Landslides were then dropped over the region semi-stochastically, conditioned by a landslide susceptibility map, resulting in a synthetic landslide inventory map. The originally available landslide susceptibility maps did not take into account susceptibility changes in the immediate vicinity of roads; therefore, our landslide susceptibility map was adjusted to further reduce the susceptibility near each road based on the road level (primary, secondary, tertiary). For each model run, we superimposed the spatial location of landslide drops with the road network, and recorded the number, size and location of road blockages, along with landslides within 50 and 100 m of the different road levels. Network analysis tools available in GRASS GIS were also applied to measure the impact upon the road network in terms of connectivity. The model was run 100 times in a Monte Carlo simulation for each region. Initial results show reasonable agreement between model output and the observed landslide inventories in terms of the number of road blockages. In Collazzone (length of road network = 153 km, landslide density = 5.2 landslides km-2), the median number of modelled road blockages over 100 model runs was 5 (±2.5 standard deviation) compared to the mapped inventory observed number of 5 road blockages. In Northridge (length of road network = 780 km, landslide density = 8.7 landslides km-2), the median number of modelled road blockages over 100 model runs was 108 (±17.2 standard deviation) compared to the mapped inventory observed number of 48 road blockages. As we progress with model development, we believe this semi-stochastic modelling approach could aid civil protection agencies in exploring scenarios of potential road network damage resulting from landslide-triggering events of different magnitudes.
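A hedged sketch of the inventory-generation step described above: landslide areas are drawn from a three-parameter inverse-gamma PDF and landslides are placed with probability proportional to a gridded susceptibility map. The parameter values and the Poisson draw for the landslide count are illustrative choices from the general landslide-area literature, not those calibrated in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Three-parameter inverse-gamma PDF for landslide areas A_L (km^2);
# shape rho, scale a, location s are illustrative values only.
rho, a, s = 1.4, 1.28e-3, -1.32e-4
area_pdf = stats.invgamma(rho, loc=s, scale=a)

def synthetic_inventory(region_area_km2, landslide_density, susceptibility):
    """Draw one synthetic triggered-landslide inventory: sample the number of
    landslides from the observed density (Poisson draw is illustrative),
    sample each area A_L from the inverse-gamma PDF, and place each landslide
    semi-stochastically with probability proportional to the susceptibility grid."""
    n = rng.poisson(landslide_density * region_area_km2)
    areas = area_pdf.rvs(size=n, random_state=rng)
    p = susceptibility / susceptibility.sum()
    cells = rng.choice(susceptibility.size, size=n, p=p.ravel())
    rows, cols = np.unravel_index(cells, susceptibility.shape)
    return areas, rows, cols
```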
Investigations of turbulent scalar fields using probability density function approach
NASA Technical Reports Server (NTRS)
Gao, Feng
1991-01-01
Scalar fields undergoing random advection have attracted much attention from researchers in both the theoretical and practical sectors. Research interest spans from the study of the small scale structures of turbulent scalar fields to the modeling and simulations of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially for those which involve chemical reactions. It has been argued that a one-point, joint PDF approach is the one to choose from among many simulation and closure methods for turbulent combustion and chemically reacting flows based on its practical feasibility in the foreseeable future for multiple reactants. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient which represents the roles of both scalar and scalar diffusion is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is to study the mapping closure method that has been recently proposed to deal with the PDF's in turbulent fields. This method seems to have captured the physics correctly when applied to diffusion problems. However, if the turbulent stretching is included, the amplitude mapping has to be supplemented by either adjusting the parameters representing turbulent stretching at each time step or by introducing the coordinate mapping. This technique is still under development and seems to be quite promising. The final objective of this project is to understand some fundamental properties of the turbulent scalar fields and to develop practical numerical schemes that are capable of handling turbulent reacting flows.
Lee, Won June; Kim, Young Kook; Jeoung, Jin Wook; Park, Ki Ho
2017-12-01
To determine the usefulness of swept-source optical coherence tomography (SS-OCT) probability maps in detecting locations with significant reduction in visual field (VF) sensitivity or predicting future VF changes, in patients with classically defined preperimetric glaucoma (PPG). In this longitudinal study, 43 eyes of 43 PPG patients that were followed up every 6 months for at least 2 years were analyzed. The patients underwent wide-field SS-OCT scanning and standard automated perimetry (SAP) at the time of enrollment. With this wide-scan protocol, probability maps originating from the corresponding thickness map and overlapped with SAP VF test points could be generated. We evaluated the vulnerable VF points with SS-OCT probability maps as well as the prevalence of locations with significant VF reduction or subsequent VF changes observed in the corresponding damaged areas of the probability maps. The vulnerable VF points were shown in superior and inferior arcuate patterns near the central fixation. In 19 of 43 PPG eyes (44.2%), significant reduction in baseline VF was detected within the areas of structural change on the SS-OCT probability maps. In 16 of 43 PPG eyes (37.2%), subsequent VF changes within the areas of SS-OCT probability map change were observed over the course of the follow-up. Structural changes on SS-OCT probability maps could detect or predict VF changes using SAP in a considerable number of PPG eyes. Careful comparison of probability maps with SAP results could be useful in diagnosing and monitoring PPG patients in the clinical setting.
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding: site signal detection thresholds, type of solution algorithm used, and range attenuation; to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic, and subjective-error-free determination of the number of pits on large corroded specimens. The VI LCIA synchronously controls digital microscope image acquisition and image analysis, finally producing a map file containing the coordinates of the detected zones on the investigated specimen that probably contain pits. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) “true/false” SVET check of only the probable zones, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved “true” could then determine the corrosion rate in any of these zones. PMID:23691434
SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.
Cao, Yuan; He, Haibo; Man, Hong
2012-08-01
In this paper, we propose a novel method SOMKE, for kernel density estimation (KDE) over data streams based on sequences of self-organizing map (SOM). In many stream data mining applications, the traditional KDE methods are infeasible because of the high computational cost, processing time, and memory requirement. To reduce the time and space complexity, we propose a SOM structure in this paper to obtain well-defined data clusters to estimate the underlying probability distributions of incoming data streams. The main idea of this paper is to build a series of SOMs over the data streams via two operations, that is, creating and merging the SOM sequences. The creation phase produces the SOM sequence entries for windows of the data, which obtains clustering information of the incoming data streams. The size of the SOM sequences can be further reduced by combining the consecutive entries in the sequence based on the measure of Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
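The following sketch illustrates the two SOMKE ingredients named above in a simplified one-dimensional setting: a kernel density estimate built from SOM prototype vectors weighted by their hit counts, and a discretized Kullback-Leibler divergence that can be used to decide whether consecutive SOM-sequence entries are similar enough to merge. The function names and the 1-D restriction are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def weighted_kde(x, prototypes, hits, bandwidth):
    """Density estimate at points `x` from SOM prototypes and their hit
    counts: each prototype contributes a Gaussian kernel weighted by the
    fraction of window samples it won."""
    w = hits / hits.sum()
    x = np.atleast_1d(x)[:, None]
    k = np.exp(-0.5 * ((x - prototypes[None, :]) / bandwidth) ** 2)
    k /= bandwidth * np.sqrt(2.0 * np.pi)
    return k @ w

def kl_between_summaries(p_grid, q_grid, eps=1e-12):
    """Discretized Kullback-Leibler divergence between two density summaries
    evaluated on a common grid; consecutive SOM-sequence entries can be
    merged when this divergence falls below a chosen tolerance."""
    p = p_grid / p_grid.sum()
    q = q_grid / q_grid.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))
```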
NASA Technical Reports Server (NTRS)
Lambert, Winnie; Sharp, David; Spratt, Scott; Volkmer, Matthew
2005-01-01
Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in central Florida, especially during the warm season months of May-September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC (0700 AM EST) each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Currently, the forecasters create each map manually, starting with a blank map. To improve the efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent is to increase consistency between forecasters while enabling them to focus on the mesoscale detail of the forecast, ultimately benefiting the end-users of the product. Several studies took place at Florida State University (FSU) and NWS Tallahassee (TAE) in which daily flow regimes were created using Florida 1200 UTC synoptic soundings and CG strike densities from National Lightning Detection Network (NLDN) data. The densities were created on a 2.5 km x 2.5 km grid for every hour of every day during the warm seasons in the years 1989-2004. The grids encompass an area that includes the entire state of Florida and adjacent Atlantic and Gulf of Mexico waters. Personnel at the two organizations provided these data and supporting software for the work performed by the AMU. The densities were first stratified by flow regime, then by time in 1-, 3-, 6-, 12-, and 24-hour increments while maintaining the 2.5 km x 2.5 km grid resolution. A CG frequency of occurrence was calculated for each stratification and grid box by counting the number of days with lightning and dividing by the total number of days in the data set. New CG strike densities were calculated for each stratification and grid box by summing the strike values over all warm seasons, then normalizing the summed values by the number of lightning days. This makes the densities conditional on whether lightning occurred. The frequency climatology values will be used by forecasters as proxy inputs for lightning probability, while the density climatology values will be used for CG amount.
In addition to the benefits outlined above, these climatologies will provide improved temporal and spatial resolution, expansion of the lightning threat area to include adjacent coastal waters, and the potential to extend the forecast to include the day-2 period. This presentation will describe the lightning threat index map, discuss the work done to create the maps initialized with climatological guidance, and show examples of the climatological CG lightning densities and frequencies of occurrence based on flow regime.
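A minimal sketch of the climatology computation described above for one flow-regime/time-increment stratification; whether the lightning-day normalization is applied per grid box or over the whole domain is an assumption here (per grid box below), as the abstract does not pin it down.

```python
import numpy as np

def climatology(strike_counts):
    """First-guess gridded fields from CG strike counts for one stratification.
    `strike_counts` has shape (n_days, n_rows, n_cols). Returns the frequency
    of occurrence (proxy for lightning probability) and the strike density
    conditional on lightning having occurred (proxy for CG amount)."""
    n_days = strike_counts.shape[0]
    lightning_days = (strike_counts > 0).sum(axis=0)       # per grid box
    frequency = lightning_days / n_days
    # Summed strikes normalized by the number of lightning days (per grid box,
    # an assumption about the stratification detail).
    density = strike_counts.sum(axis=0) / np.maximum(lightning_days, 1)
    return frequency, density
```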
2MASS wide-field extinction maps. V. Corona Australis
NASA Astrophysics Data System (ADS)
Alves, João; Lombardi, Marco; Lada, Charles J.
2014-05-01
We present a near-infrared extinction map of a large region (~870 deg2) covering the isolated Corona Australis complex of molecular clouds. We reach a 1-σ error of 0.02 mag in the K-band extinction with a resolution of 3 arcmin over the entire map. We find that the Corona Australis cloud is about three times as large as revealed by previous CO and dust emission surveys. The cloud consists of a 45 pc long complex of filamentary structure from the well-known star-forming western end (the head, N ≥ 1023 cm-2) to the diffuse eastern end (the tail, N ≤ 1021 cm-2). Remarkably, about two thirds of the complex, both in size and mass, lie beneath AV ~ 1 mag. We find that the probability density function (PDF) of the cloud cannot be described by a single log-normal function. Similar to prior studies, we find a significant excess at high column densities, but a log-normal + power-law tail fit does not work well at low column densities. We show that at low column densities near the peak of the observed PDF, both the amplitude and shape of the PDF are dominated by noise in the extinction measurements, making it impractical to derive the intrinsic cloud PDF below AK < 0.15 mag. Above AK ~ 0.15 mag, essentially the molecular component of the cloud, the PDF appears to be best described by a power law with index -3, but could also be described as the tail of a broad and relatively low-amplitude log-normal PDF that peaks at very low column densities. FITS files of the extinction maps are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A18
NASA Astrophysics Data System (ADS)
Bura, E.; Zhmurov, A.; Barsegov, V.
2009-01-01
Dynamic force spectroscopy and steered molecular simulations have become powerful tools for analyzing the mechanical properties of proteins, and the strength of protein-protein complexes and aggregates. Probability density functions of the unfolding forces and unfolding times for proteins, and rupture forces and bond lifetimes for protein-protein complexes allow quantification of the forced unfolding and unbinding transitions, and mapping the biomolecular free energy landscape. The inference of the unknown probability distribution functions from the experimental and simulated forced unfolding and unbinding data, as well as the assessment of analytically tractable models of the protein unfolding and unbinding requires the use of a bandwidth. The choice of this quantity is typically subjective as it draws heavily on the investigator's intuition and past experience. We describe several approaches for selecting the "optimal bandwidth" for nonparametric density estimators, such as the traditionally used histogram and the more advanced kernel density estimators. The performance of these methods is tested on unimodal and multimodal skewed, long-tailed distributed data, as typically observed in force spectroscopy experiments and in molecular pulling simulations. The results of these studies can serve as a guideline for selecting the optimal bandwidth to resolve the underlying distributions from the forced unfolding and unbinding data for proteins.
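Two of the standard bandwidth selectors discussed in this line of work, sketched for the one-dimensional Gaussian-kernel case: Silverman's rule of thumb and a leave-one-out likelihood that can be maximized over a grid of candidate bandwidths. This is generic KDE machinery, not the authors' specific recommendation.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth; a common starting point, though it
    can oversmooth the multimodal, skewed, long-tailed samples typical of
    forced unfolding and unbinding data."""
    x = np.asarray(x)
    n = x.size
    sigma = min(x.std(ddof=1),
                (np.percentile(x, 75) - np.percentile(x, 25)) / 1.349)
    return 0.9 * sigma * n ** (-1.0 / 5.0)

def loo_log_likelihood(x, h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h;
    maximizing this over a grid of h values is one data-driven way of picking
    the 'optimal bandwidth'."""
    x = np.asarray(x)
    n = x.size
    d = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * d ** 2) / (h * np.sqrt(2.0 * np.pi))
    np.fill_diagonal(k, 0.0)                     # leave the point itself out
    dens = k.sum(axis=1) / (n - 1)
    return np.log(np.clip(dens, 1e-300, None)).sum()
```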
Imaging electron wave functions inside open quantum rings.
Martins, F; Hackens, B; Pala, M G; Ouisse, T; Sellier, H; Wallart, X; Bollaert, S; Cappy, A; Chevrier, J; Bayot, V; Huant, S
2007-09-28
Combining scanning gate microscopy (SGM) experiments and simulations, we demonstrate low temperature imaging of the electron probability density |Psi|(2)(x,y) in embedded mesoscopic quantum rings. The tip-induced conductance modulations share the same temperature dependence as the Aharonov-Bohm effect, indicating that they originate from electron wave function interferences. Simulations of both |Psi|(2)(x,y) and SGM conductance maps reproduce the main experimental observations and link fringes in SGM images to |Psi|(2)(x,y).
2011-03-01
capability of FTS to estimate plume effluent concentrations by comparing intrusive measurements of aircraft engine exhaust with those from an FTS. A ... turbojet engine. Temporal averaging was used to reduce SCAs in the spectra, and spatial maps of temperature and concentration were generated. The time ... density function (PDF) is defined as the derivative of the CDF, and describes the probability of obtaining a given value of X. For a normally ...
Yang, Qiang; Jung, Hun Bok; Culbertson, Charles W.; Marvinney, Robert G.; Loiselle, Marc C.; Locke, Daniel B.; Cheek, Heidi; Thibodeau, Hilary; Zheng, Yan
2009-01-01
In New England, groundwater arsenic occurrence has been linked to bedrock geology on regional scales. To ascertain and quantify this linkage at intermediate (100-101 km) scales, 790 groundwater samples from fractured bedrock aquifers in the greater Augusta, Maine area are analyzed. 31% of the sampled wells have arsenic >10 μg/L. The probability of [As] exceeding 10 μg/L mapped by indicator kriging is highest in Silurian pelite-sandstone and pelite-limestone units (~40%). This probability differs significantly (p<0.001) from those in the Silurian-Ordovician sandstone (24%), the Devonian granite (15%) and the Ordovician-Cambrian volcanic rocks (9%). The spatial pattern of groundwater arsenic distribution resembles the bedrock map. Thus, bedrock geology is associated with arsenic occurrence in fractured bedrock aquifers of the study area at intermediate scales relevant to water resources planning. The arsenic exceedance rate for each rock unit is considered robust because low, medium and high arsenic occurrences in 4 cluster areas (3-20 km2) with a low sampling density of 1-6 wells per km2 are comparable to those with a greater density of 5-42 wells per km2. About 12,000 people (21% of the population) in the greater Augusta area (~1135 km2) are at risk of exposure to >10 μg/L arsenic in groundwater. PMID:19475939
NASA Astrophysics Data System (ADS)
Gulyaeva, T. L.; Arikan, F.; Stanislawska, I.
2014-11-01
The ionospheric W index makes it possible to distinguish the state of the ionosphere and plasmasphere from quiet conditions (W = 0 or ±1) to intense storm (W = ±4), grading plasma density enhancements (positive phase) or plasma density depletions (negative phase) relative to the quiet ionosphere. The global W index maps are produced for the period 1999-2014 from Global Ionospheric Maps of Total Electron Content, GIM-TEC, designed by the Jet Propulsion Laboratory, converted from the geographic frame (-87.5:2.5:87.5° in latitude, -180:5:180° in longitude) to the geomagnetic frame (-85:5:85° in magnetic latitude, -180:5:180° in magnetic longitude). The probability of occurrence of a planetary ionosphere storm during a magnetic disturbance storm time (Dst) event is evaluated with superposed epoch analysis for 77 intense storms (Dst ≤ -100 nT) and 230 moderate storms (-100 < Dst ≤ -50 nT), with the start time, t0, defined at the Dst storm main phase onset. It is found that the intensity of the negative storm, iW-, exceeds the intensity of the positive storm, iW+, by a factor of 1.5-2. Empirical formulae for iW+ and iW- in terms of peak Dst are deduced, exhibiting opposite trends in the relation between the intensity of the ionosphere-plasmasphere storm and the intensity of the Dst storm.
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
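For a single skewed marginal, the core operation can be sketched with SciPy's Box-Cox transform, which fits the transformation parameter by maximum likelihood, followed by an a posteriori normality check; the multivariate generalization and the credible-region diagnostics of the paper are not reproduced here, and the sample below is synthetic.

```python
import numpy as np
from scipy import stats

# Illustrative one-dimensional marginal of an MCMC sample (synthetic data);
# the paper generalizes this to multivariate posteriors.
rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=1.5, size=5000)   # skewed, unimodal, positive

# Maximum-likelihood Box-Cox parameter and the transformed (more Gaussian) sample
transformed, lam = stats.boxcox(sample)
print(f"lambda = {lam:.3f}")

# A posteriori quality check: a normality test on the transformed sample
stat, pval = stats.normaltest(transformed)
print(f"D'Agostino-Pearson p-value after transformation: {pval:.3f}")
```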
Picturing Data With Uncertainty
NASA Technical Reports Server (NTRS)
Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex
2004-01-01
NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location, there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data as multi-valued data since they consist of a collection of values about a single variable. Thus, multi-valued data represent both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells, such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column or other user-selectable straight transect from the 2D domain. First, at each grid cell in a given slice (row, column or transect), we compute a smooth density estimate from the underlying data. Such a density estimate of the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then, the collection of PDFs along a given slice is presented vertically above the slice and forms a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very clearly, since peaks represent the modes (or bumps) in the PDFs. We have defined roughness as the number of peaks in the distribution. Roughness is another useful piece of summary information for multimodal distributions. The uncertainty of the multi-valued data can also be interpreted from the number of peaks and the widths of the peaks shown by the PDF walls.
NASA Astrophysics Data System (ADS)
Cannon, Alex J.
2018-01-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
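A compact sketch of the N-dimensional PDF transfer idea on which MBCn is built: alternate random orthogonal rotations with univariate quantile mapping of each rotated coordinate. The full MBCn additionally preserves model-projected changes in quantiles and handles a separate projection period; both are omitted here, the function names are hypothetical, and model and observations are assumed to cover the same calibration period.

```python
import numpy as np
from scipy import stats

def quantile_map_1d(model, obs):
    """Map each model value onto the observed distribution by matching
    empirical quantiles (simple univariate quantile mapping)."""
    ranks = stats.rankdata(model) / (len(model) + 1)
    return np.quantile(obs, ranks)

def mbcn_like(model, obs, n_iter=30, rng=None):
    """Sketch of the N-dimensional PDF transfer underlying MBCn: repeatedly
    rotate both datasets with a common random orthogonal matrix, quantile-map
    each rotated coordinate of the model onto the observations, and rotate
    back. Columns are climate variables, rows are time steps."""
    rng = np.random.default_rng(rng)
    x = model.copy()
    for _ in range(n_iter):
        q, _ = np.linalg.qr(rng.standard_normal((x.shape[1], x.shape[1])))
        x_rot, obs_rot = x @ q, obs @ q
        for j in range(x.shape[1]):
            x_rot[:, j] = quantile_map_1d(x_rot[:, j], obs_rot[:, j])
        x = x_rot @ q.T
    return x
```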
A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area
NASA Astrophysics Data System (ADS)
Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto
2015-04-01
In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. A typical PHA for tephra dispersal requires the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions into a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. The PHA then results from combining simulations considering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (e.g., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of such natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological input values are chosen by using a stratified sampling method. This procedure allows hazard to be quantified without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of the tephra fall PHA and its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei, in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results show that a PHA accounting for the whole natural variability is broadly consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but also shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but is of smaller extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.
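The stratified sampling of volcanological inputs can be sketched with a Latin hypercube over the parameter PDFs; the distributions and parameter values below are placeholders, not those adopted for Vesuvius or Campi Flegrei.

```python
import numpy as np
from scipy.stats import qmc, lognorm, uniform

# Stratified (Latin hypercube) sampling of volcanological inputs, so that the
# within-size-class variability is explored rather than a single scenario.
sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=1000)                                  # uniform [0,1)^3 strata

# Placeholder marginal PDFs for three inputs; invert each stratum through
# the corresponding quantile function.
erupted_mass = lognorm(s=1.0, scale=1e11).ppf(u[:, 0])      # kg, illustrative
column_height = uniform(loc=5.0, scale=25.0).ppf(u[:, 1])   # km, illustrative
duration_hours = uniform(loc=1.0, scale=11.0).ppf(u[:, 2])  # h, illustrative
```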
Active Free Surface Density Maps
NASA Astrophysics Data System (ADS)
Çelen, S.
2016-10-01
Percolation theory has been applied to many physical problems since its establishment in 1957 by Broadbent and Hammersley. It can be used to model complex systems such as bone remodeling. The volume fraction method has been adopted in several algorithms in the literature. However, different rates of osteoporosis can be observed for microstructures that have the same mass density, mechanical stimuli, hormonal stimuli, and nutrition. Thus it was emphasized that bone may have identical porosity but different specific surfaces. The active free surface density of bone refers to the total area used for its effective free surface. The purpose of this manuscript is to consolidate a mathematical approach, which can be called "active free surface density maps", for different surface patterns and to derive their formulations. Active free surface density ratios were calculated for different Archimedean lattice models according to the Helmholtz free energy and compared with their site and bond percolation thresholds from previous studies to derive their potential probability for bone remodeling.
Applying quantum principles to psychology
NASA Astrophysics Data System (ADS)
Busemeyer, Jerome R.; Wang, Zheng; Khrennikov, Andrei; Basieva, Irina
2014-12-01
This article starts out with a detailed example illustrating the utility of applying quantum probability to psychology. Then it describes several alternative mathematical methods for mapping fundamental quantum concepts (such as state preparation, measurement, state evolution) to fundamental psychological concepts (such as stimulus, response, information processing). For state preparation, we consider both pure states and densities with mixtures. For measurement, we consider projective measurements and positive operator valued measurements. The advantages and disadvantages of each method with respect to applications in psychology are discussed.
Statistical density modification using local pattern matching
Terwilliger, Thomas C.
2007-01-23
A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided and standard templates of electron density are created from the selected experimental and model electron density maps by clustering and averaging values of electron density in a spherical region about each point in a grid that defines each selected known experimental and model electron density maps. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each of the spherical regions to a correlation coefficient of a density surrounding each corresponding grid point in each one of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.
NASA Astrophysics Data System (ADS)
Vio, R.; Andreani, P.
2016-05-01
The reliable detection of weak signals is a critical issue in many astronomical contexts, with consequences for determining number counts and luminosity functions, and also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated, and detections are claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It provides a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
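A one-dimensional sketch of the contrast drawn above: the matched-filter output normalized to unit noise variance, and the extraction of its local peaks, whose distribution (rather than the single-point Gaussian tail at a preselected position) should set the false-detection probability when the source position is unknown. White noise and a known template are assumed.

```python
import numpy as np

def matched_filter(data, template, noise_sigma):
    """Matched-filter a 1-D data stream against a known signal template and
    return the output normalized to unit noise variance (a significance map),
    assuming white Gaussian noise of standard deviation `noise_sigma`."""
    template = np.asarray(template) / np.sqrt(np.sum(np.asarray(template) ** 2))
    mf = np.correlate(data, template, mode="same")
    return mf / noise_sigma

def peak_significances(mf_output):
    """Significances of the local maxima of the matched-filter output; when
    the signal position is unknown, the false-detection probability should be
    assessed from the distribution of these peaks."""
    m = np.asarray(mf_output)
    is_peak = (m[1:-1] > m[:-2]) & (m[1:-1] > m[2:])
    return m[1:-1][is_peak]
```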
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D, or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian and an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be derived efficiently. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the singly truncated case or the bounded-variable least-squares algorithm for the doubly truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case for interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the required computing power is greatly reduced. Second, unlike the MCMC-based Bayesian approach, the marginal pdfs, means, variances, and covariances are obtained independently of one another. Third, the probability density and cumulative distribution functions can be obtained at any density of points. Finally, determining the MAP is extremely fast.
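The MAP computation described above reduces, under a zero-mean untruncated Gaussian prior and unit data covariance (simplifying assumptions made here for brevity), to an augmented least-squares problem solved with non-negative or bounded-variable least squares; the function below is an illustrative sketch, not the paper's code.

```python
import numpy as np
from scipy import optimize

def map_slip(G, d, prior_cov, bounds=None):
    """MAP slip under a truncated Gaussian prior. `G` maps slip to surface
    displacement, `d` is the geodetic data vector, and `prior_cov` is the
    untruncated Gaussian prior covariance on slip (zero prior mean and unit
    data covariance are assumed for brevity)."""
    # Cholesky factor L with L L^T = C^-1, so the prior term is ||L^T m||^2.
    L = np.linalg.cholesky(np.linalg.inv(prior_cov))
    A = np.vstack([G, L.T])                      # augmented system: data misfit + prior
    b = np.concatenate([d, np.zeros(G.shape[1])])
    if bounds is None:
        slip, _ = optimize.nnls(A, b)            # single truncation: slip >= 0
    else:
        slip = optimize.lsq_linear(A, b, bounds=bounds).x   # double truncation
    return slip
```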
Rome: sinkhole events and network of underground cavities (Italy)
NASA Astrophysics Data System (ADS)
Nisio, Stefania; Ciotoli, Giancarlo
2016-04-01
The anthropogenic sinkholes in the city of Rome are closely linked to the network of underground cavities produced by human activities over more than two thousand years of history. Over the past fifteen years, the increased frequency of intense rainfall events has favored sinkhole formation. Assessing the risk posed by anthropogenic sinkholes is difficult. However, the susceptibility of the territory to sinkholes can be more easily determined as the probability that an event may occur in a given space, with unique geological-morphological characteristics, and over an infinite time. A sinkhole susceptibility map of the Rome territory, up to the ring road, has been constructed by using the Geographically Weighted Regression technique and geostatistics. The spatial regression model includes the analysis of more than 2700 anthropogenic sinkholes (recorded from 1875 to 2015), as well as geological, morphological, hydrological and predisposing anthropogenic characteristics of the study area. The numerous available data (underground cavities, ancient quarry entrances, bunkers, etc.) facilitate the creation of a series of maps. The cavity density map, updated to 2015, shows that more than 20 km2 of the Roman territory are affected by underground cavities. The census of sinkholes (over 2700) shows that over 30 km2 have been affected by sinkholes. The final susceptibility map highlights that, inside the Ring Road, about 40 km2 of the territory (about 11%) have a very high probability of triggering a sinkhole event. The susceptibility map was also compared with ground subsidence data (InSAR) to obtain a predictive model.
Domestic well locations and populations served in the contiguous U.S.: 1990
Johnson, Tyler; Belitz, Kenneth
2017-01-01
We estimate the location and population served by domestic wells in the contiguous United States in two ways: (1) the “Block Group Method” or BGM, uses data from the 1990 census, and (2) the “Road-Enhanced Method” or REM, refines the locations by using a buffer expansion and shrinkage technique along roadways to define areas where domestic wells exist. The fundamental assumption is that houses (and therefore domestic wells) are located near a named road. The results are presented as two nationally-consistent domestic-well population datasets.While both methods can be considered valid, the REM map is more precise in locating domestic wells; the REM map has a smaller amount of spatial bias (Type 1 and Type 2 errors nearly equal vs biased in Type 1), total error (10.9% vs 23.7%), and distance error (2.0 km vs 2.7 km), when comparing the REM and BGM maps to a calibration map in California. However, the BGM map is more inclusive of all potential locations for domestic wells. Independent domestic well datasets from the USGS, and the States of MN, NV, and TX show that the BGM captures about 5 to 10% more wells than the REM.One key difference between the BGM and the REM is the mapping of low density areas. The REM reduces areas mapped as low density by 57%, concentrating populations into denser regions. Therefore, if one is trying to capture all of the potential areas of domestic-well usage, then the BGM map may be more applicable. If location is more imperative, then the REM map is better at identifying areas of the landscape with the highest probability of finding a domestic well. Depending on the purpose of a study, a combination of both maps can be used.
Weyer-Menkhoff, I; Thrun, M C; Lötsch, J
2018-05-01
Pain in response to noxious cold has a complex molecular background probably involving several types of sensors. A recent observation has been the multimodal distribution of human cold pain thresholds. This study aimed at analysing the reproducibility and stability of this observation and at further exploring data patterns supporting a complex background. Pain thresholds to noxious cold stimuli (range 32-0 °C, tonic: temperature decrease -1 °C/s, phasic: temperature decrease -8 °C/s) were acquired in 148 healthy volunteers. The probability density distribution was analysed using machine-learning-derived methods implemented as Gaussian mixture modeling (GMM), emergent self-organizing maps and self-organizing swarms of data agents. The probability density function of pain responses was trimodal (mean thresholds at 25.9, 18.4 and 8.0 °C for tonic and 24.5, 18.1 and 7.5 °C for phasic stimuli). Subjects' association with Gaussian modes was consistent between both types of stimuli (weighted Cohen's κ = 0.91). Patterns emerging in self-organizing neuronal maps and swarms could be associated with different trends towards decreasing cold pain sensitivity in different Gaussian modes. On self-organizing maps, the third Gaussian mode emerged as particularly distinct. Thresholds at, roughly, 25 and 18 °C agree with known working temperatures of TRPM8 and TRPA1 ion channels, respectively, and hint at relative local dominance of either channel in respective subjects. Data patterns suggest involvement of further distinct mechanisms in cold pain perception at lower temperatures. Findings support data science approaches to identify biologically plausible hints at complex molecular mechanisms underlying human pain phenotypes. Sensitivity to pain is heterogeneous. Data-driven computational research approaches allow the identification of subgroups of subjects with a distinct pattern of sensitivity to cold stimuli. The subgroups are reproducible with different types of noxious cold stimuli. Subgroups show patterns that hint at distinct and inter-individually different types of underlying molecular background. © 2018 European Pain Federation - EFIC®.
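A minimal sketch of the mixture-modelling step described above, assuming synthetic threshold data and scikit-learn's GaussianMixture; the mode locations and sample sizes are placeholders rather than the study data, and the component count is chosen by BIC rather than the paper's exact procedure.

```python
# Sketch: fitting a Gaussian mixture to cold pain thresholds (synthetic data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic trimodal sample roughly centred on the reported modes (25.9, 18.4, 8.0 degC)
thresholds = np.concatenate([
    rng.normal(25.9, 2.0, 70),
    rng.normal(18.4, 2.5, 50),
    rng.normal(8.0, 3.0, 28),
]).reshape(-1, 1)

models = [GaussianMixture(n_components=k, random_state=0).fit(thresholds)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(thresholds))   # model order by BIC
print("components chosen by BIC:", best.n_components)
print("mode means (degC):", np.sort(best.means_.ravel())[::-1])
# Subjects can then be assigned to modes via best.predict(thresholds)
```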
Automatically Generated Vegetation Density Maps with LiDAR Survey for Orienteering Purpose
NASA Astrophysics Data System (ADS)
Petrovič, Dušan
2018-05-01
The focus of our research was to automatically generate the most adequate vegetation density maps for orienteering purposes. The Karttapullautin application, which requires LiDAR data as input, was used for the automated generation of vegetation density maps. A part of the orienteering map of the Kazlje-Tomaj area was used to compare the graphical display of vegetation density. With different parameter settings in the Karttapullautin application we changed how vegetation density was presented on the automatically generated map and tried to match it as closely as possible to the orienteering map of Kazlje-Tomaj. By comparing the resulting vegetation density maps, we also propose the most suitable parameter settings for automatically generating maps of other areas.
Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently
2013-01-01
Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.
Probability mapping of scarred myocardium using texture and intensity features in CMR images
2013-01-01
Background The myocardium exhibits a heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, the Late Gadolinium (LG) contrast agent enhances the intensity of the scarred area in the myocardium. Methods In this paper, we propose a probability mapping technique using texture and intensity features to describe the heterogeneous nature of the scarred myocardium in CMR images after MI. Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated using a probability function based on Bayes rule. Any set of features can be used in the probability function. Results In the present study, we demonstrate the use of two different types of features: one based on the mean pixel intensity and the other on the underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using each feature are presented. We hypothesize that probability mapping of the myocardium offers an alternative visualization, possibly showing details of physiological significance that are difficult to detect visually in the original CMR image. Conclusion The probability mapping obtained from the two features provides a way to define different cardiac segments and to identify areas of diagnostic importance in the myocardium (such as core and border areas of scarred myocardium). PMID:24053280
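The Bayes-rule probability function can be illustrated with a single intensity feature. The sketch below assumes Gaussian class-conditional densities and an illustrative prior; the parameter values and the toy image are not from the paper.

```python
# Minimal sketch of a Bayes-rule probability map from one pixel feature (intensity).
import numpy as np
from scipy.stats import norm

def scar_probability_map(img, mu_scar=0.75, sd_scar=0.10,
                         mu_normal=0.35, sd_normal=0.10, prior_scar=0.3):
    """P(scar | intensity) for every pixel via Bayes rule (assumed Gaussian classes)."""
    like_scar = norm.pdf(img, mu_scar, sd_scar) * prior_scar
    like_norm = norm.pdf(img, mu_normal, sd_normal) * (1.0 - prior_scar)
    return like_scar / (like_scar + like_norm + 1e-12)

# Example: a toy LGE-CMR slice with intensities in [0, 1]
rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(8, 8))
pmap = scar_probability_map(img)
print(pmap.round(2))   # values near 1 = probably scar, near 0 = probably healthy
```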
Studying the effects of fuel treatment based on burn probability on a boreal forest landscape.
Liu, Zhihua; Yang, Jian; He, Hong S
2013-01-30
Fuel treatment is assumed to be a primary tactic to mitigate intense and damaging wildfires. However, how to place treatment units across a landscape and assess its effectiveness is difficult for landscape-scale fuel management planning. In this study, we used a spatially explicit simulation model (LANDIS) to conduct wildfire risk assessments and optimize the placement of fuel treatments at the landscape scale. We first calculated a baseline burn probability map from empirical data (fuel, topography, weather, and fire ignition and size data) to assess fire risk. We then prioritized landscape-scale fuel treatment based on maps of burn probability and fuel loads (calculated from the interactions among tree composition, stand age, and disturbance history), and compared their effects on reducing fire risk. The burn probability map described the likelihood of burning on a given location; the fuel load map described the probability that a high fuel load will accumulate on a given location. Fuel treatment based on the burn probability map specified that stands with high burn probability be treated first, while fuel treatment based on the fuel load map specified that stands with high fuel loads be treated first. Our results indicated that fuel treatment based on burn probability greatly reduced the burned area and number of fires of different intensities. Fuel treatment based on burn probability also produced more dispersed and smaller high-risk fire patches and therefore can improve efficiency of subsequent fire suppression. The strength of our approach is that more model components (e.g., succession, fuel, and harvest) can be linked into LANDIS to map the spatially explicit wildfire risk and its dynamics to fuel management, vegetation dynamics, and harvesting. Copyright © 2012 Elsevier Ltd. All rights reserved.
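As a rough illustration of the two prioritisation rules compared above (not the LANDIS workflow itself), the sketch below ranks hypothetical stands by burn probability or by fuel load and selects the top fraction for treatment.

```python
# Illustrative sketch: which stands get treated first under each rule.
import numpy as np

rng = np.random.default_rng(2)
n_stands = 1000
burn_prob = rng.beta(2, 20, n_stands)        # hypothetical baseline burn probabilities
fuel_load = rng.gamma(3.0, 2.0, n_stands)    # hypothetical fuel loads

treat_fraction = 0.10                        # treat the top 10% of stands
k = int(treat_fraction * n_stands)

treat_by_prob = np.argsort(burn_prob)[::-1][:k]   # highest burn probability first
treat_by_fuel = np.argsort(fuel_load)[::-1][:k]   # highest fuel load first

overlap = np.intersect1d(treat_by_prob, treat_by_fuel).size
print(f"stands selected by both rules: {overlap}/{k}")
```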
Clerkin, L.; Kirk, D.; Manera, M.; ...
2016-08-30
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
NASA Astrophysics Data System (ADS)
Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.
2017-04-01
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
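The lognormal-plus-shot-noise model for the galaxy counts-in-cells PDF can be written down directly: the observed count in a cell is Poisson with a mean modulated by a lognormal density contrast. The sketch below evaluates that convolution numerically; the mean count per cell and the lognormal width are illustrative assumptions.

```python
# Sketch of a counts-in-cells model: lognormal density contrast + Poisson shot noise.
import numpy as np
from scipy.stats import poisson

def lognormal_poisson_pmf(n_values, nbar, sigma_ln):
    """P(N) = integral over delta of Poisson(N | nbar*(1+delta)) * LN(delta)."""
    # Lognormal density contrast: 1 + delta = exp(g - sigma^2/2), g ~ N(0, sigma^2),
    # which has mean 1 by construction.
    g = np.linspace(-5 * sigma_ln, 5 * sigma_ln, 2001)
    dg = g[1] - g[0]
    one_plus_delta = np.exp(g - 0.5 * sigma_ln**2)
    w = np.exp(-0.5 * (g / sigma_ln) ** 2) / (sigma_ln * np.sqrt(2 * np.pi))
    lam = nbar * one_plus_delta
    pmf = [np.sum(poisson.pmf(n, lam) * w) * dg for n in n_values]
    return np.array(pmf)

n = np.arange(0, 60)
pmf = lognormal_poisson_pmf(n, nbar=15.0, sigma_ln=0.5)   # assumed cell parameters
print("total probability ~", pmf.sum().round(4))
```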
Mapping Topographic Structure in White Matter Pathways with Level Set Trees
Kent, Brian P.; Rinaldo, Alessandro; Yeh, Fang-Cheng; Verstynen, Timothy
2014-01-01
Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees, which provide a concise representation of the hierarchical mode structure of probability density functions, offer a statistically principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output. PMID:24714673
Continued-fraction representation of the Kraus map for non-Markovian reservoir damping
NASA Astrophysics Data System (ADS)
van Wonderen, A. J.; Suttorp, L. G.
2018-04-01
Quantum dissipation is studied for a discrete system that linearly interacts with a reservoir of harmonic oscillators at thermal equilibrium. Initial correlations between system and reservoir are assumed to be absent. The dissipative dynamics as determined by the unitary evolution of system and reservoir is described by a Kraus map consisting of an infinite number of matrices. For all Laplace-transformed Kraus matrices exact solutions are constructed in terms of continued fractions that depend on the pair correlation functions of the reservoir. By performing factorizations in the Kraus map a perturbation theory is set up that conserves in arbitrary perturbative order both positivity and probability of the density matrix. The latter is determined by an integral equation for a bitemporal matrix and a finite hierarchy for Kraus matrices. In the lowest perturbative order this hierarchy reduces to one equation for one Kraus matrix. Its solution is given by a continued fraction of a much simpler structure as compared to the non-perturbative case. In the lowest perturbative order our non-Markovian evolution equations are applied to the damped Jaynes–Cummings model. From the solution for the atomic density matrix it is found that the atom may remain in the state of maximum entropy for a significant time span that depends on the initial energy of the radiation field.
Infrared dust bubble CS51 and its interaction with the surrounding interstellar medium
NASA Astrophysics Data System (ADS)
Das, Swagat R.; Tej, Anandmayee; Vig, Sarita; Liu, Hong-Li; Liu, Tie; Ishwara Chandra, C. H.; Ghosh, Swarna K.
2017-12-01
A multiwavelength investigation of the southern infrared dust bubble CS51 is presented in this paper. We probe the associated ionized, cold dust, molecular and stellar components. Radio continuum emission mapped at 610 and 1300 MHz, using the Giant Metrewave Radio Telescope, India, reveals the presence of three compact emission components (A, B, and C) apart from large-scale diffuse emission within the bubble interior. The radio spectral index map shows the co-existence of thermal and non-thermal emission components. Modified blackbody fits to the thermal dust emission using Herschel Photodetector Array Camera and Spectrometer and Spectral and Photometric Imaging Receiver data are performed to generate dust temperature and column density maps. We identify five dust clumps associated with CS51 with masses and radii in the ranges 810-4600 M⊙ and 1.0-1.9 pc, respectively. We further construct the column density probability distribution functions of the surrounding cold dust, which display the impact of ionization feedback from high-mass stars. The estimated dynamical and fragmentation time-scales indicate the possibility of a collect-and-collapse mechanism at play at the bubble border. Molecular line emission from the Millimeter Astronomy Legacy Team 90 GHz survey is used to understand the nature of two clumps which show signatures of the expansion of CS51.
Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei
2016-01-01
Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851
Stark, J A; Hladky, S B
2000-02-01
Dwell-time histograms are often plotted as part of patch-clamp investigations of ion channel currents. The advantages of plotting these histograms with a logarithmic time axis were demonstrated in earlier work (J. Physiol. (Lond.) 378:141-174; Pflügers Arch. 410:530-553; Biophys. J. 52:1047-1054). Sigworth and Sine argued that the interpretation of such histograms is simplified if the counts are presented in a manner similar to that of a probability density function. However, when ion channel records are recorded as a discrete time series, the dwell times are quantized. As a result, the mapping of dwell times to logarithmically spaced bins is highly irregular; bins may be empty, and significant irregularities may extend beyond the duration of 100 samples. Using simple approximations based on the nature of the binning process and the transformation rules for probability density functions, we develop adjustments for the display of the counts to compensate for this effect. Tests with simulated data suggest that this procedure provides a faithful representation of the data.
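A simple version of the compensation idea can be sketched as follows: counts in each logarithmic bin are divided by the number of quantized dwell-time levels (integer multiples of the sample interval) that fall into that bin. This is an illustrative approximation, not the authors' exact adjustment; the bin spacing and simulated data are assumptions.

```python
# Sketch: Sigworth-Sine style log-time histogram with a simple quantization correction.
import numpy as np

def log_binned_density(dwells, dt, bins_per_decade=10):
    """Return bin centres and compensated counts for quantized dwell times."""
    dwells = np.asarray(dwells)
    edges = 10 ** np.arange(np.log10(dt) - 0.05,
                            np.log10(dwells.max()) + 0.2,
                            1.0 / bins_per_decade)
    counts, _ = np.histogram(dwells, bins=edges)
    # How many quantized levels k*dt fall inside each bin? Empty bins stay empty.
    levels = np.arange(dt, edges[-1] + dt, dt)
    levels_per_bin, _ = np.histogram(levels, bins=edges)
    density = np.where(levels_per_bin > 0, counts / np.maximum(levels_per_bin, 1), 0.0)
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    return centres, density

# Example with exponentially distributed dwell times sampled at dt = 0.1 ms
rng = np.random.default_rng(3)
dwells = np.ceil(rng.exponential(2.0, 5000) / 0.1) * 0.1
centres, dens = log_binned_density(dwells, dt=0.1)
```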
NASA Astrophysics Data System (ADS)
Boncio, P.; Caldarella, M.
2016-12-01
We analyze the zones of coseismic surface faulting along thrust faults, with the aim of defining the most appropriate criteria for zoning the Surface Fault Rupture Hazard (SFRH) along thrust faults. Normal and strike-slip faults have been studied in depth in the past, while thrust faults have not been studied with comparable attention. We analyze the 1999 Chi-Chi, Taiwan (Mw 7.6) and 2008 Wenchuan, China (Mw 7.9) earthquakes. Several different types of coseismic fault scarps characterize the two earthquakes, depending on the topography, fault geometry and near-surface materials. For both earthquakes, we collected from the literature, or measured in GIS-georeferenced published maps, data about the Width of the coseismic Rupture Zone (WRZ). The frequency distribution of WRZ compared to the trace of the main fault shows that the surface ruptures occur mainly on and near the main fault. Ruptures located away from the main fault occur mainly in the hanging wall. Where structural complexities are present (e.g., sharp bends, step-overs), WRZ is wider than for simple fault traces. We also fitted the distribution of the WRZ dataset with probability density functions, in order to define a criterion to remove outliers (e.g., by selecting 90% or 95% probability) and define the zone where the probability of SFRH is the highest. This might help in sizing the zones of SFRH during seismic microzonation (SM) mapping. In order to shape zones of SFRH, a very detailed earthquake geologic study of the fault is necessary. In the absence of such a very detailed study, during basic (First level) SM mapping, a width of 350-400 m seems to be recommended (95% of probability). If the fault is carefully mapped (higher level SM), one must consider that the highest SFRH is concentrated in a narrow, 50-m-wide zone that should be considered as a "fault-avoidance (or setback) zone". These fault zones should be asymmetric. The ratio of footwall to hanging wall (FW:HW) calculated here ranges from 1:5 to 1:3.
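Sizing the zone from a fitted distribution can be sketched in a few lines: fit a candidate probability density function to the WRZ measurements and read off the 90% or 95% quantile. The lognormal choice and the synthetic widths below are assumptions for illustration only.

```python
# Sketch: fit a pdf to rupture-zone widths and take the 90%/95% quantile as the zone size.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(4)
wrz = rng.lognormal(mean=np.log(60.0), sigma=0.9, size=400)   # hypothetical widths (m)

shape, loc, scale = lognorm.fit(wrz, floc=0.0)
for p in (0.90, 0.95):
    print(f"{int(p*100)}% of ruptures fall within "
          f"{lognorm.ppf(p, shape, loc=loc, scale=scale):.0f} m of the main fault trace")
```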
NASA Astrophysics Data System (ADS)
Gulyaeva, Tamara; Stanislawska, Iwona; Arikan, Feza; Arikan, Orhan
The probability of occurrence of positive and negative planetary ionosphere storms is evaluated using the W index maps produced from Global Ionospheric Maps of Total Electron Content, GIM-TEC, provided by the Jet Propulsion Laboratory, and transformed from geographic coordinates to the magnetic coordinate frame. The auroral electrojet AE index and the equatorial disturbance storm time Dst index are investigated as precursors of the global ionosphere storm. A superposed epoch analysis is performed for 77 intense storms (Dst≤-100 nT) and 227 moderate storms (-100
Reply to "Comment on 'Fractional quantum mechanics' and 'Fractional Schrödinger equation' ".
Laskin, Nick
2016-06-01
The fractional uncertainty relation is a mathematical formulation of Heisenberg's uncertainty principle in the framework of fractional quantum mechanics. Two mistaken statements presented in the Comment have been revealed. The origin of each mistaken statement has been clarified and corrected statements have been made. A map between standard quantum mechanics and fractional quantum mechanics has been presented to emphasize the features of fractional quantum mechanics and to avoid misinterpretations of the fractional uncertainty relation. It has been shown that the fractional probability current equation is correct in the area of its applicability. Further studies have to be done to find meaningful quantum physics problems with involvement of the fractional probability current density vector and the extra term emerging in the framework of fractional quantum mechanics.
Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie
2015-01-01
We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is fitted with a generalized Gaussian density (GGD) and the similarity of two subbands is computed as the Jensen-Shannon divergence of the two fitted GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands of different frequencies. That is, the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
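A hedged sketch of the two statistical ingredients named above: fitting a GGD to subband coefficients by moment matching, and computing the Jensen-Shannon divergence of two fitted GGDs on a numerical grid. The moment-ratio estimator and the grid range are illustrative choices, not necessarily those used in the paper.

```python
# Sketch: GGD fit by moment matching and Jensen-Shannon divergence of two GGDs.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(x):
    """Return (alpha, beta) of a zero-mean GGD via the moment-ratio method."""
    x = np.asarray(x, dtype=float)
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)            # target ratio
    f = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b)) - r
    beta = brentq(f, 0.1, 10.0)
    alpha = np.sqrt(np.mean(x ** 2) * gamma(1.0 / beta) / gamma(3.0 / beta))
    return alpha, beta

def ggd_pdf(x, alpha, beta):
    return beta / (2 * alpha * gamma(1.0 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def js_divergence(p, q, dx):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(np.where(a > 0, a * np.log(a / b), 0.0)) * dx
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(5)
sub_a, sub_b = rng.laplace(0, 1.0, 5000), rng.normal(0, 1.0, 5000)   # stand-in subbands
xa, xb = fit_ggd(sub_a), fit_ggd(sub_b)
grid = np.linspace(-10, 10, 4001)
dx = grid[1] - grid[0]
print("JSD between subbands:", js_divergence(ggd_pdf(grid, *xa), ggd_pdf(grid, *xb), dx))
```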
NASA Astrophysics Data System (ADS)
Demir, Gökhan; aytekin, mustafa; banu ikizler, sabriye; angın, zekai
2013-04-01
The North Anatolian Fault is known as one of the most active and destructive fault zones, having produced many earthquakes of high magnitude. Along this fault zone, the morphology and the lithological features are prone to landsliding. Many earthquake-induced landslides along this fault zone have been recorded by several studies, and these landslides caused both injuries and loss of life. A detailed landslide susceptibility assessment for this area is therefore indispensable. In this context, a landslide susceptibility assessment for a 1445 km2 area in the Kelkit River valley, part of the North Anatolian Fault zone (Eastern Black Sea region of Turkey), was undertaken, and the results of this study are summarized here. For this purpose, a geographical information system (GIS) and a bivariate statistical model were used. Initially, landslide inventory maps were prepared using landslide data determined by field surveys and landslide data taken from the General Directorate of Mineral Research and Exploration. The landslide conditioning factors considered are lithology, slope gradient, slope aspect, topographical elevation, distance to streams, distance to roads, distance to faults, drainage density and fault density. The ArcGIS package was used to manipulate and analyze all the collected data. The logistic regression method was applied to create a landslide susceptibility map. The landslide susceptibility map was divided into five susceptibility classes: very low, low, moderate, high and very high. The result of the analysis was verified using the inventoried landslide locations and compared with the produced probability model. For this purpose, the Area Under the Curve (AUC) approach was applied and an AUC value was obtained. Based on this AUC value, the obtained landslide susceptibility map was judged satisfactory. Keywords: North Anatolian Fault Zone, Landslide susceptibility map, Geographical Information Systems, Logistic Regression Analysis.
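The mapping workflow can be illustrated with a short sketch: fit a logistic regression on conditioning factors, predict a susceptibility probability per mapping unit, and validate with the AUC. The predictor names, coefficients and data below are synthetic stand-ins, not the study's dataset.

```python
# Sketch: logistic-regression landslide susceptibility with AUC validation (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([
    rng.uniform(0, 45, n),      # slope gradient (deg)
    rng.uniform(0, 5000, n),    # distance to faults (m)
    rng.uniform(0, 4, n),       # drainage density (km/km^2)
])
# Synthetic ground truth: steeper, fault-proximal, better-drained cells slide more often
logit = 0.08 * X[:, 0] - 0.0008 * X[:, 1] + 0.4 * X[:, 2] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]       # one probability per mapping unit
print("AUC:", round(roc_auc_score(y, susceptibility), 3))
# The susceptibility values can then be binned into very low ... very high classes.
```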
Ligand Electron Density Shape Recognition Using 3D Zernike Descriptors
NASA Astrophysics Data System (ADS)
Gunasekaran, Prasad; Grandison, Scott; Cowtan, Kevin; Mak, Lora; Lawson, David M.; Morris, Richard J.
We present a novel approach to crystallographic ligand density interpretation based on Zernike shape descriptors. Electron density for a bound ligand is expanded in an orthogonal polynomial series (3D Zernike polynomials) and the coefficients from this expansion are employed to construct rotation-invariant descriptors. These descriptors can be compared highly efficiently against large databases of descriptors computed from other molecules. In this manuscript we describe this process and show initial results from an electron density interpretation study on a dataset containing over a hundred OMIT maps. We could identify the correct ligand as the first hit in about 30 % of the cases, within the top five in a further 30 % of the cases, and giving rise to an 80 % probability of getting the correct ligand within the top ten matches. In all but a few examples, the top hit was highly similar to the correct ligand in both shape and chemistry. Further extensions and intrinsic limitations of the method are discussed.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
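An illustrative sketch of such a time-dependent conditional probability (with entirely hypothetical coefficients, not the authors' calibrated model): a logistic function of slope and rainfall evaluated hour by hour, so that a probability value can be mapped for every cell at each hour of a storm.

```python
# Sketch: hourly conditional soil-slip probability from slope and rainfall (hypothetical).
import numpy as np

def soil_slip_probability(slope_deg, cum_rain_mm, rain_rate_mm_hr,
                          b0=-8.0, b1=0.12, b2=0.03, b3=0.15):
    z = b0 + b1 * slope_deg + b2 * cum_rain_mm + b3 * rain_rate_mm_hr
    return 1.0 / (1.0 + np.exp(-z))

slope = np.array([15.0, 30.0, 38.0])          # three example map cells
rain = np.cumsum(np.full(12, 8.0))            # 12 hours of rain at 8 mm/hr
for hour, cum in enumerate(rain, start=1):
    p = soil_slip_probability(slope, cum, 8.0)
    print(f"hour {hour:2d}: P(failure) = {np.round(p, 3)}")
```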
Simulated cosmic microwave background maps at 0.5 deg resolution: Unresolved features
NASA Technical Reports Server (NTRS)
Kogut, A.; Hinshaw, G.; Bennett, C. L.
1995-01-01
High-contrast peaks in the cosmic microwave background (CMB) anisotropy can appear as unresolved sources to observers. We fit simluated CMB maps generated with a cold dark matter model to a set of unresolved features at instrumental resolution 0.5 deg-1.5 deg to derive the integral number density per steradian n (greater than absolute value of T) of features brighter than threshold temperature absolute value of T and compare the results to recent experiments. A typical medium-scale experiment observing 0.001 sr at 0.5 deg resolution would expect to observe one feature brighter than 85 micro-K after convolution with the beam profile, with less than 5% probability to observe a source brighter than 150 micro-K. Increasing the power-law index of primordial density perturbations n from 1 to 1.5 raises these temperature limits absolute value of T by a factor of 2. The MSAM features are in agreement with standard cold dark matter models and are not necessarily evidence for processes beyond the standard model.
NASA Astrophysics Data System (ADS)
Benda, L. E.
2009-12-01
Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks where erosion, sediment flux and sediment storage are described by their probability densities. There are a number of general principles (hypotheses) that stem from this conceptual and numerical framework that may inform the science of erosion and sedimentation in river basins. Rainstorms and other perturbations, characterized by probability distributions of event frequency and magnitude, stochastically drive sediment influx to channel networks. The frequency-magnitude distribution of sediment supply, which is typically skewed, reflects strong interactions among climate, topography, vegetation, and geotechnical controls that vary between regions; the distribution varies systematically with basin area and the spatial pattern of erosion sources. Probability densities of sediment flux and storage evolve from more to less skewed forms downstream in river networks due to the convolution of the population of sediment sources in a watershed that should vary with climate, network patterns, topography, spatial scale, and degree of erosion asynchrony. The sediment flux and storage distributions are also transformed downstream due to diffusion, storage, interference, and attrition. In stochastic systems, the characteristically pulsed sediment supply and transport can create translational or stationary-diffusive valley and channel depositional landforms, the geometries of which are governed by sediment flux-network interactions. Episodic releases of sediment to the network can also drive a system memory reflected in a Hurst Effect in sediment yields and thus in sedimentological records. Similarly, discrete events of punctuated erosion on hillslopes can lead to altered surface and subsurface properties of a population of erosion source areas that can echo through time and affect subsequent erosion and sediment flux rates. Spatial patterns of probability densities have implications for the frequency and magnitude of sediment transport and storage and thus for the formation of alluvial and colluvial landforms throughout watersheds. For instance, the combination and interference of probability densities of sediment flux at confluences create patterns of riverine heterogeneity, including standing waves of sediment with associated age distributions of deposits that can vary from younger to older depending on network geometry and position. Although the watershed world of probability densities is rarefied and typically confined to research endeavors, it has real-world implications for the day-to-day work on hillslopes and in fluvial systems, including measuring erosion and sediment transport, mapping channel morphology and aquatic habitats, interpreting deposit stratigraphy, conducting channel restoration, and applying environmental regulations. A question for the geomorphology community is whether the stochastic framework is useful for advancing our understanding of erosion and sedimentation and whether it should stimulate research to further develop, refine and test these and other principles. For example, a changing climate should lead to shifts in probability densities of erosion, sediment flux, storage, and associated habitats and thus provide a useful index of climate change in earth science forecast models.
NASA Astrophysics Data System (ADS)
Ciepłuch, C.; Mooney, P.; Jacob, R.; Zheng, J.; Winstanely, A. C.
2011-12-01
New trends in GIS, such as Volunteered Geographical Information (VGI), Citizen Science, and Urban Sensing, have changed the shape of the geoinformatics landscape. The OpenStreetMap (OSM) project provided us with an exciting, evolving, free and open solution as a base dataset for our geoserver and as a spatial data provider for our research. OSM is probably the best known and best supported example of VGI and user-generated spatial content on the Internet. In this paper we describe current results from the development of quality indicators and measures for OSM data. Initially we have analysed the Ireland OSM data in grid cells (5 km) to gather statistical data about the completeness, accuracy, and fitness for purpose of the underlying spatial data. This analysis included: density of user contributions, spatial density of points and polygons, types of tags and metadata used, dominant contributors in a particular area or for a particular geographic feature type, etc. The greatest OSM activity and spatial data density are highly correlated with centres of large population. The ability to quantify and assess whether VGI, such as OSM, is of sufficient quality for mobile mapping applications and Location-based services is critical to the future success of VGI as a spatial data source for these technologies.
MacRoy-Higgins, Michelle; Dalton, Kevin Patrick
2015-12-01
The purpose of this study was to examine the influence of phonotactic probability on sublexical (phonological) and lexical representations in 3-year-olds who had a history of being late talkers in comparison with their peers with typical language development. Ten 3-year-olds who were late talkers and 10 age-matched typically developing controls completed nonword repetition and fast mapping tasks; stimuli for both experimental procedures differed in phonotactic probability. Both participant groups repeated nonwords containing high phonotactic probability sequences more accurately than nonwords containing low phonotactic probability sequences. Participants with typical language showed an early advantage for fast mapping high phonotactic probability words; children who were late talkers required more exposures to the novel words to show the same advantage for fast mapping high phonotactic probability words. Children who were late talkers showed similar sensitivities to phonotactic probability in nonword repetition and word learning when compared with their peers with no history of language delay. However, word learning in children who were late talkers appeared to be slower when compared with their peers.
Quantum States and Generalized Observables: A Simple Proof of Gleason's Theorem
NASA Astrophysics Data System (ADS)
Busch, P.
2003-09-01
A quantum state can be understood in a loose sense as a map that assigns a value to every observable. Formalizing this characterization of states in terms of generalized probability distributions on the set of effects, we obtain a simple proof of the result, analogous to Gleason's theorem, that any quantum state is given by a density operator. As a corollary we obtain a von Neumann-type argument against noncontextual hidden variables. It follows that on an individual interpretation of quantum mechanics the values of effects are appropriately understood as propensities.
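For reference, the result can be stated compactly (a paraphrase under the usual definition of effects, not a quotation from the paper):

```latex
% Sketch of the statement: generalized probability measures on effects are of Born form.
\begin{theorem}
Let $\mathcal{E}(\mathcal{H})$ be the set of effects on a Hilbert space $\mathcal{H}$,
i.e.\ operators $E$ with $0 \le E \le I$. Suppose
$v:\mathcal{E}(\mathcal{H})\to[0,1]$ satisfies $v(I)=1$ and
$v\!\left(\textstyle\sum_i E_i\right)=\sum_i v(E_i)$ whenever
$\sum_i E_i \in \mathcal{E}(\mathcal{H})$. Then there is a density operator $\rho$
such that $v(E)=\mathrm{tr}(\rho E)$ for all $E\in\mathcal{E}(\mathcal{H})$.
\end{theorem}
```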
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
Palmer, Nicholette D.; Divers, Jasmin; Lu, Lingyi; Register, Thomas C.; Carr, J. Jeffrey; Hicks, Pamela J.; Smith, S. Carrie; Xu, Jianzhao; Judd, Suzanne E.; Irvin, Marguerite R.; Gutierrez, Orlando M.; Bowden, Donald W.; Wagenknecht, Lynne E.; Langefeld, Carl D.; Freedman, Barry I.
2016-01-01
Vitamin D and intact parathyroid hormone (iPTH) concentrations differ between individuals of African and European descent and may play a role in observed racial differences in bone mineral density (BMD). These findings suggest that mapping by admixture linkage disequilibrium (MALD) may be informative for identifying genetic variants contributing to these ethnic disparities. Admixture mapping was performed for serum 25-hydroxyvitamin D, 1,25-dihydroxyvitamin D, vitamin D-binding protein (VDBP), bioavailable vitamin D, and iPTH concentrations and computed tomography measured thoracic and lumbar vertebral volumetric BMD in 552 unrelated African Americans with type 2 diabetes from the African American-Diabetes Heart Study. Genotyping was performed using a custom Illumina ancestry informative marker (AIM) panel. For each AIM, the probability of inheriting 0, 1, or 2 copies of a European-derived allele was determined. Non-parametric linkage analysis was performed by testing for association between each AIM using these probabilities among phenotypes, accounting for global ancestry, age, and gender. Fine-mapping of MALD peaks was facilitated by genome-wide association study (GWAS) data. VDBP levels were significantly linked in proximity to the protein coding locus (rs7689609, LOD=11.05). Two loci exhibited significant linkage signals for 1,25-dihydroxyvitamin D on 13q21.2 (rs1622710, LOD=3.20) and 12q13.2 (rs11171526, LOD=3.10). iPTH was significantly linked on 9q31.3 (rs7854368, LOD=3.14). Fine-mapping with GWAS data revealed significant known (rs7041 with VDBP, P=1.38×10−82) and novel (rs12741813 and rs10863774 with VDBP, P<6.43×10−5) loci with plausible biological roles. Admixture mapping in combination with fine-mapping has focused efforts to identify loci contributing to ethnic differences in vitamin D-related traits. PMID:27032714
ERIC Educational Resources Information Center
Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.
2010-01-01
Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…
Revision of Time-Independent Probabilistic Seismic Hazard Maps for Alaska
Wesson, Robert L.; Boyd, Oliver S.; Mueller, Charles S.; Bufe, Charles G.; Frankel, Arthur D.; Petersen, Mark D.
2007-01-01
We present here time-independent probabilistic seismic hazard maps of Alaska and the Aleutians for peak ground acceleration (PGA) and 0.1, 0.2, 0.3, 0.5, 1.0 and 2.0 second spectral acceleration at probability levels of 2 percent in 50 years (annual probability of 0.000404), 5 percent in 50 years (annual probability of 0.001026) and 10 percent in 50 years (annual probability of 0.0021). These maps represent a revision of existing maps based on newly obtained data and assumptions reflecting best current judgments about methodology and approach. These maps have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States. A significant improvement relative to the 2002 methodology is the ability to include variable slip rate along a fault where appropriate. These maps incorporate new data, the responses to comments received at workshops held in Fairbanks and Anchorage, Alaska, in May, 2005, and comments received after draft maps were posted on the National Seismic Hazard Mapping Web Site. These maps will be proposed for adoption in future revisions to the International Building Code. In this documentation we describe the maps and in particular explain and justify changes that have been made relative to the 1999 maps. We are also preparing a series of experimental maps of time-dependent hazard that will be described in future documents.
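As a worked check of the probability levels quoted above (assuming the standard conversion between an exceedance probability over an exposure time and an equivalent annual probability):

```python
# Annual probability equivalent to "P% chance of exceedance in T years":
# annual = 1 - (1 - P)**(1/T)
for P in (0.02, 0.05, 0.10):
    annual = 1.0 - (1.0 - P) ** (1.0 / 50.0)
    print(f"{P:.0%} in 50 years -> annual probability {annual:.6f}")
# 2% in 50 yr -> 0.000404, 5% in 50 yr -> 0.001026, 10% in 50 yr -> 0.002105
```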
Can we detect Galactic spiral arms? 3D dust distribution in the Milky Way
NASA Astrophysics Data System (ADS)
Rezaei Kh., Sara; Bailer-Jones, Coryn A. L.; Fouesneau, Morgan; Hanson, Richard
2018-04-01
We present a model to map the 3D distribution of dust in the Milky Way. Although dust is just a tiny fraction of what comprises the Galaxy, it plays an important role in various processes. In recent years, various maps of dust extinction have been produced, but we still lack a good knowledge of the dust distribution. Our approach leverages line-of-sight extinctions towards stars in the Galaxy at measured distances. Since extinction is proportional to the integral of the dust density towards a given star, it is possible to reconstruct the 3D distribution of dust by combining many lines of sight in a model accounting for the spatial correlation of the dust. Such a technique can be used to infer the most probable 3D distribution of dust in the Galaxy even in regions which have not been observed. This contribution provides one of the first maps which does not show the "fingers of God" effect. Furthermore, we show that expected high-precision measurements of distances and extinctions offer the possibility of mapping the spiral arms in the Galaxy.
Geological Mapping of Pluto and Charon Using New Horizons Data
NASA Astrophysics Data System (ADS)
Moore, J. M.; Spencer, J. R.; McKinnon, W. B.; Howard, A. D.; White, O. M.; Umurhan, O. M.; Schenk, P. M.; Beyer, R. A.; Singer, K.; Stern, S. A.; Weaver, H. A.; Young, L. A.; Ennico Smith, K.; Olkin, C.; Horizons Geology, New; Geophysics Imaging Team
2016-06-01
Pluto and Charon exhibit strikingly different surface appearances, despite their similar densities and presumed bulk compositions. Systematic mapping has revealed that much of Pluto's surface can be attributed to surface-atmosphere interactions and the mobilization of volatile ices by insolation. Many mapped valley systems appear to be the consequence of glaciation involving nitrogen ice. Other geological activity requires or required internal heating. The convection and advection of volatile ices in Sputnik Planum can be powered by present-day radiogenic heat loss. On the other hand, the prominent mountains at the western margin of Sputnik Planum, and the strange, multi-km-high mound features to the south, probably composed of H2O, are young geologically as inferred by light cratering and superposition relationships. Their origin, and what drove their formation so late in Solar System history, is under investigation. The dynamic remolding of landscapes by volatile transport seen on Pluto is not unambiguously evident in the mapping of Charon. Charon does, however, display a large resurfaced plain and globally engirdling extensional tectonic network attesting to its early endogenic vigor.
HP2 survey. III. The California Molecular Cloud: A sleeping giant revisited
NASA Astrophysics Data System (ADS)
Lada, Charles J.; Lewis, John A.; Lombardi, Marco; Alves, João
2017-10-01
We present new high-resolution and high-dynamic-range dust column density and temperature maps of the California Molecular Cloud derived from a combination of Planck and Herschel dust-emission maps, and 2MASS NIR dust-extinction maps. We used these data to determine the ratio of the 2.2 μm extinction coefficient to the 850 μm opacity and found the value to be close to that found in similar studies of the Orion B and Perseus clouds but higher than that characterizing the Orion A cloud, indicating that variations in the fundamental optical properties of dust may exist between local clouds. We show that over a wide range of extinction, the column density probability distribution function (pdf) of the cloud can be well described by a simple power law (i.e., PDF_N ∝ A_K^-n) with an index (n = 4.0 ± 0.1) that represents a steeper decline with A_K than found (n ≈ 3) in similar studies of the Orion and Perseus clouds. Using only the protostellar population of the cloud and our extinction maps we investigate the Schmidt relation, that is, the relation between the protostellar surface density, Σ*, and extinction, A_K, within the cloud. We show that Σ* is directly proportional to the ratio of the protostellar and cloud pdfs, i.e., PDF*(A_K)/PDF_N(A_K). We use the cumulative distribution of protostars to infer the functional forms for both Σ* and PDF*. We find that Σ* is best described by two power-law functions. At extinctions A_K ≲ 2.5 mag, Σ* ∝ A_K^β with β = 3.3, while at higher extinctions β = 2.5, both values steeper than those (≈2) found in other local giant molecular clouds (GMCs). We find that PDF* is a declining function of extinction, also best described by two power laws whose behavior mirrors that of Σ*. Our observations suggest that variations both in the slope of the Schmidt relation and in the sizes of the protostellar populations between GMCs are largely driven by variations in the slope, n, of PDF_N(A_K). This confirms earlier studies suggesting that cloud structure plays a major role in setting the global star formation rates in GMCs. The HP2 (Herschel-Planck-2MASS) survey is a continuation of the series originally entitled "Herschel-Planck dust opacity and column density maps" (Lombardi et al. 2014; Zari et al. 2016). The reduced Herschel and Planck maps and the column density and temperature maps are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A100
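The quoted relation between the Schmidt law and the column-density PDF can be sketched numerically: with PDF_N ∝ A_K^-n and a two-slope Σ* ∝ A_K^β, the protostellar PDF follows as Σ* × PDF_N up to normalisation. All normalisations below are arbitrary; only the slopes reported above are used.

```python
# Sketch: Sigma* ∝ PDF*/PDF_N rewritten as PDF* ∝ Sigma* * PDF_N, using the quoted slopes.
import numpy as np

a_k = np.linspace(0.3, 5.0, 200)             # extinction A_K in magnitudes
n = 4.0                                      # cloud PDF slope reported for the California cloud
pdf_cloud = a_k ** (-n)                      # PDF_N ∝ A_K^-n (arbitrary normalisation)

# Two-slope Schmidt relation from the text, matched at the A_K = 2.5 mag break
sigma_star = np.where(a_k < 2.5, a_k ** 3.3, 2.5 ** (3.3 - 2.5) * a_k ** 2.5)

pdf_star = sigma_star * pdf_cloud            # protostellar PDF (arbitrary normalisation)
print("PDF* power-law slope below the break:", 3.3 - n)   # -0.7
print("PDF* power-law slope above the break:", 2.5 - n)   # -1.5
```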
NASA Astrophysics Data System (ADS)
Lautze, N. C.; Ito, G.; Thomas, D. M.; Hinz, N.; Frazer, L. N.; Waller, D.
2015-12-01
Hawaii offers the opportunity to gain knowledge and develop geothermal energy on the only oceanic hotspot in the U.S. As a remote island state, Hawaii is more dependent on imported fossil fuel than any other state in the U.S., and energy prices are 3 to 4 times higher than the national average. The only proven resource, located on Hawaii Island's active Kilauea volcano, is a region of high geologic risk; other regions of probable resource exist but lack adequate assessment. The last comprehensive statewide geothermal assessment occurred in 1983 and found a potential resource on all islands (Hawaii Institute of Geophysics, 1983). Phase 1 of a Department of Energy funded project to assess the probability of geothermal resource potential statewide in Hawaii was recently completed. The execution of this project was divided into three main tasks: (1) compile all historical and current data for Hawaii that is relevant to geothermal resources into a single Geographic Information System (GIS) project; (2) analyze and rank these datasets in terms of their relevance to the three primary properties of a viable geothermal resource: heat (H), fluid (F), and permeability (P); and (3) develop and apply a Bayesian statistical method to incorporate the ranks and produce probability models that map out Hawaii's geothermal resource potential. Here, we summarize the project methodology and present maps that highlight both high prospect areas as well as areas that lack enough data to make an adequate assessment. We suggest a path for future exploration activities in Hawaii, and discuss how this method of analysis can be adapted to other regions and other types of resources. The figure below shows multiple layers of GIS data for Hawaii Island. Color shades indicate crustal density anomalies produced from inversions of gravity (Flinders et al. 2013). Superimposed on this are mapped calderas, rift zones, volcanic cones, and faults (following Sherrod et al., 2007). These features were used to identify probable locations of intrusive rock (heat) and permeability.
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador
Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew
2017-01-01
The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km2 within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.
NASA Astrophysics Data System (ADS)
Agliozzo, C.; Nikutta, R.; Pignata, G.; Phillips, N. M.; Ingallinera, A.; Buemi, C.; Umana, G.; Leto, P.; Trigilio, C.; Noriega-Crespo, A.; Paladini, R.; Bufano, F.; Cavallaro, F.
2017-04-01
We present new observations of the nebula around the Magellanic candidate Luminous Blue Variable S61. These comprise high-resolution data acquired with the Australia Telescope Compact Array (ATCA), the Atacama Large Millimetre/Submillimetre Array (ALMA), and the VLT Imager and Spectrometer for mid Infrared (VISIR) at the Very Large Telescope. The nebula was detected only in the radio, up to 17 GHz. The 17 GHz ATCA map, with 0.8 arcsec resolution, allowed a morphological comparison with the Hα Hubble Space Telescope image. The radio nebula resembles a spherical shell, as in the optical. The spectral index map indicates that the radio emission is due to free-free transitions in the ionized, optically thin gas, but there are hints of inhomogeneities. We present our new public code RHOCUBE to model 3D density distributions and determine via Bayesian inference the nebula's geometric parameters. We applied the code to model the electron density distribution in the S61 nebula. We found that different distributions fit the data, but all of them converge to the same ionized mass, ˜ 0.1 M⊙, which is an order of magnitude smaller than previous estimates. We show how the nebula models can be used to derive the mass-loss history with high-temporal resolution. The nebula was probably formed through stellar winds, rather than eruptions. From the ALMA and VISIR non-detections, plus the derived extinction map, we deduce that the infrared emission observed by space telescopes must arise from extended, diffuse dust within the ionized region.
Deep convolutional neural network for mammographic density segmentation
NASA Astrophysics Data System (ADS)
Wei, Jun; Li, Songfeng; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir; Samala, Ravi K.
2018-02-01
Breast density is one of the most significant factors for cancer risk. In this study, we proposed a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammography (DM). The deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD). PD was calculated as the ratio of the dense area to the breast area based on the probability of each pixel belonging to dense region or fatty region at a decision threshold of 0.5. The DCNN estimate was compared to a feature-based statistical learning approach, in which gray level, texture and morphological features were extracted from each ROI and the least absolute shrinkage and selection operator (LASSO) was used to select and combine the useful features to generate the PMD. The reference PD of each image was provided by two experienced MQSA radiologists. With IRB approval, we retrospectively collected 347 DMs from patient files at our institution. The 10-fold cross-validation results showed a strong correlation r=0.96 between the DCNN estimation and interactive segmentation by radiologists while that of the feature-based statistical learning approach vs radiologists' segmentation had a correlation r=0.78. The difference between the segmentation by DCNN and by radiologists was significantly smaller than that between the feature-based learning approach and radiologists (p < 0.0001) by two-tailed paired t-test. This study demonstrated that the DCNN approach has the potential to replace radiologists' interactive thresholding in PD estimation on DMs.
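The percent-density calculation described above reduces to thresholding the probability map within the breast mask; the sketch below uses synthetic stand-ins for the DCNN output and the mask.

```python
# Sketch: PD from a pixel-wise probability map of dense tissue (PMD) at threshold 0.5.
import numpy as np

rng = np.random.default_rng(7)
pmd = rng.uniform(0.0, 1.0, size=(256, 256))        # probability of "dense" per pixel (stand-in)
breast_mask = np.zeros((256, 256), dtype=bool)
breast_mask[32:224, 48:208] = True                  # pixels belonging to the breast (stand-in)

dense = (pmd > 0.5) & breast_mask                   # dense region at decision threshold 0.5
percent_density = 100.0 * dense.sum() / breast_mask.sum()
print(f"PD = {percent_density:.1f}%")
```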
Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain is outlined. The current density maps obtained utilising the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where the adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.
NASA Astrophysics Data System (ADS)
Marsh, K. A.; Whitworth, A. P.; Lomax, O.
2015-12-01
We present point process mapping (
Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes.
Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro
2016-09-30
In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a Change Detection problem. For this purpose, two sets of SPOT5-PAN images have been used, which are in turn used for the calculation of Change Detection Indices (CDIs). For minimizing radiometric differences, a methodology based on zonal "invariant features" is suggested. The choice of one CDI over another for a change detection process is subjective, as each CDI is more or less sensitive to certain types of changes. This idea can be exploited to create and improve a "change map" by means of the CDIs' informational content. For this purpose, information metrics such as the Shannon Entropy and "Specific Information" have been used to weight the change and no-change categories contained in a given CDI before they are introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdf's) that best fit the involved categories have also been estimated. Conversely, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work confirmed the capabilities of the probabilistic information fusion procedure under these circumstances.
A statistical study of gyro-averaging effects in a reduced model of drift-wave transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.
2016-08-25
Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K_0, becomes K_0 J_0($\hat{p}$), where J_0 is the zeroth-order Bessel function and $\hat{p}$ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for $\hat{p}$, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K_0 J_0($\hat{p}$). Using these results, we compute the probability of loss of confinement (i.e., global chaos), P_c, and the probability of trapping, P_t. We show that P_c provides an upper bound for the escape rate, and that P_t provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte Carlo simulations of particle transport.
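A hedged numerical sketch of the statistics described above (ours, not the authors' code): it estimates by Monte Carlo the distribution of the effective, gyro-averaged amplitude K_0 J_0(p̂). The thermal spread, sample size, and the use of a Rayleigh distribution (the speed pdf of a 2-D Maxwellian) as a stand-in for the paper's Maxwellian pdf of p̂ are all assumptions.

```python
import numpy as np
from scipy.special import j0

K0 = 1.5          # bare standard-map perturbation amplitude (assumed value)
sigma = 1.0       # thermal spread of the Larmor radius (assumed value)
n = 1_000_000

rng = np.random.default_rng(1)
# Larmor radii drawn from a Rayleigh distribution, i.e. the speed pdf of a
# 2-D Maxwellian (our stand-in for the paper's Maxwellian pdf of p-hat)
rho = rng.rayleigh(scale=sigma, size=n)

K_eff = K0 * j0(rho)          # effective, gyro-averaged amplitude K_0 J_0(p-hat)

# Empirical pdf and cdf of the effective amplitude
pdf, edges = np.histogram(K_eff, bins=200, density=True)
cdf = np.cumsum(pdf) * np.diff(edges)

# Fraction of particles whose effective amplitude exceeds the standard-map
# global-chaos threshold K_c ~ 0.9716, a crude proxy for loss of confinement
K_c = 0.9716
print("P(K_eff > K_c) =", np.mean(K_eff > K_c))
print("median K_eff   =", edges[np.searchsorted(cdf, 0.5)])
```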
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the one-dimensional, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Monte Carlo Markov Chain (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike MCMC-based Bayesian approaches, the marginal pdfs, means, variances and covariances are obtained independently of one another. Third, the probability density and cumulative distribution functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
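A minimal sketch of the single-truncation MAP step mentioned above, using a non-negative least-squares solver on a toy linear slip problem; the Green's-function matrix, data, prior, and noise levels are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Toy linear forward problem d = G m + noise, with m >= 0 (slip positivity)
n_data, n_slip = 40, 10
G = rng.normal(size=(n_data, n_slip))
m_true = np.clip(rng.normal(1.0, 0.5, size=n_slip), 0.0, None)
d = G @ m_true + 0.05 * rng.normal(size=n_data)

# Gaussian prior N(m0, sigma_m^2 I); with positivity this becomes a truncated
# Gaussian, and the MAP solves an augmented non-negative least-squares system
sigma_d, sigma_m = 0.05, 1.0
m0 = np.zeros(n_slip)
A = np.vstack([G / sigma_d, np.eye(n_slip) / sigma_m])
b = np.concatenate([d / sigma_d, m0 / sigma_m])

m_map, _ = nnls(A, b)
print("true :", np.round(m_true, 2))
print("MAP  :", np.round(m_map, 2))
```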
Long-term volcanic hazard assessment on El Hierro (Canary Islands)
NASA Astrophysics Data System (ADS)
Becerril, L.; Bartolini, S.; Sobradelo, R.; Martí, J.; Morales, J. M.; Galindo, I.
2014-07-01
Long-term hazard assessment, one of the bastions of risk-mitigation programs, is required for land-use planning and for developing emergency plans. To ensure quality and representative results, long-term volcanic hazard assessment requires several sequential steps to be completed, which include the compilation of geological and volcanological information, the characterisation of past eruptions, spatial and temporal probabilistic studies, and the simulation of different eruptive scenarios. Although the Canary Islands are a densely populated, active volcanic region that receives millions of visitors per year, no systematic hazard assessment has ever been conducted there. In this paper we focus our attention on El Hierro, the youngest of the Canary Islands and the most recently affected by an eruption. We analyse the past eruptive activity to determine the spatial and temporal probability, and likely style of a future eruption on the island, i.e. the where, when and how. By studying the past eruptive behaviour of the island and assuming that future eruptive patterns will be similar, we aim to identify the most likely volcanic scenarios and corresponding hazards, which include lava flows, pyroclastic fallout and pyroclastic density currents (PDCs). Finally, we estimate their probability of occurrence. The end result, through the combination of the most probable scenarios (lava flows, pyroclastic density currents and ashfall), is the first qualitative integrated volcanic hazard map of the island.
The shapes of column density PDFs. The importance of the last closed contour
NASA Astrophysics Data System (ADS)
Alves, João; Lombardi, Marco; Lada, Charles J.
2017-10-01
The probability distribution function of column density (PDF) has become the tool of choice for cloud structure analysis and star formation studies. Its simplicity is attractive, and the PDF could offer access to cloud physical parameters otherwise difficult to measure, but there has been some confusion in the literature on the definition of its completeness limit and shape at the low column density end. In this letter we use the natural definition of the completeness limit of a column density PDF, the last closed column density contour inside a surveyed region, and apply it to a set of large-scale maps of nearby molecular clouds. We conclude that there is no observational evidence for log-normal PDFs in these objects. We find that all studied molecular clouds have PDFs well described by power laws, including the diffuse cloud Polaris. Our results call for a new physical interpretation of the shape of the column density PDFs. We find that the slope of a cloud PDF is invariant to distance but not to the spatial arrangement of cloud material, and as such it is still a useful tool for investigating cloud structure.
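To make the completeness argument concrete, here is a small sketch (ours, not the authors' pipeline) that builds a column-density PDF only above the last closed contour of a synthetic extinction map and fits a power-law slope to the tail. The synthetic map, and the shortcut of taking the completeness limit as the maximum value on the map boundary (levels above it are guaranteed to close inside the region), are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic column density map: a centrally concentrated "cloud" on top of
# a diffuse, noisy background (arbitrary A_V-like units)
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
r = np.hypot(x - nx / 2, y - ny / 2) + 1.0
A_V = 50.0 / r + rng.lognormal(mean=0.0, sigma=0.3, size=(ny, nx))

# Completeness limit ~ last closed contour: any level above the maximum
# value found on the map boundary closes inside the surveyed region
boundary = np.concatenate([A_V[0, :], A_V[-1, :], A_V[:, 0], A_V[:, -1]])
A_complete = boundary.max()

# PDF of column density above the completeness limit (log-spaced bins)
vals = A_V[A_V > A_complete]
bins = np.logspace(np.log10(A_complete), np.log10(vals.max()), 30)
pdf, edges = np.histogram(vals, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])

# Power-law slope of the PDF from a log-log least-squares fit
good = pdf > 0
slope, intercept = np.polyfit(np.log10(centers[good]), np.log10(pdf[good]), 1)
print(f"completeness limit ~ {A_complete:.2f}, PDF slope ~ {slope:.2f}")
```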
Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A
2006-11-01
A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
NASA Astrophysics Data System (ADS)
Rossi, M.; Apuani, T.; Felletti, F.
2009-04-01
The aim of this paper is to compare the results of two statistical methods for landslide susceptibility analysis: 1) a univariate probabilistic method based on a landslide susceptibility index, and 2) a multivariate method (logistic regression). The study area is the Febbraro valley, located in the central Italian Alps, where different types of metamorphic rocks crop out. On the eastern part of the studied basin a Quaternary cover, represented by colluvial and, secondarily, glacial deposits, is dominant. In this study 110 earth flows, mainly located toward the NE portion of the catchment, were analyzed. They involve only the colluvial deposits and their extension mainly ranges from 36 to 3173 m2. Both statistical methods require a spatial database, constructed using a Geographical Information System (GIS), in which each landslide is described by several parameters assigned at the central point of its main scarp. Based on a bibliographic review, a total of 15 predisposing factors were utilized. The width of the intervals in which the maps of the predisposing factors were reclassified was defined assuming constant intervals for: elevation (100 m), slope (5°), solar radiation (0.1 MJ/cm2/year), profile curvature (1.2 1/m), tangential curvature (2.2 1/m), drainage density (0.5), and lineament density (0.00126). For the other parameters, the results of probability-probability plot analysis and the statistical indexes of the landslide sites were used; in particular slope length (0 ÷ 2, 2 ÷ 5, 5 ÷ 10, 10 ÷ 20, 20 ÷ 35, 35 ÷ 260), accumulation flow (0 ÷ 1, 1 ÷ 2, 2 ÷ 5, 5 ÷ 12, 12 ÷ 60, 60 ÷ 27265), Topographic Wetness Index (0 ÷ 0.74, 0.74 ÷ 1.94, 1.94 ÷ 2.62, 2.62 ÷ 3.48, 3.48 ÷ 6.00, 6.00 ÷ 9.44), and Stream Power Index (0 ÷ 0.64, 0.64 ÷ 1.28, 1.28 ÷ 1.81, 1.81 ÷ 4.20, 4.20 ÷ 9.40). The geological map and the land use map were also used, with the geological and land use properties treated as categorical variables. Applying the univariate probabilistic method, the Landslide Susceptibility Index (LSI) is defined as the sum of the ratios Ra/Rb calculated for each predisposing factor, where Ra is the ratio between the number of pixels in a class and the total number of pixels in the study area, and Rb is the ratio between the number of landslides and the number of pixels in the interval area. From the analysis of the Ra/Rb ratio, the relationships between landslide occurrence and the predisposing factors were defined. The LSI equation was then used in the GIS to produce the landslide susceptibility maps. The multivariate method for landslide susceptibility analysis, based on logistic regression, was performed starting from the density maps of the predisposing factors, calculated with the intervals defined above using the equation Rb/Rbtot, where Rbtot is the sum of all Rb values. Using stepwise forward algorithms, the logistic regression was performed in two successive steps: first, univariate logistic regression was used to choose the most significant predisposing factors; then the multivariate logistic regression was performed. The univariate regression highlighted the importance of the following factors: elevation, accumulation flow, drainage density, lineament density, geology and land use. When the multivariate regression was applied, the number of controlling factors was reduced, with the geological properties neglected.
The resulting final susceptibility equation is: P = 1 / (1 + exp(-(6.46 - 22.34*elevation - 5.33*accumulation flow - 7.99*drainage density - 4.47*lineament density - 17.31*land use))), and this equation was used to obtain the susceptibility maps. To easily compare the results of the two methodologies, the susceptibility maps were reclassified into five susceptibility intervals (very high, high, moderate, low and very low) using natural breaks. The maps were then validated using two cumulative distribution curves, one related to the landslides (number of landslides in each susceptibility class) and one to the basin (number of pixels covering each class). Comparing the curves for each method shows that the two approaches (univariate and multivariate) are appropriate, providing acceptable results. In both maps the distribution of the high susceptibility condition is mainly localized on the left slope of the catchment, in agreement with the field evidence. The comparison between the methods was obtained by subtracting the two maps. This operation shows that about 40% of the basin is classified in the same susceptibility class by both methods. In general the univariate probabilistic method tends to overestimate the areal extension of the high susceptibility class with respect to the maps obtained by the logistic regression method.
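For clarity, the final multivariate equation reported above can be applied cell by cell as follows; the small arrays standing in for the factor density maps are placeholders, while the coefficients are those quoted in the text.

```python
import numpy as np

def susceptibility(elev, acc_flow, drain_dens, lin_dens, land_use):
    """Logistic landslide susceptibility from factor density maps (Rb/Rbtot)."""
    z = (6.46
         - 22.34 * elev
         - 5.33 * acc_flow
         - 7.99 * drain_dens
         - 4.47 * lin_dens
         - 17.31 * land_use)
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder 3x3 "density maps" (values in [0, 1]) standing in for GIS rasters
rng = np.random.default_rng(4)
maps = [rng.uniform(0, 1, size=(3, 3)) for _ in range(5)]
P = susceptibility(*maps)
print(np.round(P, 3))
```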
NASA Technical Reports Server (NTRS)
Marrs, R. W.; Evans, M. A.
1974-01-01
The author has identified the following significant results. The crop types of a Great Plains study area were mapped from color infrared aerial photography. Each field was positively identified from field checks in the area. Enlarged (50x) density contour maps were constructed from three ERTS-1 images taken in the summer of 1973. The map interpreted from the aerial photography was compared to the density contour maps, and the accuracy of the ERTS-1 density contour map interpretations was determined. Changes in the vegetation during the growing season and harvest periods were detectable on the ERTS-1 imagery. Density contouring aids in the detection of such changes.
Forcing of the Coupled Ionosphere-Thermosphere (IT) System During Magnetic Storms
NASA Technical Reports Server (NTRS)
Huang, Cheryl; Huang, Yanshi; Su, Yi-Jiun; Sutton, Eric; Hairston, Marc; Coley, W. Robin; Doornbos, Eelco; Zhang, Yongliang
2014-01-01
Poynting flux shows peaks around the auroral zone and inside the polar cap. Energy enters the IT system at all local times in the polar cap. Track-integrated flux at DMSP often peaks at polar latitudes, probably due to the increased area of the polar cap during storm main phases. Ion temperatures at DMSP show large increases in the polar region at all local times; the cusp and auroral zones do not show distinctively high Ti. Ion temperatures in the polar cap are higher than in the auroral zones during quiet times. Neutral densities at GRACE and GOCE show maxima at polar latitudes without clear auroral signatures; the response is fast, minutes from onset to density peaks. GUVI observations of the O/N2 ratio during storms show a similar response to the direct measurements of ion and neutral densities, i.e., high temperatures in the polar cap during the prestorm quiet period and heating proceeding from the polar cap to lower latitudes during the storm main phase. The discrepancy between maps of Poynting flux and of ion temperatures/neutral densities suggests that the connection between Poynting flux and Joule heating is not simple.
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
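The Bessel-function correspondence stated above can be checked numerically; this short sketch (not from the paper) draws isotropic 2-D contact forces with an exponential magnitude distribution and compares the histogram of one Cartesian component with (1/πF0)·K0(|f|/F0). The mean force F0 and sample size are arbitrary assumptions.

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(5)
F0 = 1.0                      # mean contact-force magnitude (assumed)
n = 2_000_000

# Isotropic 2-D forces with exponential magnitude distribution P(F) ~ exp(-F/F0)
F = rng.exponential(scale=F0, size=n)
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
fx = F * np.cos(theta)

# Empirical pdf of the Cartesian component vs. the modified-Bessel (K0) form
hist, edges = np.histogram(fx, bins=400, range=(-6, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = k0(np.abs(centers) / F0) / (np.pi * F0)

err = np.max(np.abs(hist - analytic)[np.abs(centers) > 0.1])
print(f"max |empirical - K0 form| away from the origin: {err:.4f}")
```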
Valdivia, Maria Pia; Stutman, Dan; Stoeckl, Christian; Mileham, Chad; Begishev, Ildar A; Bromage, Jake; Regan, Sean P
2018-01-10
Talbot-Lau x-ray interferometry uses incoherent x-ray sources to measure refraction index changes in matter. These measurements can provide accurate electron density mapping through phase retrieval. An adaptation of the interferometer has been developed in order to meet the specific requirements of high-energy density experiments. This adaptation is known as a moiré deflectometer, which allows for single-shot capabilities in the form of interferometric fringe patterns. The moiré x-ray deflectometry technique requires a set of object and reference images in order to provide electron density maps, which can be costly in the high-energy density environment. In particular, synthetic reference phase images obtained ex situ through a phase-scan procedure, can provide a feasible solution. To test this procedure, an object phase map was retrieved from a single-shot moiré image obtained from a plasma-produced x-ray source. A reference phase map was then obtained from phase-stepping measurements using a continuous x-ray tube source in a small laboratory setting. The two phase maps were used to retrieve an electron density map. A comparison of the moiré and phase-stepping phase-retrieval methods was performed to evaluate single-exposure plasma electron density mapping for high-energy density and other transient plasma experiments. It was found that a combination of phase-retrieval methods can deliver accurate refraction angle mapping. Once x-ray backlighter quality is optimized, the ex situ method is expected to deliver electron density mapping with improved resolution. The steps necessary for improved diagnostic performance are discussed.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled as a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
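As a rough illustration of the kernel-based susceptibility estimate described above (this sketch is not the plugin's code), a Gaussian kernel density estimate over past vent locations yields a spatial PDF on a grid; the vent coordinates and the Silverman bandwidth choice here are invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Hypothetical past vent locations (km), e.g. scattered along a volcanic rift
vents = np.vstack([
    rng.normal(10.0, 3.0, size=40),   # x coordinates
    rng.normal(5.0, 1.5, size=40),    # y coordinates
])

# Gaussian KDE; the bandwidth (smoothing parameter) controls the PDF smoothness
kde = gaussian_kde(vents, bw_method="silverman")

# Evaluate the spatial probability density of future vent opening on a grid
x, y = np.mgrid[0:20:200j, 0:10:100j]
pdf = kde(np.vstack([x.ravel(), y.ravel()])).reshape(x.shape)
pdf /= pdf.sum() * (x[1, 0] - x[0, 0]) * (y[0, 1] - y[0, 0])  # renormalize on the grid

print("peak susceptibility density:", pdf.max())
```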
United States Geological Survey fire science: fire danger monitoring and forecasting
Eidenshink, Jeff C.; Howard, Stephen M.
2012-01-01
Each day, the U.S. Geological Survey produces 7-day forecasts for all Federal lands of the distributions of number of ignitions, number of fires above a given size, and conditional probabilities of fires growing larger than a specified size. The large fire probability map is an estimate of the likelihood that ignitions will become large fires. The large fire forecast map is a probability estimate of the number of fires on Federal lands exceeding 100 acres in the forthcoming week. The ignition forecast map is a probability estimate of the number of fires on Federal land greater than 1 acre in the forthcoming week. The extreme event forecast is the probability estimate of the number of fires on Federal land that may exceed 5,000 acres in the forthcoming week.
Zhou, Gaofeng; Jian, Jianbo; Wang, Penghao; Li, Chengdao; Tao, Ye; Li, Xuan; Renshaw, Daniel; Clements, Jonathan; Sweetingham, Mark; Yang, Huaan
2018-01-01
An ultra-high density genetic map containing 34,574 sequence-defined markers was developed in Lupinus angustifolius. Markers closely linked to nine genes of agronomic traits were identified. A physical map was improved to cover 560.5 Mb genome sequence. Lupin (Lupinus angustifolius L.) is a recently domesticated legume grain crop. In this study, we applied the restriction-site associated DNA sequencing (RADseq) method to genotype an F 9 recombinant inbred line population derived from a wild type × domesticated cultivar (W × D) cross. A high density linkage map was developed based on the W × D population. By integrating sequence-defined DNA markers reported in previous mapping studies, we established an ultra-high density consensus genetic map, which contains 34,574 markers consisting of 3508 loci covering 2399 cM on 20 linkage groups. The largest gap in the entire consensus map was 4.73 cM. The high density W × D map and the consensus map were used to develop an improved physical map, which covered 560.5 Mb of genome sequence data. The ultra-high density consensus linkage map, the improved physical map and the markers linked to genes of breeding interest reported in this study provide a common tool for genome sequence assembly, structural genomics, comparative genomics, functional genomics, QTL mapping, and molecular plant breeding in lupin.
Tsunami probability in the Caribbean Region
Parsons, T.; Geist, E.L.
2008-01-01
We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ∼500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically-calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerical-modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0-30% regionally. © Birkhäuser 2008.
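Under the Poissonian model used above, the multi-decade runup probability in a coastal cell follows directly from its mean runup rate; a one-step sketch with an assumed rate:

```python
import numpy as np

rate = 1.0 / 200.0          # assumed mean rate of >0.5 m runups per year in one cell
T = 30.0                    # exposure window in years
P = 1.0 - np.exp(-rate * T) # Poissonian probability of at least one event in T years
print(f"30-year runup probability: {P:.1%}")
```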
Hydrologic controls on basin-scale distribution of benthic macroinvertebrates
NASA Astrophysics Data System (ADS)
Bertuzzo, E.; Ceola, S.; Singer, G. A.; Battin, T. J.; Montanari, A.; Rinaldo, A.
2013-12-01
The presentation deals with the role of streamflow variability in basin-scale distributions of benthic macroinvertebrates. Specifically, we present a probabilistic analysis of the impacts of the variability, along the river network, of relevant hydraulic variables on the density of benthic macroinvertebrate species. The relevance of this work lies in the implications of predicting macroinvertebrate patterns within a catchment for fluvial ecosystem health, since macroinvertebrates are commonly used as sensitive indicators, and for assessing the effects of anthropogenic activity. The analytical tools presented here outline a novel procedure of a general nature aiming at a spatially explicit quantitative assessment of how near-bed flow variability affects benthic macroinvertebrate abundance. Moving from the analytical characterization of the at-a-site probability distribution functions (pdfs) of streamflow and bottom shear stress, a spatial extension to the whole river network is performed, yielding spatial maps of streamflow and bottom shear stress. Then, the bottom shear stress pdf, coupled with habitat suitability curves (empirical relations between species density and bottom shear stress) derived from field studies, is used to produce maps of macroinvertebrate suitability to shear stress conditions. Thus, moving from measured hydrologic conditions, possible effects of river streamflow alterations on macroinvertebrate densities can be assessed. We apply this framework to an Austrian river network, used as a benchmark for the analysis, for which rainfall and streamflow time series, river network hydraulic properties, and macroinvertebrate density data are available. A comparison between observed and modeled species density at three locations along the examined river network is also presented. Although the proposed approach focuses on a single controlling factor, it has important implications for water resources management and fluvial ecosystem protection.
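The coupling described above can be sketched as an expectation of a habitat-suitability curve over the at-a-site bottom shear stress pdf; the log-normal stress pdf and the bell-shaped suitability curve below are illustrative assumptions, not the study's fitted relations.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import quad

# Assumed at-a-site pdf of bottom shear stress tau (Pa): log-normal
tau_pdf = lognorm(s=0.8, scale=5.0).pdf

# Assumed habitat suitability curve: species density (ind/m^2) vs. shear stress
def suitability(tau):
    return 200.0 * np.exp(-((tau - 4.0) ** 2) / (2.0 * 2.0 ** 2))

# Expected species density = integral of suitability weighted by the stress pdf
expected_density, _ = quad(lambda t: suitability(t) * tau_pdf(t), 0.0, 50.0)
print(f"expected density ~ {expected_density:.1f} individuals per m^2")
```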
Canopy Density Mapping on Ultracam-D Aerial Imagery in Zagros Woodlands, Iran
NASA Astrophysics Data System (ADS)
Erfanifard, Y.; Khodaee, Z.
2013-09-01
Canopy density maps express different characteristics of forest stands, especially in woodlands. Obtaining such maps by field measurements is expensive and time-consuming, so it is necessary to find suitable techniques to produce them for use in the sustainable management of woodland ecosystems. In this research, a robust procedure was suggested to obtain these maps from very high spatial resolution aerial imagery. The study aimed to produce canopy density maps from UltraCam-D aerial imagery newly acquired over Zagros woodlands by the Iran National Geographic Organization (NGO). A 30 ha plot of Persian oak (Quercus persica) coppice trees was selected in Zagros woodlands, Iran. The very high spatial resolution aerial imagery of the plot, purchased from the NGO, was classified by the kNN technique and the tree crowns were extracted precisely. The canopy density was determined in each cell of meshes of different sizes overlaid on the study area map. The accuracy of the final maps was assessed against ground truth obtained by complete field measurements. The results showed that the proposed method of obtaining canopy density maps was efficient enough in the study area. The final canopy density map, obtained with a mesh of 30 are (3000 m2) cell size, had 80% overall accuracy and a KHAT coefficient of agreement of 0.61, which shows great agreement with the observed samples. This method can also be tested in other case studies to reveal its capability for canopy density map production in woodlands.
Stochastic analysis of a pulse-type prey-predator model
NASA Astrophysics Data System (ADS)
Wu, Y.; Zhu, W. Q.
2008-04-01
A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
Donato, Mary M.
2000-01-01
As ground water continues to provide an ever-growing proportion of Idaho's drinking water, concerns about the quality of that resource are increasing. Pesticides (most commonly, atrazine/desethyl-atrazine, hereafter referred to as atrazine) and nitrite plus nitrate as nitrogen (hereafter referred to as nitrate) have been detected in many aquifers in the State. To provide a sound hydrogeologic basis for atrazine and nitrate management in southern Idaho—the largest region of land and water use in the State—the U.S. Geological Survey produced maps showing the probability of detecting these contaminants in ground water in the upper Snake River Basin (published in a 1998 report) and the western Snake River Plain (published in this report). The atrazine probability map for the western Snake River Plain was constructed by overlaying ground-water quality data with hydrogeologic and anthropogenic data in a geographic information system (GIS). A data set was produced in which each well had corresponding information on land use, geology, precipitation, soil characteristics, regional depth to ground water, well depth, water level, and atrazine use. These data were analyzed by logistic regression using a statistical software package. Several preliminary multivariate models were developed and those that best predicted the detection of atrazine were selected. The multivariate models then were entered into a GIS and the probability maps were produced. Land use, precipitation, soil hydrologic group, and well depth were significantly correlated with atrazine detections in the western Snake River Plain. These variables also were important in the 1998 probability study of the upper Snake River Basin. The effectiveness of the probability models for atrazine might be improved if more detailed data were available for atrazine application. A preliminary atrazine probability map for the entire Snake River Plain in Idaho, based on a data set representing that region, also was produced. In areas where this map overlaps the 1998 map of the upper Snake River Basin, the two maps show broadly similar probabilities of detecting atrazine. Logistic regression also was used to develop a preliminary statistical model that predicts the probability of detecting elevated nitrate in the western Snake River Plain. A nitrate probability map was produced from this model. Results showed that elevated nitrate concentrations were correlated with land use, soil organic content, well depth, and water level. Detailed information on nitrate input, specifically fertilizer application, might have improved the effectiveness of this model.
A Multiscale pipeline for the search of string-induced CMB anisotropies
NASA Astrophysics Data System (ADS)
Vafaei Sadr, A.; Movahed, S. M. S.; Farhang, M.; Ringeval, C.; Bouchet, F. R.
2018-03-01
We propose a multiscale edge-detection algorithm to search for the Gott-Kaiser-Stebbins imprints of a cosmic string (CS) network on the cosmic microwave background (CMB) anisotropies. Curvelet decomposition and the extended Canny algorithm are used to enhance the string detectability. Various statistical tools are then applied to quantify the deviation of CMB maps having a CS contribution with respect to pure Gaussian anisotropies of inflationary origin. These statistical measures include the one-point probability density function, the weighted two-point correlation function (TPCF) of the anisotropies, the unweighted TPCF of the peaks and of the up-crossing map, as well as their cross-correlation. We use this algorithm on a hundred simulated Nambu-Goto CMB flat-sky maps, covering approximately 10 per cent of the sky, and for different string tensions Gμ. On noiseless sky maps with an angular resolution of 0.9 arcmin, we show that our pipeline detects CSs with Gμ as low as Gμ ≳ 4.3 × 10⁻¹⁰. At the same resolution, but with a noise level typical of a CMB-S4 phase II experiment, the detection threshold would be Gμ ≳ 1.2 × 10⁻⁷.
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
Chang, Edward F; Breshears, Jonathan D; Raygor, Kunal P; Lau, Darryl; Molinaro, Annette M; Berger, Mitchel S
2017-01-01
OBJECTIVE Functional mapping using direct cortical stimulation is the gold standard for the prevention of postoperative morbidity during resective surgery in dominant-hemisphere perisylvian regions. Its role is necessitated by the significant interindividual variability that has been observed for essential language sites. The aim in this study was to determine the statistical probability distribution of eliciting aphasic errors for any given stereotactically based cortical position in a patient cohort and to quantify the variability at each cortical site. METHODS Patients undergoing awake craniotomy for dominant-hemisphere primary brain tumor resection between 1999 and 2014 at the authors' institution were included in this study, which included counting and picture-naming tasks during dense speech mapping via cortical stimulation. Positive and negative stimulation sites were collected using an intraoperative frameless stereotactic neuronavigation system and were converted to Montreal Neurological Institute coordinates. Data were iteratively resampled to create mean and standard deviation probability maps for speech arrest and anomia. Patients were divided into groups with a "classic" or an "atypical" location of speech function, based on the resultant probability maps. Patient and clinical factors were then assessed for their association with an atypical location of speech sites by univariate and multivariate analysis. RESULTS Across 102 patients undergoing speech mapping, the overall probabilities of speech arrest and anomia were 0.51 and 0.33, respectively. Speech arrest was most likely to occur with stimulation of the posterior inferior frontal gyrus (maximum probability from individual bin = 0.025), and variance was highest in the dorsal premotor cortex and the posterior superior temporal gyrus. In contrast, stimulation within the posterior perisylvian cortex resulted in the maximum mean probability of anomia (maximum probability = 0.012), with large variance in the regions surrounding the posterior superior temporal gyrus, including the posterior middle temporal, angular, and supramarginal gyri. Patients with atypical speech localization were far more likely to have tumors in canonical Broca's or Wernicke's areas (OR 7.21, 95% CI 1.67-31.09, p < 0.01) or to have multilobar tumors (OR 12.58, 95% CI 2.22-71.42, p < 0.01), than were patients with classic speech localization. CONCLUSIONS This study provides statistical probability distribution maps for aphasic errors during cortical stimulation mapping in a patient cohort. Thus, the authors provide an expected probability of inducing speech arrest and anomia from specific 10-mm² cortical bins in an individual patient. In addition, they highlight key regions of interindividual mapping variability that should be considered preoperatively. They believe these results will aid surgeons in their preoperative planning of eloquent cortex resection.
THE DARKEST SHADOWS: DEEP MID-INFRARED EXTINCTION MAPPING OF A MASSIVE PROTOCLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Michael J.; Tan, Jonathan C.; Kainulainen, Jouni
We use deep 8 μm Spitzer-IRAC imaging of the massive Infrared Dark Cloud (IRDC) G028.37+00.07 to construct a mid-infrared (MIR) extinction map that probes mass surface densities up to Σ ∼ 1 g cm⁻² (A_V ∼ 200 mag), amongst the highest values yet probed by extinction mapping. Merging with an NIR extinction map of the region creates a high dynamic range map that reveals structures down to A_V ∼ 1 mag. We utilize the map to: (1) measure a cloud mass of ∼7 × 10⁴ M⊙ within a radius of ∼8 pc. ¹³CO kinematics indicate that the cloud is gravitationally bound. It thus has the potential to form one of the most massive young star clusters known in the Galaxy. (2) Characterize the structures of 16 massive cores within the IRDC, finding they can be fit by singular polytropic spheres with ρ ∝ r^(−k_ρ) and k_ρ = 1.3 ± 0.3. They have mean Σ ≃ 0.1-0.4 g cm⁻², relatively low values that, along with their measured cold temperatures, suggest that magnetic fields, rather than accretion-powered radiative heating, are important for controlling fragmentation of these cores. (3) Determine the Σ (equivalently column density or A_V) probability distribution function (PDF) for a region that is nearly complete for A_V > 3 mag. The PDF is well fit by a single log-normal with mean A_V ≃ 9 mag, high compared to other known clouds. It does not exhibit a separate high-end power-law tail, which has been claimed to indicate the importance of self-gravity. However, we suggest that the PDF does result from a self-similar, self-gravitating hierarchy of structures present over a wide range of scales in the cloud.
Galaxy clustering with photometric surveys using PDF redshift information
Asorey, J.; Carrasco Kind, M.; Sevilla-Noarbe, I.; ...
2016-03-28
Here, photometric surveys produce large-area maps of the galaxy distribution, but with less accurate redshift information than is obtained from spectroscopic methods. Modern photometric redshift (photo-z) algorithms use galaxy magnitudes, or colors, that are obtained through multi-band imaging to produce a probability density function (PDF) for each galaxy in the map. We used simulated data to study the effect of using different photo-z estimators to assign galaxies to redshift bins in order to compare their effects on angular clustering and galaxy bias measurements. We found that if we use the entire PDF, rather than a single-point (mean or mode) estimate, the deviations are less biased, especially when using narrow redshift bins. When the redshift bin widths are Δz = 0.1, the use of the entire PDF reduces the typical measurement bias from 5%, when using single point estimates, to 3%.
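A toy sketch of the binning comparison (ours, not the paper's pipeline): each galaxy's photo-z PDF contributes its probability mass to every redshift bin, instead of the galaxy being counted once at its point estimate; the Gaussian per-galaxy PDFs and scatter values are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n_gal = 10_000
z_true = rng.uniform(0.2, 1.0, size=n_gal)

# Assumed per-galaxy photo-z PDFs: Gaussians centred near z_true with scatter
z_mean = z_true + rng.normal(0.0, 0.03, size=n_gal)
z_sigma = np.full(n_gal, 0.05)

bins = np.arange(0.2, 1.01, 0.1)   # Delta z = 0.1 tomographic bins

# Point-estimate counts: each galaxy falls entirely in the bin of its mean
counts_point, _ = np.histogram(z_mean, bins=bins)

# Full-PDF counts: each galaxy contributes its PDF mass inside every bin
cdf = norm.cdf(bins[None, :], loc=z_mean[:, None], scale=z_sigma[:, None])
counts_pdf = np.diff(cdf, axis=1).sum(axis=0)

print("point-estimate counts:", counts_point)
print("full-PDF counts      :", np.round(counts_pdf, 1))
```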
Comparison of an Atomic Model and Its Cryo-EM Image at the Central Axis of a Helix
He, Jing; Zeil, Stephanie; Hallak, Hussam; McKaig, Kele; Kovacs, Julio; Wriggers, Willy
2016-01-01
Cryo-electron microscopy (cryo-EM) is an important biophysical technique that produces three-dimensional (3D) density maps at different resolutions. Because more and more models are being produced from cryo-EM density maps, validation of the models is becoming important. We propose a method for measuring local agreement between a model and the density map using the central axis of the helix. This method was tested using 19 helices from cryo-EM density maps between 5.5 Å and 7.2 Å resolution and 94 helices from simulated density maps. This method distinguished most of the well-fitting helices, although challenges exist for shorter helices. PMID:27280059
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
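The positivity problem mentioned above is easy to exhibit numerically; this sketch (not from the paper) evaluates a truncated Gram-Charlier A series with modest skewness and excess kurtosis and reports where the resulting "density" dips below zero.

```python
import numpy as np
from scipy.stats import norm

def gram_charlier_a(x, skew, ex_kurt):
    """Truncated Gram-Charlier A series around a standard Gaussian."""
    he3 = x**3 - 3.0 * x                   # probabilists' Hermite He3
    he4 = x**4 - 6.0 * x**2 + 3.0          # probabilists' Hermite He4
    return norm.pdf(x) * (1.0 + skew / 6.0 * he3 + ex_kurt / 24.0 * he4)

x = np.linspace(-6, 6, 2001)
f = gram_charlier_a(x, skew=0.8, ex_kurt=0.5)

negative = x[f < 0]
print("series goes negative on roughly",
      f"[{negative.min():.2f}, {negative.max():.2f}]" if negative.size else "nowhere")
```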
Segmentation and automated measurement of chronic wound images: probability map approach
NASA Astrophysics Data System (ADS)
Ahmad Fauzi, Mohammad Faizal; Khansa, Ibrahim; Catignani, Karen; Gordillo, Gayle; Sen, Chandan K.; Gurcan, Metin N.
2014-03-01
An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than 25 billion US dollars and countless hours spent annually on all aspects of chronic wound care. There is a need to develop software tools that analyze wound images, characterize wound tissue composition, measure wound size, and monitor changes over time. This process, when done manually, is time-consuming and subject to intra- and inter-reader variability. In this paper, we propose a method that can characterize chronic wounds containing granulation, slough and eschar tissues. First, we generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the region growing segmentation process. The red, yellow and black probability maps are designed to handle the granulation, slough and eschar tissues, respectively, found in wound tissues, while the white probability map is designed to detect the white label card for measurement calibration purposes. The innovative aspects of this work include: 1) definition of a wound-characteristics-specific probability map for segmentation; 2) computationally efficient region growing on a 4D map; 3) auto-calibration of measurements with the content of the image. The method was applied to 30 wound images provided by the Ohio State University Wexner Medical Center, with the ground truth independently generated by the consensus of two clinicians. While the inter-reader agreement between the readers is 85.5%, the computer achieves an accuracy of 80%.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step maximum a posteriori (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint, and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45 degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. The details will be reported at the meeting.
NASA Astrophysics Data System (ADS)
McKague, Darren Shawn
2001-12-01
The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel-by-pixel that can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive, and typically don't provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions. The method can retrieve joint distributions of parameters that allows for the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two- dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance to retrieval parameter PDF transformation given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation parameter radiance space transformation. The impact of vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields. The beamfilling factors vary quite a bit depending upon the horizontal structure of the rain. The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)
Yarkovsky-driven Impact Predictions: Apophis and 1950 DA
NASA Astrophysics Data System (ADS)
Farnocchia, Davide; Chesley, S. R.; Chodas, P.; Milani, A.
2013-05-01
Orbit determination for Near-Earth Asteroids presents unique technical challenges due to the imperative of early detection and careful assessment of the risk posed by specific Earth close approaches. The occurrence of an Earth impact can be decisively driven by the Yarkovsky effect, which is the most important nongravitational perturbation as it causes asteroids to undergo a secular variation in semimajor axis resulting in a quadratic effect in anomaly. We discuss the cases of (99942) Apophis and (29075) 1950 DA. The relevance of the Yarkovsky effect for Apophis is due to a scattering close approach in 2029 with minimum geocentric distance ~38000 km. For 1950 DA the influence of the Yarkovsky effect in 2880 is due to the long time interval preceding the impact. We use the available information on the asteroids' physical models as a starting point for a Monte Carlo method that allows us to measure how the Yarkovsky effect affects orbital predictions. For Apophis we map onto the 2029 close approach b-plane and analyze the keyholes corresponding to resonant close approaches. For 1950 DA we use the b-plane corresponding to the possible impact in 2880. We finally compute the impact probability from the mapped probability density function on the considered b-plane.
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
Calibration of the DRASTIC ground water vulnerability mapping method
Rupert, M.G.
2001-01-01
Ground water vulnerability maps developed using the DRASTIC method have been produced in many parts of the world. Comparisons of those maps with actual ground water quality data have shown that the DRASTIC method is typically a poor predictor of ground water contamination. This study significantly improved the effectiveness of a modified DRASTIC ground water vulnerability map by calibrating the point rating schemes to actual ground water quality data by using nonparametric statistical techniques and a geographic information system. Calibration was performed by comparing data on nitrite plus nitrate as nitrogen (NO2 + NO3-N) concentrations in ground water to land-use, soils, and depth to first-encountered ground water data. These comparisons showed clear statistical differences between NO2 + NO3-N concentrations and the various categories. Ground water probability point ratings for NO2 + NO3-N contamination were developed from the results of these comparisons, and a probability map was produced. This ground water probability map was then correlated with an independent set of NO2 + NO3-N data to demonstrate its effectiveness in predicting elevated NO2 + NO3-N concentrations in ground water. This correlation demonstrated that the probability map was effective, but a vulnerability map produced with the uncalibrated DRASTIC method in the same area and using the same data layers was not effective. Considerable time and expense have been outlaid to develop ground water vulnerability maps with the DRASTIC method. This study demonstrates a cost-effective method to improve and verify the effectiveness of ground water vulnerability maps.
Dynamical history of a binary cluster: Abell 3653
NASA Astrophysics Data System (ADS)
Caglar, Turgay; Hudaverdi, Murat
2017-12-01
We study the dynamical structure of the bimodal galaxy cluster Abell 3653 at z = 0.1089 using optical and X-ray data. Observations include archival data from the Anglo-Australian Telescope and the X-ray observatories XMM-Newton and Chandra. We draw a global picture of A3653 using galaxy density, X-ray luminosity and temperature maps. The galaxy distribution has a regular morphological shape on a 3 Mpc scale. The galaxy density map shows an elongation in the east-west direction, which aligns perfectly with the extended diffuse X-ray emission. We detect two dominant groups around the two brightest cluster galaxies (BCGs). BCG1 (z = 0.1099) can be associated with the main cluster A3653E, and a foreground subcluster A3653W is concentrated at BCG2 (z = 0.1075). Both X-ray peaks are dislocated from the BCGs by ∼35 kpc, which suggests an ongoing merger process. We measure subcluster gas temperatures of 4.67 and 3.66 keV, respectively. Two-body dynamical analysis shows that A3653E and A3653W are very likely gravitationally bound (93.5 per cent probability). The highly favoured scenario suggests that the two subclusters have a mass ratio of 1.4 and are colliding close to the plane of the sky (α = 17.61°) at 2400 km s-1, and will undergo core passage in 380 Myr. The temperature map also shows significantly shock-heated gas (6.16 keV) between the subclusters, which confirms the supersonic infall scenario.
Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y
2018-01-01
To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5x4.5mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60yr group was significantly lower compared to RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age, however ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method in detecting changes to peripapillary perfusion in disease.
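A minimal sketch of age-matched deviation mapping as described above, assuming hypothetical normative mean and standard-deviation maps for one decade of life; the pixel-wise z-score flags regions where a patient's perfusion density departs from the age-matched norm.

```python
import numpy as np

def deviation_map(patient_density, mean_map, sd_map):
    """Pixel-wise z-score of a patient's perfusion-density map
    relative to an age-matched normative mean and SD map."""
    return (patient_density - mean_map) / sd_map

# hypothetical 100x100 normative maps for one decade of life
rng = np.random.default_rng(1)
mean_map = np.full((100, 100), 42.5)          # % perfusion density
sd_map = np.full((100, 100), 1.5)
patient = mean_map + rng.normal(scale=2.0, size=(100, 100))

z = deviation_map(patient, mean_map, sd_map)
abnormal = np.abs(z) > 2.0                    # flag pixels beyond 2 SD
print(f"{abnormal.mean():.1%} of pixels deviate by more than 2 SD")
```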
The frequency-domain approach for apparent density mapping
NASA Astrophysics Data System (ADS)
Tong, T.; Guo, L.
2017-12-01
Apparent density mapping is a technique for estimating the density distribution in a subsurface layer from observed gravity data. It has been widely applied for geologic mapping, tectonic studies and mineral exploration for decades. Apparent density mapping usually models the density layer as a collection of vertical, juxtaposed prisms in both horizontal directions, whose top and bottom surfaces are assumed to be horizontal or of variable depth, and then inverts or deconvolves the gravity anomalies to determine the density of each prism. Conventionally, the frequency-domain approach, which assumes that both the top and bottom surfaces of the layer are horizontal, is used for fast density mapping. However, this assumption is not always valid in the real world, since either the top surface or the bottom surface may vary in depth. Here, we present a frequency-domain approach for apparent density mapping that permits both the top and bottom surfaces of the layer to vary in depth. We first derive the formula for forward calculation of gravity anomalies caused by a density layer whose top and bottom surfaces are variable in depth, and the formula for inversion of gravity anomalies for the density distribution. We then propose a procedure for density mapping based on both the inversion and forward-calculation formulas. We tested the approach on synthetic data, which verified its effectiveness. We also tested the approach on real Bouguer gravity anomaly data from central South China. The top surface was assumed to be flat at sea level, and the bottom surface was taken as the Moho. The result presents the crustal density distribution, which coincides well with the basic tectonic features in the study area.
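The sketch below illustrates the general frequency-domain workflow with a much-simplified model: a thin equivalent layer at constant depth, for which the forward relation F[g] = 2*pi*G*exp(-|k|*depth)*F[sigma] holds, inverted with a stabilizing low-pass cut. The paper's formulation for variable top and bottom surfaces is more involved; all grid parameters and the cutoff here are illustrative.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def apparent_surface_density(gravity, dx, depth, k_cut):
    """Recover the surface density of a thin sheet at constant depth from gridded
    gravity data, using F[g] = 2*pi*G*exp(-|k|*depth)*F[sigma].
    A hard low-pass cut at |k| = k_cut stabilizes the downward continuation."""
    ny, nx = gravity.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)

    g_hat = np.fft.fft2(gravity)
    sigma_hat = np.zeros_like(g_hat)
    keep = (k > 0) & (k < k_cut)                      # low-pass stabilization
    sigma_hat[keep] = g_hat[keep] * np.exp(k[keep] * depth) / (2 * np.pi * G)
    return np.real(np.fft.ifft2(sigma_hat))

# quick round-trip check on synthetic data
dx, depth = 1000.0, 5000.0                            # 1 km grid, 5 km deep layer
sigma_true = np.zeros((128, 128))
sigma_true[50:70, 60:90] = 500.0                      # kg/m^2 surface density block
k = np.sqrt((2 * np.pi * np.fft.fftfreq(128, dx))[None, :] ** 2 +
            (2 * np.pi * np.fft.fftfreq(128, dx))[:, None] ** 2)
g = np.real(np.fft.ifft2(2 * np.pi * G * np.exp(-k * depth) * np.fft.fft2(sigma_true)))
sigma_est = apparent_surface_density(g, dx, depth, k_cut=2e-3)
```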
SE Great Basin Play Fairway Analysis
Adam Brandt
2015-11-15
This submission includes a map of the probability that Na/K geothermometer temperatures exceed 200 deg C, as well as two play fairway analysis (PFA) models. The probability map acts as a composite risk segment for the PFA models. The PFA models differ in their application of magnetotelluric conductors as composite risk segments. These PFA models map out the geothermal potential of the SE Great Basin region, Utah.
The physics and chemistry of the L134N molecular core
NASA Technical Reports Server (NTRS)
Swade, Daryl A.
1989-01-01
The dark cloud L134N is studied in detail via millimeter- and centimeter-wavelength emission-line spectra. A high-density core of molecular gas exists in L134N which has a kinetic temperature of about 12 K, a peak molecular hydrogen density of about 10^4.5 per cu cm, and a mass of about 23 solar masses. The core may be the site of future star formation. Maps of emission from (C-18)O, CS, H(C-13)O(+), SO, NH3, and C3H2 reveal morphologically different distributions resulting in part from both varying physical conditions within the cloud and optical depth effects. Significant differences also exist which are probably due to chemical abundance variations. A consistent set of LTE chemical abundances has been estimated at as many as seven positions, which can be used to constrain chemical models of dark clouds.
PDF approach for turbulent scalar field: Some recent developments
NASA Technical Reports Server (NTRS)
Gao, Feng
1993-01-01
The probability density function (PDF) method has proven to be a very useful approach in turbulence research. It has been particularly effective in simulating turbulent reacting flows and in studying some detailed statistical properties generated by a turbulent field. There are, however, some important questions that have yet to be answered in PDF studies. Our efforts in the past year have been focused on two areas. First, a simple mixing model suitable for Monte Carlo simulations has been developed based on the mapping closure. Secondly, the mechanism of turbulent transport has been analyzed in order to understand the recently observed abnormal PDFs of turbulent temperature fields generated by linear heat sources.
Emergent quantum mechanics without wavefunctions
NASA Astrophysics Data System (ADS)
Mesa Pascasio, J.; Fussy, S.; Schwabl, H.; Grössing, G.
2016-03-01
We present our model of an Emergent Quantum Mechanics which can be characterized by “realism without pre-determination”. This is illustrated by our analytic description and corresponding computer simulations of Bohmian-like “surreal” trajectories, which are obtained classically, i.e. without the use of any quantum mechanical tool such as wavefunctions. However, these trajectories do not necessarily represent ontological paths of particles but rather mappings of the probability density flux in a hydrodynamical sense. Modelling emergent quantum mechanics in a high-low intensity double slit scenario gives rise to the “quantum sweeper effect” with a characteristic intensity pattern. This phenomenon should be experimentally testable via weak measurement techniques.
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Duputel, Z.; Simons, M.; Jiang, J.; Riel, B. V.; Moore, A. W.; Owen, S. E.
2017-12-01
Mapping subsurface fault slip during the different phases of the seismic cycle provides a probe of the mechanical properties and the state of stress along these faults. We focus on the northern Chile megathrust where first order estimates of interseismic fault locking suggests little to no overlap between regions slipping seismically versus those that are dominantly aseismic. However, published distributions of slip, be they during seismic or aseismic phases, rely on unphysical regularization of the inverse problem, thereby cluttering attempts to quantify the degree of overlap between seismic and aseismic slip. Considering all the implications of aseismic slip on our understanding of the nucleation, propagation and arrest of seismic ruptures, it is of utmost importance to quantify our confidence in the current description of fault coupling. Here, we take advantage of 20 years of InSAR observations and more than a decade of GPS measurements to derive probabilistic maps of inter-seismic coupling, as well as co-seismic and post-seismic slip along the northern Chile subduction megathrust. A wide InSAR velocity map is derived using a novel multi-pixel time series analysis method accounting for orbital errors, atmospheric noise and ground deformation. We use AlTar, a massively parallel Monte Carlo Markov Chain algorithm exploiting the acceleration capabilities of Graphic Processing Units, to derive the probability density functions (PDF) of slip. In northern Chile, we find high probabilities for a complete release of the elastic strain accumulated since the 1877 earthquake by the 2014, Iquique earthquake and for the presence of a large, independent, locked asperity left untapped by recent events, north of the Mejillones peninsula. We evaluate the probability of overlap between the co-, inter- and post-seismic slip and consider the potential occurrence of slow, aseismic slip events along this portion of the subduction zone.
Estimating loblolly pine size-density trajectories across a range of planting densities
Curtis L. VanderSchaaf; Harold E. Burkhart
2013-01-01
Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...
A method for mapping flood hazard along roads.
Kalantari, Zahra; Nickman, Alireza; Lyon, Steve W; Olofsson, Bo; Folkeson, Lennart
2014-01-15
A method was developed for estimating and mapping flood hazard probability along roads using road and catchment characteristics as physical catchment descriptors (PCDs). The method uses a Geographic Information System (GIS) to derive candidate PCDs and then identifies those PCDs that significantly predict road flooding using a statistical modelling approach. The method thus allows flood hazards to be estimated and also provides insights into the relative roles of landscape characteristics in determining road-related flood hazards. The method was applied to an area in western Sweden where severe road flooding had occurred during an intense rain event as a case study to demonstrate its utility. The results suggest that for this case study area three categories of PCDs are useful for prediction of critical spots prone to flooding along roads: i) topography, ii) soil type, and iii) land use. The main drivers among the PCDs considered were a topographical wetness index, road density in the catchment, soil properties in the catchment (mainly the amount of gravel substrate) and local channel slope at the site of a road-stream intersection. These can be proposed as strong indicators for predicting the flood probability in ungauged river basins in this region, but some care is needed in generalising the case study results, as other potential factors are also likely to influence the flood hazard probability. Overall, the method proposed represents a straightforward and consistent way to estimate flooding hazards to inform both the planning of future roadways and the maintenance of existing roadways. Copyright © 2013 Elsevier Ltd. All rights reserved.
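A minimal sketch of the statistical modelling step described above, assuming hypothetical PCD values and flooding outcomes and using a logistic regression as the predictive model (the abstract does not name this particular model); the fitted probabilities would then be mapped back onto road segments in a GIS.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical physical catchment descriptors (PCDs) at road-stream intersections:
# columns: topographic wetness index, road density, gravel fraction, channel slope
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))

# hypothetical flooding outcome during the rain event (True = flooded)
flooded = (0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.4 * X[:, 3]
           + rng.normal(scale=0.5, size=200)) > 0.5

model = LogisticRegression().fit(X, flooded)
flood_probability = model.predict_proba(X)[:, 1]   # per-site flood hazard probability
```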
Yarkovsky-driven Impact Predictions: Apophis and 1950 DA
NASA Astrophysics Data System (ADS)
Chesley, Steven R.; Farnocchia, D.; Chodas, P. W.; Milani, A.
2013-10-01
Orbit determination for Near-Earth Asteroids presents unique technical challenges due to the imperative of early detection and careful assessment of the risk posed by specific Earth close approaches. The occurrence of an Earth impact can be decisively driven by the Yarkovsky effect, which is the most important nongravitational perturbation as it causes asteroids to undergo a secular variation in semimajor axis resulting in a quadratic effect in anomaly. We discuss the cases of (99942) Apophis and (29075) 1950 DA. The relevance of the Yarkovsky effect for Apophis is due to a scattering close approach in 2029 with minimum geocentric distance ~38000 km. For 1950 DA the influence of the Yarkovsky effect in 2880 is due to the long time interval preceding the impact. We use the available information from the astrometry and the asteroids' physical models and dynamical evolution as a starting point for a Monte Carlo method that allows us to measure how the Yarkovsky effect affects orbital predictions. We also find that 1950 DA has a 98% likelihood of being a retrograde rotator. For Apophis we map onto the 2029 close approach b-plane and analyze the keyholes corresponding to resonant close approaches. For 1950 DA we use the b-plane corresponding to the possible impact in 2880. We finally compute the impact probability from the mapped probability density function on the considered b-plane. For Apophis we find a 4-in-a-million chance of an impact in 2068, while the probability of Earth impact in 2880 for 1950 DA is 0.04%.
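A toy illustration, not the authors' actual computation, of the final step described above: given a mapped one-dimensional Gaussian probability density on the b-plane, the impact probability is the fraction of Monte Carlo samples falling within an assumed impact corridor. All numbers are invented for illustration.

```python
import numpy as np

# Hypothetical 1-D Gaussian uncertainty along the b-plane coordinate that
# controls the impact (values in Earth radii); both numbers are illustrative.
mu_zeta, sigma_zeta = 3.0, 1.2        # mean and spread of the mapped PDF
impact_half_width = 1.0               # impact corridor half-width on the b-plane

rng = np.random.default_rng(3)
samples = rng.normal(mu_zeta, sigma_zeta, size=1_000_000)   # Monte Carlo clones
impact_probability = np.mean(np.abs(samples) < impact_half_width)
print(f"impact probability = {impact_probability:.2e}")
```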
Data-driven probability concentration and sampling on manifold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2016-09-15
A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
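A minimal sketch of only the diffusion-maps ingredient (item iii above), assuming a small synthetic dataset concentrated near a curve: Gaussian affinities are row-normalized into a Markov transition matrix whose leading nontrivial eigenvectors give reduced coordinates. The KDE, MCMC, and reduced-order sampling steps of the full methodology are omitted, and the kernel scale is an arbitrary choice.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_map_coordinates(X, epsilon, n_coords=3):
    """Diffusion-maps coordinates of a dataset X (n_samples x n_features):
    Gaussian affinities -> row-normalized Markov matrix -> leading eigenvectors."""
    d2 = cdist(X, X, "sqeuclidean")
    K = np.exp(-d2 / epsilon)                     # Gaussian kernel affinity
    P = K / K.sum(axis=1, keepdims=True)          # transition matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]
    # drop the trivial constant eigenvector, scale the rest by their eigenvalues
    return eigvecs[:, 1:n_coords + 1] * eigvals[1:n_coords + 1]

# dataset concentrated near a curve (a stand-in for a manifold-concentrated measure)
rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * rng.normal(size=300)])
coords = diffusion_map_coordinates(X, epsilon=0.5)
```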
ERIC Educational Resources Information Center
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-11-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
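A minimal sketch, in the spirit of the workflow above, of building a susceptibility map from two hypothetical structural datasets: a Gaussian kernel density estimate is evaluated on a grid for each dataset and the PDFs are combined with illustrative weights. The bandwidth here falls back to SciPy's default rule rather than the bandwidth-selection methods offered by QVAST.

```python
import numpy as np
from scipy.stats import gaussian_kde

# hypothetical vent and eruptive-fissure locations (km), one array per dataset
rng = np.random.default_rng(5)
vents = rng.normal(loc=[10, 20], scale=[3, 2], size=(40, 2))
fissures = rng.normal(loc=[12, 18], scale=[5, 4], size=(25, 2))

# evaluation grid for the susceptibility map
xx, yy = np.meshgrid(np.linspace(0, 30, 200), np.linspace(5, 35, 200))
grid = np.vstack([xx.ravel(), yy.ravel()])

# one PDF per structural dataset (default Scott's-rule bandwidth)
pdf_vents = gaussian_kde(vents.T)(grid)
pdf_fissures = gaussian_kde(fissures.T)(grid)

# weighted combination into a total susceptibility map (weights are illustrative)
susceptibility = 0.6 * pdf_vents + 0.4 * pdf_fissures
susceptibility /= susceptibility.sum()            # normalize to a spatial probability
susceptibility = susceptibility.reshape(xx.shape)
```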
Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.
2016-01-01
Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and Chinijo Islands and their submarine flanks based on updated chronostratigraphical and volcano structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could have modified the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures. PMID:27265878
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
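A minimal sketch of the L1-median (spatial median) via Weiszfeld iterations, the robust location estimator mentioned above; the tolerance, iteration count, and test data are arbitrary.

```python
import numpy as np

def l1_median(X, n_iter=100, tol=1e-9):
    """Spatial (L1) median of the rows of X via Weiszfeld iterations:
    the point minimizing the sum of Euclidean distances to all samples."""
    m = X.mean(axis=0)                       # start from the ordinary mean
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.where(d < tol, tol, d)        # avoid division by zero at data points
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

# the L1-median is barely moved by a gross outlier, unlike the sample mean
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(size=(100, 2)), [[50.0, 50.0]]])
print("mean:", X.mean(axis=0), "L1-median:", l1_median(X))
```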
Rastas, Pasi; Calboli, Federico C. F.; Guo, Baocheng; Shikano, Takahito; Merilä, Juha
2016-01-01
High-density linkage maps are important tools for genome biology and evolutionary genetics by quantifying the extent of recombination, linkage disequilibrium, and chromosomal rearrangements across chromosomes, sexes, and populations. They provide one of the best ways to validate and refine de novo genome assemblies, with the power to identify errors in assemblies increasing with marker density. However, assembly of high-density linkage maps is still challenging due to software limitations. We describe Lep-MAP2, a software for ultradense genome-wide linkage map construction. Lep-MAP2 can handle various family structures and can account for achiasmatic meiosis to gain linkage map accuracy. Simulations show that Lep-MAP2 outperforms other available mapping software both in computational efficiency and accuracy. When applied to two large F2-generation recombinant crosses between two nine-spined stickleback (Pungitius pungitius) populations, it produced two high-density (∼6 markers/cM) linkage maps containing 18,691 and 20,054 single nucleotide polymorphisms. The two maps showed a high degree of synteny, but female maps were 1.5–2 times longer than male maps in all linkage groups, suggesting genome-wide recombination suppression in males. Comparison with the genome sequence of the three-spined stickleback (Gasterosteus aculeatus) revealed a high degree of interspecific synteny with a low frequency (<5%) of interchromosomal rearrangements. However, a fairly large (ca. 10 Mb) translocation from autosome to sex chromosome was detected in both maps. These results illustrate the utility and novel features of Lep-MAP2 in assembling high-density linkage maps, and their usefulness in revealing evolutionarily interesting properties of genomes, such as strong genome-wide sex bias in recombination rates. PMID:26668116
Inference about density and temporary emigration in unmarked populations
Chandler, Richard B.; Royle, J. Andrew; King, David I.
2011-01-01
Few species are distributed uniformly in space, and populations of mobile organisms are rarely closed with respect to movement, yet many models of density rely upon these assumptions. We present a hierarchical model allowing inference about the density of unmarked populations subject to temporary emigration and imperfect detection. The model can be fit to data collected using a variety of standard survey methods such as repeated point counts in which removal sampling, double-observer sampling, or distance sampling is used during each count. Simulation studies demonstrated that parameter estimators are unbiased when temporary emigration is either "completely random" or is determined by the size and location of home ranges relative to survey points. We also applied the model to repeated removal sampling data collected on Chestnut-sided Warblers (Dendroica pensylvanica) in the White Mountain National Forest, USA. The density estimate from our model, 1.09 birds/ha, was similar to an estimate of 1.11 birds/ha produced by an intensive spot-mapping effort. Our model is also applicable when processes other than temporary emigration affect the probability of being available for detection, such as in studies using cue counts. Functions to implement the model have been added to the R package unmarked.
Molecular surface mesh generation by filtering electron density map.
Giard, Joachim; Macq, Benoît
2010-01-01
Bioinformatics applied to macromolecules is now widespread and in continuous expansion. In this context, representing external molecular surfaces such as the Van der Waals Surface or the Solvent Excluded Surface can be useful for several applications. We propose a fast and parameterizable algorithm giving good visual quality meshes representing molecular surfaces. It is obtained by isosurfacing a filtered electron density map. The density map is the result of the maximum of Gaussian functions placed around atom centers. This map is filtered by an ideal low-pass filter applied on the Fourier Transform of the density map. Applying the marching cubes algorithm on the inverse transform provides a mesh representation of the molecular surface.
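A minimal sketch of the pipeline described above on a tiny synthetic grid: a density map built as the maximum of Gaussians at hypothetical atom positions, an ideal low-pass filter applied in the Fourier domain, and marching cubes to extract the isosurface mesh. The grid size, Gaussian width, cutoff frequency, and isolevel are illustrative choices, not the authors' parameters.

```python
import numpy as np
from skimage.measure import marching_cubes

# hypothetical atom centers (grid units) and Gaussian width
atoms = np.array([[16.0, 16.0, 16.0], [20.0, 16.0, 16.0], [18.0, 19.0, 16.0]])
sigma = 1.5

# electron-density-like map: maximum of Gaussians centered on the atoms
z, y, x = np.mgrid[0:32, 0:32, 0:32].astype(float)
density = np.zeros((32, 32, 32))
for cx, cy, cz in atoms:
    r2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
    density = np.maximum(density, np.exp(-r2 / (2 * sigma ** 2)))

# ideal low-pass filter applied in the Fourier domain
f = np.fft.fftn(density)
kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(32)] * 3, indexing="ij")
f[np.sqrt(kx ** 2 + ky ** 2 + kz ** 2) > 0.15] = 0.0   # cutoff frequency (illustrative)
smoothed = np.real(np.fft.ifftn(f))

# isosurface extraction gives the mesh representing the molecular surface
verts, faces, normals, values = marching_cubes(smoothed, level=0.5 * smoothed.max())
```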
Gettings, Mark E.; Bultman, Mark W.
2005-01-01
Some aquifers of the southwestern Colorado Plateaus Province are deeply buried and overlain by several impermeable shale layers, and so recharge to the aquifer probably is mainly by seepage down penetrative-fracture systems. The purpose of this 2-year study, sponsored by the U.S. National Park Service, was to map candidate deep penetrative fractures over a 120,000-km2 area, using gravity and aeromagnetic-anomaly data together with surficial-fracture data. The study area was on the Colorado Plateau south of the Grand Canyon and west of Black Mesa; mapping was carried out at a scale of 1:250,000. The resulting database constitutes a spatially registered estimate of deep-fracture locations. Candidate penetrative fractures were located by spatial correlation of horizontal- gradient and analytic-signal maximums of gravity and magnetic anomalies with major surficial lineaments obtained from geologic, topographic, side-looking-airborne-radar, and satellite imagery. The maps define a subset of candidate penetrative fractures because of limitations in the data coverage and the analytical technique. In particular, the data and analytical technique used cannot predict whether the fractures are open or closed. Correlations were carried out by using image-processing software, such that every pixel on the resulting images was coded to uniquely identify which datasets are correlated. The technique correctly identified known and many new deep fracture systems. The resulting penetrative-fracture-distribution maps constitute an objectively obtained, repeatable dataset and a benchmark from which additional studies can begin. The maps also define in detail the tectonic fabrics of the southwestern Colorado Plateaus Province. Overlaying the correlated lineaments on the normalized-density-of-vegetation-index image reveals that many of these lineaments correlate with the boundaries of vegetation zones in drainages and canyons and so may be controlling near-surface water availability in some places. Many derivative products can be produced from the database, such as fracture-density-estimate maps, and maps with the number of correlations color-coded to estimate the possible quality of correlation. The database contained in this report is designed to be used in a geographic information system and image-processing systems, and most data layers are in georeferenced tagged image format (Geotiff) or ARC grids. The report includes 163 map plates and various metadata, supporting, and statistical diagram files.
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
Transport, diffusion, and energy studies in the Arnold-Beltrami-Childress map
NASA Astrophysics Data System (ADS)
Das, Swetamber; Gupte, Neelima
2017-09-01
We study the transport and diffusion properties of passive inertial particles described by a six-dimensional dissipative bailout embedding map. The base map chosen for the study is the three-dimensional incompressible Arnold-Beltrami-Childress (ABC) map, taken as a representation of volume-preserving flows. There are two distinct cases, the two-action and the one-action case, depending on whether two or one of the parameters (A, B, C) exceed 1. The embedded map dynamics is governed by two parameters (α, γ), which quantify the mass density ratio and dissipation, respectively. There are important differences between the aerosol (α < 1) and the bubble (α > 1) regimes. We have studied the diffusive behavior of the system and constructed the phase diagram in the parameter space by computing the diffusion exponents η. Three classes are broadly distinguished: subdiffusive transport (η < 1), normal diffusion (η ≈ 1), and superdiffusion (η > 1), with η ≈ 2 referred to as the ballistic regime. Correlating the diffusive phase diagram with the phase diagram for dynamical regimes seen earlier, we find that the hyperchaotic bubble regime is largely correlated with normal and superdiffusive behavior. In contrast, in the aerosol regime, ballistic superdiffusion is seen in regions that largely show periodic dynamical behaviors, whereas subdiffusive behavior is seen in both periodic and chaotic regimes. The probability distributions of the diffusion exponents show power-law scaling for both aerosols and bubbles in the superdiffusive regimes. We further study the Poincaré recurrence time statistics of the system. Here, we find that recurrence time distributions show power-law regimes due to the existence of partial barriers to transport in the phase space. Moreover, the plot of average particle kinetic energies versus the mass density ratio for the two-action case exhibits a devil's staircase-like structure for higher dissipation values. We explain these results and discuss their implications for realistic systems.
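A minimal sketch, assuming one common form of the volume-preserving ABC base map (not the full six-dimensional bailout embedding studied in the paper): trajectories are iterated without wrapping and the diffusion exponent η is estimated from the growth of the mean squared displacement. Parameter values are illustrative.

```python
import numpy as np

def abc_map_trajectory(n_steps, A=0.1, B=0.1, C=1.5, x0=None, rng=None):
    """Iterate one common form of the volume-preserving ABC map, returning the
    unwrapped trajectory so that transport can be measured."""
    if rng is None:
        rng = np.random.default_rng()
    x, y, z = x0 if x0 is not None else rng.uniform(0, 2 * np.pi, 3)
    traj = np.empty((n_steps, 3))
    for n in range(n_steps):
        x = x + A * np.sin(z) + C * np.cos(y)
        y = y + B * np.sin(x) + A * np.cos(z)
        z = z + C * np.sin(y) + B * np.cos(x)
        traj[n] = x, y, z
    return traj

# diffusion exponent eta from the growth of the mean squared displacement
rng = np.random.default_rng(7)
trajs = np.array([abc_map_trajectory(2000, rng=rng) for _ in range(50)])
disp2 = ((trajs - trajs[:, :1, :]) ** 2).sum(axis=2).mean(axis=0)   # MSD(n)
n = np.arange(1, 2001)
eta = np.polyfit(np.log(n[10:]), np.log(disp2[10:] + 1e-12), 1)[0]
print(f"estimated diffusion exponent eta = {eta:.2f}")
```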
KinSNP software for homozygosity mapping of disease genes using SNP microarrays.
Amir, El-Ad David; Bartal, Ofer; Morad, Efrat; Nagar, Tal; Sheynin, Jony; Parvari, Ruti; Chalifa-Caspi, Vered
2010-08-01
Consanguineous families affected with a recessive genetic disease caused by homozygotisation of a mutation offer a unique advantage for positional cloning of rare diseases. Homozygosity mapping of patient genotypes is a powerful technique for the identification of the genomic locus harbouring the causing mutation. This strategy relies on the observation that in these patients a large region spanning the disease locus is also homozygous with high probability. The high marker density in single nucleotide polymorphism (SNP) arrays is extremely advantageous for homozygosity mapping. We present KinSNP, a user-friendly software tool for homozygosity mapping using SNP arrays. The software searches for stretches of SNPs which are homozygous to the same allele in all ascertained sick individuals. User-specified parameters control the number of allowed genotyping 'errors' within homozygous blocks. Candidate disease regions are then reported in a detailed, coloured Excel file, along with genotypes of family members and healthy controls. An interactive genome browser has been included which shows homozygous blocks, individual genotypes, genes and further annotations along the chromosomes, with zooming and scrolling capabilities. The software has been used to identify the location of a mutated gene causing insensitivity to pain in a large Bedouin family. KinSNP is freely available from http://bioinfo.bgu.ac.il/bsu/software/kinSNP.
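A minimal sketch of the core search described above, assuming genotypes coded as allele counts (0, 1, 2): it scans for runs of SNPs at which all affected individuals share the same homozygous genotype, tolerating a user-specified number of genotyping 'errors' per block. The coding, thresholds, and data are illustrative, not KinSNP's actual implementation.

```python
import numpy as np

def homozygous_blocks(genotypes, max_errors=1, min_length=50):
    """Find runs of consecutive SNPs at which every affected individual is
    homozygous for the same allele, allowing a few genotyping 'errors' per block.
    genotypes: (n_individuals x n_snps) array coded 0, 1, 2 (allele counts)."""
    n_snps = genotypes.shape[1]
    # a SNP qualifies if all individuals share the same homozygous genotype (0 or 2)
    shared = np.all(genotypes == genotypes[0], axis=0) & np.isin(genotypes[0], [0, 2])
    blocks, start = [], 0
    while start < n_snps:
        if not shared[start]:
            start += 1
            continue
        end, errors = start, 0
        while end < n_snps:
            if not shared[end]:
                errors += 1
                if errors > max_errors:
                    break
            end += 1
        if end - start >= min_length:
            blocks.append((start, end))
        start = end + 1
    return blocks

rng = np.random.default_rng(8)
g = rng.integers(0, 3, size=(4, 1000))      # 4 affected individuals, 1000 SNPs
g[:, 300:420] = 2                           # shared homozygous region
print(homozygous_blocks(g, max_errors=2, min_length=50))
```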
Isostatic Gravity Map with Geology of the Santa Ana 30' x 60' Quadrangle, Southern California
Langenheim, V.E.; Lee, Tien-Chang; Biehler, Shawn; Jachens, R.C.; Morton, D.M.
2006-01-01
This report presents an updated isostatic gravity map, with an accompanying discussion of the geologic significance of gravity anomalies in the Santa Ana 30 by 60 minute quadrangle, southern California. Comparison and analysis of the gravity field with mapped geology indicates the configuration of structures bounding the Los Angeles Basin, geometry of basins developed within the Elsinore and San Jacinto Fault zones, and a probable Pliocene drainage network carved into the bedrock of the Perris block. Total cumulative horizontal displacement on the Elsinore Fault derived from analysis of the length of strike-slip basins within the fault zone is about 5-12 km and is consistent with previously published estimates derived from other sources of information. This report also presents a map of density variations within pre-Cenozoic metamorphic and igneous basement rocks. Analysis of basement gravity patterns across the Elsinore Fault zone suggests 6-10 km of right-lateral displacement. A high-amplitude basement gravity high is present over the San Joaquin Hills and is most likely caused by Peninsular Ranges gabbro and/or Tertiary mafic intrusion. A major basement gravity gradient coincides with the San Jacinto Fault zone and marked magnetic, seismic-velocity, and isotopic gradients that reflect a discontinuity within the Peninsular Ranges batholith in the northeast corner of the quadrangle.
Li, Gang; Hillier, LaDeana W; Grahn, Robert A; Zimin, Aleksey V; David, Victor A; Menotti-Raymond, Marilyn; Middleton, Rondo; Hannah, Steven; Hendrickson, Sher; Makunin, Alex; O'Brien, Stephen J; Minx, Pat; Wilson, Richard K; Lyons, Leslie A; Warren, Wesley C; Murphy, William J
2016-06-01
High-resolution genetic and physical maps are invaluable tools for building accurate genome assemblies, and interpreting results of genome-wide association studies (GWAS). Previous genetic and physical maps anchored good quality draft assemblies of the domestic cat genome, enabling the discovery of numerous genes underlying hereditary disease and phenotypes of interest to the biomedical science and breeding communities. However, these maps lacked sufficient marker density to order thousands of shorter scaffolds in earlier assemblies, which instead relied heavily on comparative mapping with related species. A high-resolution map would aid in validating and ordering chromosome scaffolds from existing and new genome assemblies. Here, we describe a high-resolution genetic linkage map of the domestic cat genome based on genotyping 453 domestic cats from several multi-generational pedigrees on the Illumina 63K SNP array. The final maps include 58,055 SNP markers placed relative to 6637 markers with unique positions, distributed across all autosomes and the X chromosome. Our final sex-averaged maps span a total autosomal length of 4464 cM, the longest described linkage map for any mammal, confirming length estimates from a previous microsatellite-based map. The linkage map was used to order and orient the scaffolds from a substantially more contiguous domestic cat genome assembly (Felis catus v8.0), which incorporated ∼20 × coverage of Illumina fragment reads. The new genome assembly shows substantial improvements in contiguity, with a nearly fourfold increase in N50 scaffold size to 18 Mb. We use this map to report probable structural errors in previous maps and assemblies, and to describe features of the recombination landscape, including a massive (∼50 Mb) recombination desert (of virtually zero recombination) on the X chromosome that parallels a similar desert on the porcine X chromosome in both size and physical location. Copyright © 2016 Li et al.
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.
Tsai, Hsin Y; Robledo, Diego; Lowe, Natalie R; Bekaert, Michael; Taggart, John B; Bron, James E; Houston, Ross D
2016-07-07
High density linkage maps are useful tools for fine-scale mapping of quantitative trait loci, and characterization of the recombination landscape of a species' genome. Genomic resources for Atlantic salmon (Salmo salar) include a well-assembled reference genome, and high density single nucleotide polymorphism (SNP) arrays. Our aim was to create a high density linkage map, and to align it with the reference genome assembly. Over 96,000 SNPs were mapped and ordered on the 29 salmon linkage groups using a pedigreed population comprising 622 fish from 60 nuclear families, all genotyped with the 'ssalar01' high density SNP array. The number of SNPs per group showed a high positive correlation with physical chromosome length (r = 0.95). While the order of markers on the genetic and physical maps was generally consistent, areas of discrepancy were identified. Approximately 6.5% of the previously unmapped reference genome sequence was assigned to chromosomes using the linkage map. Male recombination rate was lower than females across the vast majority of the genome, but with a notable peak in subtelomeric regions. Finally, using RNA-Seq data to annotate the reference genome, the mapped SNPs were categorized according to their predicted function, including annotation of ∼2500 putative nonsynonymous variants. The highest density SNP linkage map for any salmonid species has been created, annotated, and integrated with the Atlantic salmon reference genome assembly. This map highlights the marked heterochiasmy of salmon, and provides a useful resource for salmonid genetics and genomics research. Copyright © 2016 Tsai et al.
Storkel, Holly L.; Lee, Jaehoon; Cox, Casey
2016-01-01
Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276
Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey
2016-11-01
Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
NASA Astrophysics Data System (ADS)
Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.
2017-07-01
This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, within the depth interval between the Top Sihapas and Top Pematang. The method used is a stochastic inversion integrated with seismic multi-attribute analysis based on a probabilistic neural network (PNN). The stochastic inversion is used to predict the probability of sandstone by mapping the inverted impedance across 50 realizations, which yields a well-constrained probability estimate. Stochastic seismic inversion is more interpretive because it directly gives the value of the rock property. Our experiment shows that the impedance from stochastic inversion captures a wider range of uncertainty, so that the probability values are close to the actual values. The resulting impedance is then used as an input to the multi-attribute analysis, which predicts the gamma-ray, density and porosity logs. A stepwise regression algorithm is applied to select the number of attributes used, and the selected attributes are fed to the PNN. The PNN method is chosen because it yields the best correlation among the neural network methods considered. Finally, we interpret the products of the multi-attribute analysis, in the form of pseudo-gamma-ray, density and pseudo-porosity volumes, to delineate the reservoir distribution. Our interpretation shows that a structural trap is identified in the southeastern part of the study area, along the anticline.
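A minimal sketch of the attribute-based log prediction step, treating the PNN as a Gaussian-kernel weighted regression from seismic attributes to a target log (a common formulation for this kind of prediction, assumed here rather than taken from the paper); the attributes, well data, and kernel width are hypothetical.

```python
import numpy as np

def pnn_predict(train_attrs, train_log, test_attrs, sigma=1.0):
    """Probabilistic-neural-network style prediction of a log value from seismic
    attributes: a Gaussian-kernel weighted average of the training log values."""
    d2 = ((test_attrs[:, None, :] - train_attrs[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w * train_log[None, :]).sum(axis=1) / w.sum(axis=1)

# hypothetical training samples: 3 seismic attributes at wells vs. measured porosity
rng = np.random.default_rng(9)
train_attrs = rng.normal(size=(60, 3))
train_porosity = 0.2 + 0.05 * train_attrs[:, 0] - 0.03 * train_attrs[:, 2]

# predict a pseudo-porosity value at new attribute locations
test_attrs = rng.normal(size=(10, 3))
pseudo_porosity = pnn_predict(train_attrs, train_porosity, test_attrs, sigma=0.8)
```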
Shetty, Priya B; Tang, Hua; Feng, Tao; Tayo, Bamidele; Morrison, Alanna C; Kardia, Sharon L R; Hanis, Craig L; Arnett, Donna K; Hunt, Steven C; Boerwinkle, Eric; Rao, Dabeeru C; Cooper, Richard S; Risch, Neil; Zhu, Xiaofeng
2015-02-01
Admixture mapping of lipids was followed up by family-based association analysis to identify variants for cardiovascular disease in African Americans. The present study conducted admixture mapping analysis for total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, and triglycerides. The analysis was performed in 1905 unrelated African American subjects from the National Heart, Lung and Blood Institute's Family Blood Pressure Program (FBPP). Regions showing admixture evidence were followed up with family-based association analysis in 3556 African American subjects from the FBPP. The admixture mapping and family-based association analyses were adjusted for age, age², sex, body mass index, and genome-wide mean ancestry to minimize the confounding caused by population stratification. Regions that were suggestive of local ancestry association evidence were found on chromosomes 7 (low-density lipoprotein cholesterol), 8 (high-density lipoprotein cholesterol), 14 (triglycerides), and 19 (total cholesterol and triglycerides). In the fine-mapping analysis, 52,939 single-nucleotide polymorphisms (SNPs) were tested and 11 SNPs (8 independent SNPs) showed nominally significant association with high-density lipoprotein cholesterol (2 SNPs), low-density lipoprotein cholesterol (4 SNPs), and triglycerides (5 SNPs). The family data were used in the fine-mapping to identify SNPs that showed novel associations with lipids and regions, including genes with known associations for cardiovascular disease. This study identified regions on chromosomes 7, 8, 14, and 19 and 11 SNPs from the fine-mapping analysis that were associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, and triglycerides for further studies of cardiovascular disease in African Americans. © 2014 American Heart Association, Inc.
NASA Technical Reports Server (NTRS)
Lambert, Winifred; Short, David; Wolkmer, Matthew; Sharp, David; Spratt, Scott
2006-01-01
Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in East Central Florida, especially during the warm season months of May-September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Currently, the forecasters create each map manually, starting with a blank map. To improve efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent is to improve consistency between forecasters while allowing them to focus on the mesoscale detail of the forecast, ultimately benefiting the end-users of the product. Several studies took place at the Florida State University (FSU) and NWS Tallahassee (TAE) in which they created daily flow regimes using Florida 1200 UTC synoptic soundings and CG strike densities, or number of strikes per specified area. The soundings used to determine the flow regimes were taken at Miami (MIA), Tampa (TBW), and Jacksonville (JAX), FL, and the lightning data for the strike densities came from the National Lightning Detection Network (NLDN). The densities were created on a 2.5 km x 2.5 km grid for every hour of every day during the warm seasons in the years 1989-2004. The grids encompass an area that includes the entire state of Florida and adjacent Atlantic and Gulf of Mexico waters. Personnel at FSU and NWS TAE provided this data and supporting software for the work performed by the AMU.
NASA Technical Reports Server (NTRS)
Lambert, Winifred; Short, David; Volkmer, Matthew; Sharp, David; Spratt, Scott
2007-01-01
Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in East Central Florida, especially during the warm season months of May-September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Until recently, the forecasters created each map manually, starting with a blank map. To improve efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent was to improve consistency between forecasters while allowing them to focus on the mesoscale detail of the forecast. Several studies took place at the Florida State University (FSU) and NWS Tallahassee (TAE) in which they created daily flow regimes using Florida 1200 UTC synoptic soundings and CG strike densities, or number of strikes per specified area. The soundings used to determine the flow regimes were taken at Miami (MIA), Tampa (TBW), and Jacksonville (JAX), FL, and the lightning data for the strike densities came from the National Lightning Detection Network (NLDN). The densities were created on a 2.5 km x 2.5 km grid for every hour of every day during the warm seasons in the years 1989-2004. The grids encompass an area that includes the entire state of Florida and adjacent Atlantic and Gulf of Mexico waters. Personnel at FSU and NWS TAE provided this data and supporting software for the work performed by the AMU.
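A minimal sketch of the gridded strike-density climatology described above, assuming hypothetical strike locations: strikes are binned onto a regular grid and normalized by the number of warm seasons. In practice one such grid would be produced per hour and per synoptic flow regime; the domain, grid spacing, and counts here are illustrative.

```python
import numpy as np

def strike_density_grid(lons, lats, lon_edges, lat_edges, n_seasons):
    """Warm-season CG strike density (strikes per grid cell per season) from
    archived strike locations, binned onto a regular latitude/longitude grid."""
    counts, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
    return counts / n_seasons

# hypothetical strike locations over a small Florida-like domain
rng = np.random.default_rng(10)
lons = rng.uniform(-82.0, -80.0, size=50_000)
lats = rng.uniform(27.5, 29.5, size=50_000)
lon_edges = np.arange(-82.0, -79.975, 0.025)   # roughly 2.5 km spacing
lat_edges = np.arange(27.5, 29.525, 0.025)

density = strike_density_grid(lons, lats, lon_edges, lat_edges, n_seasons=16)
# one grid per hour and per flow regime would be produced in the full climatology
```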
The Mass of Saturn's B ring from hidden density waves
NASA Astrophysics Data System (ADS)
Hedman, M. M.; Nicholson, P. D.
2015-12-01
The B ring is Saturn's brightest and most opaque ring, but many of its fundamental parameters, including its total mass, are not well constrained. Elsewhere in the rings, the best mass density estimates come from spiral waves driven by mean-motion resonances with Saturn's various moons, but such waves have been hard to find in the B ring. We have developed a new wavelet-based technique for combining data from multiple stellar occultations that allows us to isolate the density wave signals from other ring structures. This method has been applied to five density waves using 17 occultations of the star gamma Crucis observed by the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft. Two of these waves (generated by the Janus 2:1 and Mimas 5:2 Inner Lindblad Resonances) are visible in individual occultation profiles, but the other three wave signatures (associated with the Janus 3:2, Enceladus 3:1 and Pandora 3:2 Inner Lindblad Resonances) are not visible in individual profiles and can only be detected in the combined dataset. Estimates of the ring's surface mass density derived from these five waves fall between 40 and 140 g/cm^2. Surprisingly, these mass density estimates show no obvious correlation with the ring's optical depth. Furthermore, these data indicate that the total mass of the B ring is probably between one-third and two-thirds the mass of Saturn's moon Mimas.
Landslide susceptibility revealed by LIDAR imagery and historical records, Seattle, Washington
Schulz, W.H.
2007-01-01
Light detection and ranging (LIDAR) data were used to visually map landslides, headscarps, and denuded slopes in Seattle, Washington. Four times more landslides were mapped than by previous efforts that used aerial photographs. The mapped landforms (landslides, headscarps, and denuded slopes) were created by many individual landslides. The spatial distribution of mapped landforms and 1308 historical landslides shows that historical landslide activity has been concentrated on the mapped landforms, and that most of the landslide activity that created the landforms was prehistoric. Thus, the spatial densities of historical landslides on the landforms provide approximations of the landforms' relative susceptibilities to future landsliding. Historical landslide characteristics appear to be closely related to landform type, so relative susceptibilities were determined for landslides with various characteristics. No strong relations were identified between stratigraphy and landslide occurrence; however, landslide characteristics and slope morphology appear to be related to stratigraphic conditions. Human activity is responsible for causing about 80% of historical Seattle landslides. The distribution of mapped landforms and human-caused landslides suggests the probable characteristics of future human-caused landslides on each of the landforms. The distribution of mapped landforms and historical landslides suggests that erosion of slope-toes by surface water has been a necessary condition for causing Seattle landslides. Human activity has largely arrested this erosion, which implies that landslide activity will decrease with time as hillsides naturally stabilize. However, evaluation of glacial-age analogs of areas of recent slope-toe erosion suggests that landslide activity in Seattle will continue for the foreseeable future. © 2006 Elsevier B.V. All rights reserved.
Model-based local density sharpening of cryo-EM maps
Jakobi, Arjen J; Wilmanns, Matthias
2017-01-01
Atomic models based on high-resolution density maps are the ultimate result of the cryo-EM structure determination process. Here, we introduce a general procedure for local sharpening of cryo-EM density maps based on prior knowledge of an atomic reference structure. The procedure optimizes the contrast of cryo-EM densities by amplitude scaling against the radially averaged local falloff estimated from a windowed reference model. By testing the procedure on six cryo-EM structures of TRPV1, β-galactosidase, γ-secretase, the ribosome-EF-Tu complex, the 20S proteasome and RNA polymerase III, we illustrate how local sharpening can increase the interpretability of density maps, particularly in cases of resolution variation, and facilitate model building and atomic model refinement. PMID:29058676
Fraser, Ross M; Allan, James; Simmen, Martin W
2006-12-08
Nucleosome positioning signals embedded within the DNA sequence have the potential to influence the detailed structure of the higher-order chromatin fibre. In two previous studies of long stretches of DNA, encompassing the chicken beta-globin and ovine beta-lactoglobulin genes, respectively, we mapped the relative affinity of every site for the core histone octamer. In both cases a periodic arrangement of the in vitro positioning sites suggests that they might influence the folding of a nucleosome chain into higher-order structure; this hypothesis was borne out in the case of the beta-lactoglobulin gene, where the distribution of the in vitro positioning sites is related to the positions nucleosomes actually occupy in sheep liver cells. Here, we have exploited the in vitro nucleosome positioning datasets to simulate nucleosomal organisation using in silico approaches. We use the high-resolution, quantitative positioning maps to define a one-dimensional positioning energy lattice, which can be populated with a defined number of nucleosomes. Monte Carlo techniques are employed to simulate the behaviour of the model at equilibrium to produce a set of configurations, which provide a probability-based occupancy map. Employing a variety of techniques, we show that the occupancy maps are a sensitive function of the histone octamer density (nucleosome repeat length) and find that a minimal change in this property can produce dramatic localised changes in structure. Although simulations generally give rise to regular periodic nucleosomal arrangements, they often show octamer density-dependent discontinuities, which tend to co-localise with sequences that adopt distinctive chromatin structure in vivo. Furthermore, the overall organisation of simulated chromatin structures is more closely related to the situation in vivo than is the original in vitro positioning data, particularly at a nucleosome density corresponding to the in vivo state. Although our model is simplified, we argue that it provides a unique insight into the influence that DNA sequence can have in determining chromatin structure and could serve as a useful basis for the incorporation of other parameters.
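The equilibrium simulation described above can be sketched in a few lines: nucleosomes with a fixed footprint are moved on a one-dimensional positioning-energy lattice under a Metropolis acceptance rule with hard-core exclusion, and an occupancy map is accumulated after burn-in. This is a generic illustration of the approach, not the authors' code; the lattice length, footprint, nucleosome number, inverse temperature, and random energies are arbitrary.

```python
# Minimal sketch of equilibrium Monte Carlo on a 1D positioning-energy lattice.
# Nucleosomes occupy a fixed footprint and may not overlap; lower site energy = higher affinity.
import numpy as np

def simulate_occupancy(energy, n_nuc=20, footprint=147, beta=1.0, n_sweeps=20000, seed=None):
    rng = np.random.default_rng(seed)
    L = len(energy)
    # Start from evenly spaced, non-overlapping left-edge positions.
    pos = np.linspace(0, L - footprint, n_nuc).astype(int)
    occupancy = np.zeros(L)
    for sweep in range(n_sweeps):
        i = rng.integers(n_nuc)                       # pick a nucleosome to move
        new = rng.integers(0, L - footprint + 1)      # propose a new left edge
        others = np.delete(pos, i)
        if not np.any(np.abs(others - new) < footprint):   # reject overlapping proposals
            dE = energy[new] - energy[pos[i]]
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                pos[i] = new                          # Metropolis acceptance
        if sweep > n_sweeps // 2:                     # accumulate occupancy after burn-in
            for p in pos:
                occupancy[p:p + footprint] += 1
    return occupancy / occupancy.max()

energy = np.random.default_rng(1).normal(size=5000)  # stand-in for an in vitro affinity map
occ = simulate_occupancy(energy, seed=0)
```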
Peng, Wenzhu; Xu, Jian; Zhang, Yan; Feng, Jianxin; Dong, Chuanju; Jiang, Likun; Feng, Jingyan; Chen, Baohua; Gong, Yiwen; Chen, Lin; Xu, Peng
2016-01-01
High density genetic linkage maps are essential for QTL fine mapping, comparative genomics and high quality genome sequence assembly. In this study, we constructed a high-density and high-resolution genetic linkage map with 28,194 SNP markers on 14,146 distinct loci for common carp based on high-throughput genotyping with the carp 250 K single nucleotide polymorphism (SNP) array in a mapping family. The genetic length of the consensus map was 10,595.94 cM with an average locus interval of 0.75 cM and an average marker interval of 0.38 cM. Comparative genomic analysis revealed high level of conserved syntenies between common carp and the closely related model species zebrafish and medaka. The genome scaffolds were anchored to the high-density linkage map, spanning 1,357 Mb of common carp reference genome. QTL mapping and association analysis identified 22 QTLs for growth-related traits and 7 QTLs for sex dimorphism. Candidate genes underlying growth-related traits were identified, including important regulators such as KISS2, IGF1, SMTLB, NPFFR1 and CPE. Candidate genes associated with sex dimorphism were also identified including 3KSR and DMRT2b. The high-density and high-resolution genetic linkage map provides an important tool for QTL fine mapping and positional cloning of economically important traits, and improving common carp genome assembly. PMID:27225429
2013-09-01
... partner agencies and nations, detects, tracks, and interdicts illegal drug-trafficking in this region. In this thesis, we develop a probability model based on intelligence inputs to generate a spatial-temporal heat map specifying the ... complement and vet such complicated simulation by developing more analytically tractable models. We develop probability models to generate a heat map ...
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
Intergalactic Hydrogen Clouds at Low Redshift: Connections to Voids and Dwarf Galaxies
NASA Technical Reports Server (NTRS)
Shull, J. Michael; Stocke, John T.; Penton, Steve
1996-01-01
We provide new post-COSTAR data on one sightline (Mrk 421) and updated data from another (I Zw 1) from our Hubble Space Telescope (HST) survey of intergalactic Lyα clouds located along sightlines to four bright quasars passing through well-mapped galaxy voids (16000 km/s pathlength) and superclusters (18000 km/s). We report two more definite detections of low-redshift Lyα clouds in voids: one at 3047 km/s (heliocentric) toward Mrk 421 and a second just beyond the Local Supercluster at 2861 km/s toward I Zw 1, confirming our earlier discovery of Lyα absorption clouds in voids (Stocke et al., ApJ, 451, 24). We have now identified ten definite and one probable low-redshift neutral hydrogen absorption clouds toward four targets, a frequency of approximately one absorber every 3400 km/s above a column density of 10^12.7 cm^-2. Of these ten absorption systems, three lie within voids; the probable absorber also lies in a void. Thus, the tendency of Lyα absorbers to 'avoid the voids' is not as clear as we found previously. If the Lyα clouds are approximated as homogeneous spheres of 100 kpc radius, their masses are approximately 10^9 solar masses (about 0.01 times that of bright L* galaxies) and they are 40 times more numerous, comparable to the density of dwarf galaxies and of low-mass halos in numerical CDM simulations. The Lyα clouds contribute a fraction Ω_cl ≈ 0.003/h_75 to the closure density of the universe, comparable to that of luminous matter. These clouds probably require a substantial amount of nonbaryonic dark matter for gravitational binding. They may represent extended haloes of low-mass protogalaxies which have not experienced significant star formation or low-mass dwarf galaxies whose star formation ceased long ago, but blew out significant gaseous material.
High-precision simulation of the height distribution for the KPZ equation
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Le Doussal, Pierre; Majumdar, Satya N.; Rosso, Alberto; Schehr, Gregory
2018-03-01
The one-point distribution of the height for the continuum Kardar-Parisi-Zhang (KPZ) equation is determined numerically using the mapping to the directed polymer in a random potential at high temperature. Using an importance sampling approach, the distribution is obtained over a large range of values, down to a probability density as small as 10^-1000 in the tails. Both short and long times are investigated and compared with recent analytical predictions for the large-deviation forms of the probability of rare fluctuations. At short times the agreement with the analytical expression is spectacular. We observe that the far left and right tails, with exponents 5/2 and 3/2, respectively, are preserved also in the region of long times. We present some evidence for the predicted non-trivial crossover in the left tail from the 5/2 tail exponent to the cubic tail of the Tracy-Widom distribution, although the details of the full scaling form remain beyond reach.
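The key computational ingredient, sampling extremely rare values, can be illustrated with a toy analogue: far-tail probabilities of a standard normal are estimated by drawing from a proposal shifted into the tail and reweighting by the likelihood ratio. This is only a sketch of the importance-sampling idea under simple assumptions, not the directed-polymer implementation used in the paper.

```python
# Toy importance sampling for a far-tail probability P(X > a), X ~ N(0, 1):
# sample from a proposal centred at a and reweight by p(x)/q(x).
import numpy as np
from scipy.stats import norm

def tail_probability(a, n=100_000, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=a, scale=1.0, size=n)      # proposal concentrated in the tail
    weights = norm.pdf(x) / norm.pdf(x, loc=a)    # likelihood ratio p(x)/q(x)
    return np.mean(weights * (x > a))             # unbiased estimate of P(X > a)

a = 8.0
print(tail_probability(a, seed=0), norm.sf(a))    # both ~6.2e-16, far below direct-MC reach
```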
Bouguer Images of the North American Craton
NASA Technical Reports Server (NTRS)
Arvidson, R. E.; Bindschadler, D.; Bowring, S.; Eddy, M.; Guinness, E.; Leff, C.
1985-01-01
Processing of existing gravity and aeromagnetic data with modern methods is providing new insights into crustal and mantle structures for large parts of the United States and Canada. More than three-quarters of a million ground station readings of gravity are now available for this region. These data offer a wealth of information on crustal and mantle structures when reduced and displayed as Bouguer anomalies, where lateral variations are controlled by the size, shape and densities of underlying materials. Digital image processing techniques were used to generate Bouguer images that display more of the granularity inherent in the data as compared with existing contour maps. A dominant NW-SE linear trend of highs and lows can be seen extending from South Dakota, through Nebraska, and into Missouri. This trend is probably related to features created during an early and perhaps initial episode of crustal assembly by collisional processes. The younger granitic materials are probably a thin cover over an older crust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagor, Anna; Pawlowski, Antoni; Pietraszko, Adam
2009-03-15
Single crystals of proustite Ag₃AsS₃ have been characterised by impedance spectroscopy and single-crystal X-ray diffraction in the temperature ranges of 295-543 and 295-695 K, respectively. An analysis of the one-particle potential of silver atoms shows that in the whole measured temperature range defects in the silver substructure play a major role in the conduction mechanism. Furthermore, the silver transfer is equally probable within silver chains and spirals, as well as between chains and spirals. The trigonal R3c room-temperature phase does not change until the decomposition of the crystal. The electric anomaly of first-order character which appears near 502 K is related to an increase in the electronic component of the total conductivity resulting from Ag₂S deposition at the sample surface. - Joint probability density function map of silver atoms at T=695 K.
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine-printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected-component-based methods ([1] and [2], for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
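A toy version of the first stage is sketched below: the binarized page is smoothed with an elongated Gaussian kernel so that each pixel receives a value proportional to the local density of ink, giving a crude text-line probability map. The anisotropic smoothing scale is an assumption, and the level-set evolution that actually separates neighbouring lines is not reproduced here.

```python
# Toy text-line probability map: smooth the ink density of a binarized image.
import numpy as np
from scipy.ndimage import gaussian_filter

def text_line_probability_map(binary_img, sigma=(3.0, 15.0)):
    """binary_img: 1 = ink, 0 = background. Returns values scaled to [0, 1]."""
    density = gaussian_filter(binary_img.astype(float), sigma=sigma)  # elongated kernel
    return density / density.max() if density.max() > 0 else density

img = np.zeros((100, 300))
img[20:25, 10:290] = 1     # two synthetic "text lines"
img[60:65, 10:290] = 1
prob = text_line_probability_map(img)
```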
NASA Astrophysics Data System (ADS)
Tachibana, Tomihisa; Tanahashi, Katsuto; Mochizuki, Toshimitsu; Shirasawa, Katsuhiko; Takato, Hidetaka
2018-04-01
Bifacial interdigitated-back-contact (IBC) silicon solar cells with a high bifaciality of 0.91 were fabricated. Screen printing and firing technology were used to reduce the production cost. For the first time, the relationship between the rear side structure and carrier collection probability was evaluated using internal quantum efficiency (IQE) mapping. The measurement results showed that the screen-printed electrode and back surface field (BSF) area led to low IQE. The low carrier collection probability by BSF area can be explained by electrical shading effects. Thus, it is clear that the IQE mapping system is useful to evaluate the IBC cell.
Park, Hae-Jeong; Kwon, Jun Soo; Youn, Tak; Pae, Ji Soo; Kim, Jae-Jin; Kim, Myung-Sun; Ha, Kyoo-Seob
2002-11-01
We describe a method for the statistical parametric mapping of low resolution electromagnetic tomography (LORETA) using high-density electroencephalography (EEG) and individual magnetic resonance images (MRI) to investigate the characteristics of the mismatch negativity (MMN) generators in schizophrenia. LORETA, using a realistic head model of the boundary element method derived from the individual anatomy, estimated the current density maps from the scalp topography of the 128-channel EEG. From the current density maps that covered the whole cortical gray matter (up to 20,000 points), volumetric current density images were reconstructed. Intensity normalization of the smoothed current density images was used to reduce the confounding effect of subject-specific global activity. After transforming each image into a standard stereotaxic space, we carried out statistical parametric mapping of the normalized current density images. We applied this method to the source localization of MMN in schizophrenia. The MMN generators, produced by a deviant tone of 1,200 Hz (5% of 1,600 trials) under the standard tone of 1,000 Hz, 80 dB binaural stimuli with 300 msec of inter-stimulus interval, were measured in 14 right-handed schizophrenic subjects and 14 age-, gender-, and handedness-matched controls. We found that the schizophrenic group exhibited significant current density reductions of MMN in the left superior temporal gyrus and the left inferior parietal gyrus (P < 0.0005). This study is the first voxel-by-voxel statistical mapping of current density using individual MRI and high-density EEG. Copyright 2002 Wiley-Liss, Inc.
USDA-ARS?s Scientific Manuscript database
High-density linkage maps are vital to supporting the correct placement of scaffolds and gene sequences on chromosomes and fundamental to contemporary organismal research and scientific approaches to genetic improvement; high-density linkage maps are especially important in paleopolyploids with exce...
NASA Technical Reports Server (NTRS)
Wanjek, Christopher
2003-01-01
In June, NASA plans to launch the Microwave Anisotropy Probe (MAP) to survey the ancient radiation in unprecedented detail. MAP will map slight temperature fluctuations within the microwave background that vary by only 0.00001 C across a chilly radiation that now averages 2.73 C above absolute zero. The temperature differences today point back to density differences in the fiery baby universe, in which there was a little more matter here and a little less matter there. Areas of slightly enhanced density had stronger gravity than low-density areas. The high-density areas pulled back on the background radiation, making it appear slightly cooler in those directions.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample-size-invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Casas, Rafael; Linguraru, Marius G.
2016-03-01
Pleural effusion is an abnormal collection of fluid within the pleural cavity. Excessive accumulation of pleural fluid is an important biomarker for various illnesses, including congestive heart failure, pneumonia, metastatic cancer, and pulmonary embolism. Quantification of pleural effusion can be indicative of the progression of disease as well as the effectiveness of any treatment being administered. Quantification, however, is challenging due to unpredictable amounts and density of fluid, complex topology of the pleural cavity, and the similarity in texture and intensity of pleural fluid to the surrounding tissues in computed tomography (CT) scans. Herein, we present an automated method for the segmentation of pleural effusion in CT scans based on spatial context information. The method consists of two stages: first, a probabilistic pleural effusion map is created using multi-atlas segmentation. The probabilistic map assigns a priori probabilities to the presence of pleural fluid at every location in the CT scan. Second, a statistical pattern classification approach is designed to annotate pleural regions using local descriptors based on a priori probabilities, geometrical, and spatial features. Thirty seven CT scans from a diverse patient population containing confirmed cases of minimal to severe amounts of pleural effusion were used to validate the proposed segmentation method. An average Dice coefficient of 0.82685 and Hausdorff distance of 16.2155 mm were obtained.
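The first stage, a voxel-wise prior built from multiple atlases, reduces to a simple average once the atlas effusion masks have been registered to the target scan. The sketch below shows only that averaging step with synthetic masks; the registration itself and the second-stage statistical classifier are not reproduced, and the array shapes are assumptions.

```python
# Sketch of a multi-atlas prior: average co-registered binary effusion masks voxel-wise.
import numpy as np

def atlas_prior(registered_masks):
    """registered_masks: (n_atlases, z, y, x) binary arrays already in the target space."""
    return registered_masks.mean(axis=0)      # a priori probability of pleural fluid per voxel

masks = (np.random.default_rng(0).random((10, 4, 64, 64)) > 0.7).astype(float)  # synthetic
prior = atlas_prior(masks)                     # values in [0, 1]
```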
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
Cohen, Justin M; Dlamini, Sabelo; Novotny, Joseph M; Kandula, Deepika; Kunene, Simon; Tatem, Andrew J
2013-02-11
As successful malaria control programmes move towards elimination, they must identify residual transmission foci, target vector control to high-risk areas, focus on both asymptomatic and symptomatic infections, and manage importation risk. High spatial and temporal resolution maps of malaria risk can support all of these activities, but commonly available malaria maps are based on parasite rate, a poor metric for measuring malaria at extremely low prevalence. New approaches are required to provide case-based risk maps to countries seeking to identify remaining hotspots of transmission while managing the risk of transmission from imported cases. Household locations and travel histories of confirmed malaria patients during 2011 were recorded through routine surveillance by the Swaziland National Malaria Control Programme for the higher transmission months of January to April and the lower transmission months of May to December. Household locations for patients with no travel history to endemic areas were compared against a random set of background points sampled proportionate to population density with respect to a set of variables related to environment, population density, vector control, and distance to the locations of identified imported cases. Comparisons were made separately for the high and low transmission seasons. The Random Forests regression tree classification approach was used to generate maps predicting the probability of a locally acquired case at 100 m resolution across Swaziland for each season. Results indicated that case households during the high transmission season tended to be located in areas of lower elevation, closer to bodies of water, in more sparsely populated areas, with lower rainfall and warmer temperatures, and closer to imported cases than random background points (all p < 0.001). Similar differences were evident during the low transmission season. Maps from the fit models suggested better predictive ability during the high season. Both models proved useful at predicting the locations of local cases identified in 2012. The high-resolution mapping approaches described here can help elimination programmes understand the epidemiology of a disappearing disease. Generating case-based risk maps at high spatial and temporal resolution will allow control programmes to direct interventions proactively according to evidence-based measures of risk and ensure that the impact of limited resources is maximized to achieve and maintain malaria elimination.
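A minimal sklearn sketch of the classification step is given below: case households (1) and background points (0) are described by environmental covariates, a random forest is fit, and class probabilities are predicted over a grid to form the risk surface. The covariates, their synthetic values, and the forest settings are assumptions for illustration, not the surveillance data or the exact model used in the study.

```python
# Sketch: case-vs-background random forest producing a probability-of-local-case surface.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([rng.uniform(100, 800, n),    # elevation (m)
                     rng.uniform(0, 10, n),       # distance to water (km)
                     rng.uniform(0, 5000, n)])    # population density
# Synthetic rule: cases favour low elevation and proximity to water.
p_case = 1 / (1 + np.exp(0.01 * (X[:, 0] - 400) + 0.5 * (X[:, 1] - 3)))
y = rng.binomial(1, p_case)                       # 1 = case household, 0 = background point

model = RandomForestClassifier(n_estimators=300, min_samples_leaf=5, random_state=0)
model.fit(X, y)

grid = np.column_stack([np.full(100, 300.0),      # fixed elevation
                        np.linspace(0, 10, 100),  # varying distance to water
                        np.full(100, 1000.0)])
risk = model.predict_proba(grid)[:, 1]            # P(locally acquired case) along the transect
```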
Avram, Alexandru V; Sarlls, Joelle E; Barnett, Alan S; Özarslan, Evren; Thomas, Cibu; Irfanoglu, M Okan; Hutchinson, Elizabeth; Pierpaoli, Carlo; Basser, Peter J
2016-02-15
Diffusion tensor imaging (DTI) is the most widely used method for noninvasively characterizing structural and architectural features of brain tissues. However, the assumption of a Gaussian spin displacement distribution intrinsic to DTI weakens its ability to describe intricate tissue microanatomy. Consequently, the biological interpretation of microstructural parameters, such as fractional anisotropy or mean diffusivity, is often equivocal. We evaluate the clinical feasibility of assessing brain tissue microstructure with mean apparent propagator (MAP) MRI, a powerful analytical framework that efficiently measures the probability density function (PDF) of spin displacements and quantifies useful metrics of this PDF indicative of diffusion in complex microstructure (e.g., restrictions, multiple compartments). Rotation-invariant and scalar parameters computed from the MAP show consistent variation across neuroanatomical brain regions and increased ability to differentiate tissues with distinct structural and architectural features compared with DTI-derived parameters. The return-to-origin probability (RTOP) appears to reflect cellularity and restrictions better than MD, while the non-Gaussianity (NG) measures diffusion heterogeneity by comprehensively quantifying the deviation between the spin displacement PDF and its Gaussian approximation. Both RTOP and NG can be decomposed in the local anatomical frame of reference determined by the orientation of the diffusion tensor and reveal additional information complementary to DTI. The propagator anisotropy (PA) shows high tissue contrast even in deep brain nuclei and cortical gray matter and is more uniform in white matter than the FA, which drops significantly in regions containing crossing fibers. Orientational profiles of the propagator computed analytically from the MAP MRI series coefficients allow separation of different fiber populations in regions of crossing white matter pathways, which in turn improves our ability to perform whole-brain fiber tractography. Reconstructions from subsampled data sets suggest that MAP MRI parameters can be computed from a relatively small number of DWIs acquired with high b-value and good signal-to-noise ratio in clinically achievable scan durations of less than 10 min. The neuroanatomical consistency across healthy subjects and reproducibility in test-retest experiments of MAP MRI microstructural parameters further substantiate the robustness and clinical feasibility of this technique. The MAP MRI metrics could potentially provide more sensitive clinical biomarkers with increased pathophysiological specificity compared to microstructural measures derived using conventional diffusion MRI techniques. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jager, Yetta; Efroymson, Rebecca Ann; Sublette, K.
Quantitative tools are needed to evaluate the ecological effects of increasing petroleum production. In this article, we describe two stochastic models for simulating the spatial distribution of brine spills on a landscape. One model uses general assumptions about the spatial arrangement of spills and their sizes; the second model distributes spills by siting rectangular well complexes and conditioning spill probabilities on the configuration of pipes. We present maps of landscapes with spills produced by the two methods and compare the ability of the models to reproduce a specified spill area. A strength of the models presented here is their ability to extrapolate from the existing landscape to simulate landscapes with a higher (or lower) density of oil wells.
Han, Koeun; Jeong, Hee-Jin; Yang, Hee-Bum; Kang, Sung-Min; Kwon, Jin-Kyung; Kim, Seungill; Choi, Doil; Kang, Byoung-Cheorl
2016-04-01
Most agricultural traits are controlled by quantitative trait loci (QTLs); however, there are few studies on QTL mapping of horticultural traits in pepper (Capsicum spp.) due to the lack of high-density molecular maps and sequence information. In this study, an ultra-high-density map and 120 recombinant inbred lines (RILs) derived from a cross between C. annuum 'Perennial' and C. annuum 'Dempsey' were used for QTL mapping of horticultural traits. Parental lines and RILs were resequenced at 18× and 1× coverage, respectively. Using a sliding window approach, an ultra-high-density bin map containing 2,578 bins was constructed. The total length of the map was 1,372 cM, and the average interval between bins was 0.53 cM. A total of 86 significant QTLs controlling 17 horticultural traits were detected. Among these, 32 QTLs controlling 13 traits were major QTLs. Our research shows that the construction of bin maps using low-coverage sequence is a powerful method for QTL mapping, and that the short intervals between bins are helpful for fine-mapping of QTLs. Furthermore, bin maps can be used to improve the quality of reference genomes by elucidating the genetic order of unordered regions and anchoring unassigned scaffolds to linkage groups. © The Author 2016. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
Covariance and correlation estimation in electron-density maps.
Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna
2012-03-01
Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.
USDA-ARS?s Scientific Manuscript database
Construction of genetic linkage map is essential for genetic and genomic studies. Recent advances in sequencing and genotyping technologies made it possible to generate high-density and high-resolution genetic linkage maps, especially for the organisms lacking extensive genomic resources. In the pre...
Coseismic landslides reveal near-surface rock strength in a high-relief tectonically active setting
Gallen, Sean F.; Clark, Marin K.; Godt, Jonathan W.
2014-01-01
We present quantitative estimates of near-surface rock strength relevant to landscape evolution and landslide hazard assessment for 15 geologic map units of the Longmen Shan, China. Strength estimates are derived from a novel method that inverts earthquake peak ground acceleration models and coseismic landslide inventories to obtain material properties and landslide thickness. Aggregate rock strength is determined by prescribing a friction angle of 30° and solving for effective cohesion. Effective cohesion ranges from 70 kPa to 107 kPa for the 15 geologic map units; these values are approximately an order of magnitude less than typical laboratory measurements, probably because laboratory tests on hand-sized specimens do not incorporate the effects of heterogeneity and fracturing that likely control near-surface strength at the hillslope scale. We find that strength among the geologic map units studied varies by less than a factor of two. However, increased weakening of units with proximity to the range front, where precipitation and active fault density are the greatest, suggests that climatic and tectonic factors overwhelm lithologic differences in rock strength in this high-relief tectonically active setting.
Wu, Zhiwei; He, Hong S; Liu, Zhihua; Liang, Yu
2013-06-01
Fuel load is often used to prioritize stands for fuel reduction treatments. However, wildfire size and intensity are not only related to fuel loads but also to a wide range of other spatially related factors such as topography, weather and human activity. In prioritizing fuel reduction treatments, we propose using burn probability to account for the effects of spatially related factors that can affect wildfire size and intensity. Our burn probability incorporated fuel load, ignition probability, and spread probability (spatial controls on wildfire) at a particular location across a landscape. Our goal was to assess differences in reducing wildfire size and intensity using fuel-load- and burn-probability-based treatment prioritization approaches. Our study was conducted in a boreal forest in northeastern China. We derived a fuel load map from a stand map and a burn probability map based on historical fire records and potential wildfire spread pattern. The burn probability map was validated using historical records of burned patches. We then simulated 100 ignitions and six fuel reduction treatments to compare fire size and intensity under the two approaches to fuel treatment prioritization. We calibrated and validated simulated wildfires against historical wildfire data. Our results showed that fuel reduction treatments based on burn probability were more effective at reducing simulated wildfire size, mean and maximum rate of spread, and mean fire intensity, but less effective at reducing maximum fire intensity across the burned landscape than treatments based on fuel load. Thus, contributions from both fuels and spatially related factors should be considered for each fuel reduction treatment. Published by Elsevier B.V.
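A toy comparison of the two prioritization rules can be written in a few lines: cells are ranked either by fuel load alone or by a burn-probability score that also folds in ignition and spread probabilities, and the top fraction is selected for treatment. The multiplicative combination, the synthetic surfaces, and the 10% treatment fraction are assumptions for illustration, not the authors' formulation.

```python
# Toy comparison of fuel-load-based vs burn-probability-based treatment prioritization.
import numpy as np

rng = np.random.default_rng(0)
shape = (200, 200)
fuel_load = rng.gamma(2.0, 10.0, shape)     # synthetic fuel load (t/ha)
p_ignition = rng.beta(2, 8, shape)          # synthetic ignition likelihood
p_spread = rng.beta(5, 5, shape)            # synthetic spatial controls on fire spread
burn_prob = p_ignition * p_spread * (fuel_load / fuel_load.max())

def top_fraction_mask(score, frac=0.10):
    threshold = np.quantile(score, 1 - frac)
    return score >= threshold               # cells selected for fuel treatment

treat_by_fuel = top_fraction_mask(fuel_load)
treat_by_burn_prob = top_fraction_mask(burn_prob)
overlap = np.mean(treat_by_fuel & treat_by_burn_prob) / 0.10
print(f"agreement between the two priority maps: {overlap:.0%}")
```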
The ratio of N(C18O) and AV in Chamaeleon I and III-B. Using 2MASS and SEST
NASA Astrophysics Data System (ADS)
Kainulainen, J.; Lehtinen, K.; Harju, J.
2006-02-01
We investigate the relationship between the C18O column density and the visual extinction in Chamaeleon I and in a part of the Chamaeleon III molecular cloud. The C18O column densities, N(C18O), are calculated from J = 1-0 rotational line data observed with the SEST telescope. The visual extinctions, A_V, are derived using JHK photometry from the 2MASS survey and the NICER color excess technique. In contrast with the previous results of Hayakawa et al. (2001, PASJ, 53, 1109), we find that the average N(C18O)/A_V ratios are similar in Cha I and Cha III, and lie close to values derived for other clouds, i.e. N(C18O) ≈ 2 × 10^14 cm^-2 (A_V - 2). We find, however, clear deviations from this average relationship towards individual clumps. Larger than average N(C18O)/A_V ratios can be found in clumps associated with the active star forming region in the northern part of Cha I. On the other hand, some regions in the relatively quiescent southern part of Cha I show smaller than average N(C18O)/A_V ratios and also very shallow proportionality between N(C18O) and A_V. The shallow proportionality suggests that C18O is heavily depleted in these regions. As the degree of depletion is proportional to the gas density, these regions probably contain very dense, cold cores, which do not stand out in CO mappings. A comparison with the dust temperature map derived from the ISO data shows that the most prominent of the potentially depleted cores indeed coincides with a dust temperature minimum. It seems therefore feasible to use N(C18O) and A_V data together for identifying cold, dense cores in large-scale mappings.
Mendes, Maria Paula; Ribeiro, Luís
2010-02-01
The Water Framework Directive and its daughter directives recognize the urgent need to adopt specific measures against the contamination of water by individual pollutants or a group of pollutants that present a significant risk to the quality of water. Probability maps showing that nitrate concentrations exceed a legal threshold value at any location of the aquifer are used to assess the risk of groundwater quality degradation from intensive agricultural activity in aquifers. In this paper we use Disjunctive Kriging to map the probability that the Nitrates Directive limit (91/676/EEC) is exceeded for the Nitrate Vulnerable Zone of the River Tagus alluvium aquifer. The Tagus alluvial aquifer system belongs to one of the most productive hydrogeological units of continental Portugal and is used to irrigate crops. Several groundwater monitoring campaigns were carried out from 2004 to 2006 according to the summer crop cycle. The study reveals more areas on the west bank with higher probabilities of contamination by nitrates (nitrate concentration values above 50 mg/L) than on the east bank. The analysis of the synthetic temporal probability map shows the areas where there is an increase in nitrate concentration during the summers. Copyright 2009 Elsevier B.V. All rights reserved.
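The paper relies on Disjunctive Kriging; the much simpler sketch below only illustrates what an exceedance-probability map is: given a kriged nitrate prediction and its kriging standard deviation at each grid node, the probability of exceeding the 50 mg/L limit follows from an assumed Gaussian error model. The values are synthetic and the Gaussian assumption is not part of the paper's method.

```python
# Simplified exceedance-probability map from kriged means and standard deviations.
import numpy as np
from scipy.stats import norm

def exceedance_probability(pred, sd, threshold=50.0):
    return norm.sf(threshold, loc=pred, scale=sd)   # P(nitrate concentration > threshold)

pred = np.array([[30.0, 45.0], [55.0, 70.0]])       # kriged means, mg/L (synthetic)
sd = np.array([[10.0, 12.0], [8.0, 15.0]])          # kriging standard deviations (synthetic)
print(exceedance_probability(pred, sd))
```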
2011-01-01
Background Technological advances are progressively increasing the application of genomics to a wider array of economically and ecologically important species. High-density maps enriched for transcribed genes facilitate the discovery of connections between genes and phenotypes. We report the construction of a high-density linkage map of expressed genes for the heterozygous genome of Eucalyptus using Single Feature Polymorphism (SFP) markers. Results SFP discovery and mapping were achieved using pseudo-testcross screening and selective mapping to simultaneously optimize linkage mapping and microarray costs. SFP genotyping was carried out by hybridizing complementary RNA prepared from the xylem of 4.5-year-old trees to an SFP array containing 103,000 25-mer oligonucleotide probes representing 20,726 unigenes derived from a modest-size expressed sequence tag collection. An SFP-mapping microarray with 43,777 selected candidate SFP probes representing 15,698 genes was subsequently designed and used to genotype SFPs in a larger subset of the segregating population drawn by selective mapping. A total of 1,845 genes were mapped, with 884 of them ordered with high likelihood support on a framework map anchored to 180 microsatellites with average density of 1.2 cM. Using more probes per unigene increased the likelihood of detecting segregating SFPs two-fold, eventually resulting in more genes mapped. In silico validation showed that 87% of the SFPs map to the expected location on the 4.5X draft sequence of the Eucalyptus grandis genome. Conclusions The Eucalyptus 1,845 gene map is the most highly enriched map for transcriptional information for any forest tree species to date. It represents a major improvement on the number of genes previously positioned on Eucalyptus maps and provides an initial glimpse at the gene space for this global tree genome. A general protocol is proposed to build high-density transcript linkage maps in less characterized plant species by SFP genotyping with a concurrent objective of reducing microarray costs. High-density gene-rich maps represent a powerful resource to assist gene discovery endeavors when used in combination with QTL and association mapping and should be especially valuable to assist the assembly of reference genome sequences soon to come for several plant and animal species. PMID:21492453
Fu, Beide; Liu, Haiyang; Yu, Xiaomu; Tong, Jingou
2016-01-01
Growth-related traits in fish are controlled by quantitative trait loci (QTL), but no QTL for growth have been detected in bighead carp (Hypophthalmichthys nobilis) due to the lack of a high-density genetic map. In this study, an ultra-high-density genetic map was constructed with 3,121 SNP markers by sequencing 117 individuals in an F1 family using 2b-RAD technology. The total length of the map was 2341.27 cM, with an average marker interval of 0.75 cM. A high level of genomic synteny between our map and zebrafish was detected. Based on this genetic map, one genome-wide significant and 37 suggestive QTL for five growth-related traits were identified in 6 linkage groups (i.e. LG3, LG11, LG15, LG18, LG19, LG22). The phenotypic variance explained (PVE) by these QTL varied from 15.4% to 38.2%. The marker within the significant QTL region was flanked by CRP1 and CRP2, which play an important role in muscle cell division. This high-density map and the QTL information provide a solid basis for QTL fine mapping and comparative genomics in bighead carp. PMID:27345016
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
... In particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum ... an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for ...
Self-dissimilarity as a High Dimensional Complexity Measure
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Macready, William
2005-01-01
For many systems characterized as "complex" the patterns exhibited on different scales differ markedly from one another. For example, the biomass distribution in a human body "looks very different" depending on the scale at which one examines it. Conversely, the patterns at different scales in "simple" systems (e.g., gases, mountains, crystals) vary little from one scale to another. Accordingly, the degrees of self-dissimilarity between the patterns of a system at various scales constitute a complexity "signature" of that system. Here we present a novel quantification of self-dissimilarity. This signature can, if desired, incorporate a novel information-theoretic measure of the distance between probability distributions that we derive here. Whatever distance measure is chosen, our quantification of self-dissimilarity can be measured for many kinds of real-world data. This allows comparisons of the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. species densities in a rain-forest vs. capital density in an economy, etc.). Moreover, in contrast to many other suggested complexity measures, evaluating the self-dissimilarity of a system does not require one to already have a model of the system. These facts may allow self-dissimilarity signatures to be used as the underlying observational variables of an eventual overarching theory relating all complex systems. To illustrate self-dissimilarity, we present several numerical experiments. In particular, we show that the underlying structure of the logistic map is picked out by the self-dissimilarity signature of time series produced by that map.
Local Structure Theory for Cellular Automata.
NASA Astrophysics Data System (ADS)
Gutowitz, Howard Andrew
The local structure theory (LST) is a generalization of the mean field theory for cellular automata (CA). The mean field theory makes the assumption that iterative application of the rule does not introduce correlations between the states of cells in different positions. This assumption allows the derivation of a simple formula for the limit density of each possible state of a cell. The most striking feature of CA is that they may well generate correlations between the states of cells as they evolve. The LST takes the generation of correlation explicitly into account. It thus has the potential to describe statistical characteristics in detail. The basic assumption of the LST is that though correlation may be generated by CA evolution, this correlation decays with distance. This assumption allows the derivation of formulas for the estimation of the probability of large blocks of states in terms of smaller blocks of states. Given the probabilities of blocks of size n, probabilities may be assigned to blocks of arbitrary size such that these probability assignments satisfy the Kolmogorov consistency conditions and hence may be used to define a measure on the set of all possible (infinite) configurations. Measures defined in this way are called finite (or n-) block measures. A function called the scramble operator of order n maps a measure to an approximating n-block measure. The action of a CA on configurations induces an action on measures on the set of all configurations. The scramble operator is combined with the CA map on measure to form the local structure operator (LSO). The LSO of order n maps the set of n-block measures into itself. It is hypothesised that the LSO applied to n-block measures approximates the rule itself on general measures, and does so increasingly well as n increases. The fundamental advantage of the LSO is that its action is explicitly computable from a finite system of rational recursion equations. Empirical study of a number of CA rules demonstrates the potential of the LST to describe the statistical features of CA. The behavior of some simple rules is derived analytically. Other rules have more complex, chaotic behavior. Even for these rules, the LST yields an accurate portrait of both small and large time statistics.
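The mean field theory that the local structure theory generalizes is easy to state in code: assuming no correlations between neighbouring cells, the density of 1s of a radius-1 binary CA is iterated through its rule table. The sketch below shows only this zeroth-order case; rule 22 is used purely as an example, and the full LST would track probabilities of blocks rather than a single density.

```python
# Mean-field (order-1) density iteration for an elementary cellular automaton.
import numpy as np
from itertools import product

def mean_field_map(p, rule):
    """One mean-field iteration of the density of 1s for a radius-1 binary CA rule."""
    p_next = 0.0
    for a, b, c in product((0, 1), repeat=3):
        out = (rule >> (4 * a + 2 * b + c)) & 1         # Wolfram rule-table lookup
        if out:
            k = a + b + c                                # number of 1s in the neighbourhood
            p_next += p**k * (1 - p)**(3 - k)            # probability of that neighbourhood
    return p_next

p = 0.3
for _ in range(100):
    p = mean_field_map(p, rule=22)
print(p)   # mean-field fixed-point density for rule 22 (~0.42 under this approximation)
```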
Serin, Elise A. R.; Snoek, L. B.; Nijveen, Harm; Willems, Leo A. J.; Jiménez-Gómez, Jose M.; Hilhorst, Henk W. M.; Ligterink, Wilco
2017-01-01
High-density genetic maps are essential for high resolution mapping of quantitative traits. Here, we present a new genetic map for an Arabidopsis Bayreuth × Shahdara recombinant inbred line (RIL) population, built on RNA-seq data. RNA-seq analysis on 160 RILs of this population identified 30,049 single-nucleotide polymorphisms (SNPs) covering the whole genome. Based on a 100-kbp window SNP binning method, 1059 bin-markers were identified, physically anchored on the genome. The total length of the RNA-seq genetic map spans 471.70 centimorgans (cM) with an average marker distance of 0.45 cM and a maximum marker distance of 4.81 cM. This high resolution genotyping revealed new recombination breakpoints in the population. To highlight the advantages of such high-density map, we compared it to two publicly available genetic maps for the same population, comprising 69 PCR-based markers and 497 gene expression markers derived from microarray data, respectively. In this study, we show that SNP markers can effectively be derived from RNA-seq data. The new RNA-seq map closes many existing gaps in marker coverage, saturating the previously available genetic maps. Quantitative trait locus (QTL) analysis for published phenotypes using the available genetic maps showed increased QTL mapping resolution and reduced QTL confidence interval using the RNA-seq map. The new high-density map is a valuable resource that facilitates the identification of candidate genes and map-based cloning approaches. PMID:29259624
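A rough sketch of the 100-kbp binning idea (not the authors' pipeline) is shown below: SNPs on the same chromosome are grouped into 100-kbp windows and each RIL receives one consensus genotype per window, the modal SNP call, yielding bin-markers for map construction. The column names and 0/1 genotype coding are assumptions.

```python
# Sketch: collapse per-SNP genotype calls into 100-kbp bin-markers per RIL.
import pandas as pd

def make_bin_markers(geno, window=100_000):
    """geno: rows = SNPs with columns ['chrom', 'pos', <one column per RIL>]."""
    ril_cols = [c for c in geno.columns if c not in ("chrom", "pos")]
    bins = geno["pos"] // window                         # window index along the chromosome
    grouped = geno.groupby([geno["chrom"], bins])[ril_cols]
    # Consensus call per bin and RIL: the modal genotype among the SNPs in the window.
    return grouped.agg(lambda s: s.mode().iloc[0] if not s.mode().empty else pd.NA)

geno = pd.DataFrame({"chrom": ["1"] * 6,
                     "pos": [10_000, 50_000, 90_000, 120_000, 150_000, 190_000],
                     "RIL_1": [0, 0, 1, 1, 1, 1],
                     "RIL_2": [1, 1, 1, 0, 0, 0]})
print(make_bin_markers(geno))   # two bin-markers per RIL for this toy chromosome
```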
Seaside, Oregon, Tsunami Vulnerability Assessment Pilot Study
NASA Astrophysics Data System (ADS)
Dunbar, P. K.; Dominey-Howes, D.; Varner, J.
2006-12-01
The results of a pilot study to assess the risk from tsunamis for the Seaside-Gearhart, Oregon region will be presented. To determine the risk from tsunamis, it is first necessary to establish the hazard or probability that a tsunami of a particular magnitude will occur within a certain period of time. Tsunami inundation maps that provide 100-year and 500-year probabilistic tsunami wave height contours for the Seaside-Gearhart, Oregon, region were developed as part of an interagency Tsunami Pilot Study(1). These maps provided the probability of the tsunami hazard. The next step in determining risk is to determine the vulnerability or degree of loss resulting from the occurrence of tsunamis due to exposure and fragility. The tsunami vulnerability assessment methodology used in this study was developed by M. Papathoma and others(2). This model incorporates multiple factors (e.g. parameters related to the natural and built environments and socio-demographics) that contribute to tsunami vulnerability. Data provided with FEMA's HAZUS loss estimation software and Clatsop County, Oregon, tax assessment data were used as input to the model. The results, presented within a geographic information system, reveal the percentage of buildings in need of reinforcement and the population density in different inundation depth zones. These results can be used for tsunami mitigation, local planning, and for determining post-tsunami disaster response by emergency services. (1)Tsunami Pilot Study Working Group, Seaside, Oregon Tsunami Pilot Study--Modernization of FEMA Flood Hazard Maps, Joint NOAA/USGS/FEMA Special Report, U.S. National Oceanic and Atmospheric Administration, U.S. Geological Survey, U.S. Federal Emergency Management Agency, 2006, Final Draft. (2)Papathoma, M., D. Dominey-Howes, D.,Y. Zong, D. Smith, Assessing Tsunami Vulnerability, an example from Herakleio, Crete, Natural Hazards and Earth System Sciences, Vol. 3, 2003, p. 377-389.
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2015-04-01
Efficiency of spatial sampling largely determines the success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide a good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which are produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how can these 'representation' problems be quantified, and how can this knowledge be incorporated into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average between a kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (costs of field survey are usually a function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas, e.g. areas for which no previous spatial prediction model exists. The presentation includes data processing demos with the standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.
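The averaging idea behind the inclusion-probability map can be sketched in a few lines. The actual 'spsample.prob' function is implemented in R in the GSIF package; the fragment below is a Python stand-in with made-up array names, shown only to illustrate how a geographic-spread surface and a feature-space-spread surface might be combined.

```python
import numpy as np

def inclusion_probability(kde_surface, maxent_surface):
    """Average a geographic kernel-density surface and a feature-space suitability
    surface (e.g. MaxEnt output), each rescaled to [0, 1]. Both inputs are 2-D
    arrays on the same grid; NaNs can be used to mark cells outside the mask."""
    def rescale(a):
        lo, hi = np.nanmin(a), np.nanmax(a)
        return (a - lo) / (hi - lo)
    return 0.5 * (rescale(kde_surface) + rescale(maxent_surface))

# Toy grids: sampling is dense in the west (KDE) but covers only low covariate values (MaxEnt).
kde = np.array([[0.9, 0.7, 0.1], [0.8, 0.5, 0.1]])
maxent = np.array([[0.2, 0.4, 0.9], [0.3, 0.5, 0.8]])
print(inclusion_probability(kde, maxent))  # low values flag under-represented cells
```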
Evaluating detection probabilities for American marten in the Black Hills, South Dakota
Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.
2007-01-01
Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (≤1 marten/10.2 km2) densities within eight 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
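One practical consequence of the reported per-survey detection probabilities is how quickly the chance of a false absence shrinks with repeated visits. The sketch below is a simple back-of-the-envelope calculation; the number of visits is arbitrary, and only the per-survey probabilities are taken from the estimates quoted above.

```python
# Probability of missing marten entirely in n independent surveys of an occupied quadrat:
# P(miss) = (1 - p)^n, with p the per-survey detection probability.
def prob_missed(p, n_surveys):
    return (1.0 - p) ** n_surveys

for label, p in [("high-density", 0.952), ("low-density", 0.333)]:
    print(label, [round(prob_missed(p, n), 3) for n in (1, 2, 4)])
# high-density [0.048, 0.002, 0.0]
# low-density  [0.667, 0.445, 0.198]
```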
USDA-ARS?s Scientific Manuscript database
Hard red winter wheat parents ‘Harry’ (drought tolerant) and ‘Wesley’ (drought susceptible) were used to develop a recombinant inbred population to identify genomic regions associated with drought and adaptation. To precisely map genomic regions, high-density linkage maps are a prerequisite. In this s...
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each model is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
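The reweighting step described above (propagate one shared importance density, then reweight the same samples under each candidate model) can be sketched as follows. This is a minimal Python illustration, not the authors' implementation: the candidate densities, their model probabilities, the response function, and the use of the candidate mixture as the importance density are all assumptions made for the example.

```python
import numpy as np
from scipy import stats

# Two plausible candidate models for an uncertain input, with illustrative model probabilities.
candidates = [stats.norm(loc=10.0, scale=2.0), stats.lognorm(s=0.2, scale=10.0)]
model_prob = np.array([0.6, 0.4])

# Shared importance density: a mixture of the candidates (one sample set serves all models).
rng = np.random.default_rng(0)
n = 5_000
comp = rng.choice(len(candidates), size=n, p=model_prob)
x = np.array([candidates[c].rvs(random_state=rng) for c in comp])
q = sum(p * d.pdf(x) for p, d in zip(model_prob, candidates))  # mixture density at the samples

def response(u):
    return u ** 2          # placeholder response / limit-state function

# Reweight the same samples under each candidate model; estimates remain model-specific.
for d in candidates:
    w = d.pdf(x) / q       # importance weights for this candidate model
    print(d.dist.name, "E[response] ~", round(float(np.average(response(x), weights=w)), 2))
```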
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramér and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
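For orientation, a commonly used point-process (Poisson) approximation expresses the first excursion probability in terms of the expected threshold-crossing rate; this is a generic textbook form given only for context, not the specific envelope-based expressions derived in the paper:

```latex
P_f(T) \;\approx\; 1 - \exp\!\left(-\int_0^T \nu_a(t)\,\mathrm{d}t\right)
```

where \nu_a(t) is the expected rate of upcrossings of the threshold a at time t; envelope statistics enter such approximations by replacing \nu_a(t) with the crossing rate of the envelope process.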
Wang, Liang-Jen; Lin, Shih-Ku; Chen, Yi-Chih; Huang, Ming-Chyi; Chen, Tzu-Ting; Ree, Shao-Chun; Chen, Chih-Ken
Methamphetamine exerts neurotoxic effects and elicits psychotic symptoms. This study attempted to compare clinical differences between methamphetamine users with persistent psychosis (MAP) and patients with schizophrenia. In addition, we examined the discrimination validity of using symptom clusters to differentiate between MAP and schizophrenia. We enrolled 53 MAP patients and 53 patients with schizophrenia. The psychopathology of participants was assessed using the Chinese version of the Diagnostic Interview for Genetic Studies and the 18-item Brief Psychiatric Rating Scale. Logistic regression was used to examine the predicted probability scores of different symptom combinations in discriminating between MAP and schizophrenia. Receiver operating characteristic (ROC) analyses and the area under the curve (AUC) were further applied to examine the discrimination validity of the predicted probability scores in differentiating between MAP and schizophrenia. We found that MAP and schizophrenia demonstrated similar patterns of delusions. Compared to patients with schizophrenia, MAP experienced significantly higher proportions of visual hallucinations and of somatic or tactile hallucinations. However, MAP exhibited significantly lower severity in conceptual disorganization, mannerism/posturing, blunted affect, emotional withdrawal, and motor retardation compared to patients with schizophrenia. The ROC analysis showed that a predicted probability score combining the aforementioned seven symptom items could significantly differentiate between MAP and schizophrenia (AUC = 0.77). Findings in the current study suggest that nuanced differences might exist in the clinical presentation of secondary psychosis (MAP) and primary psychosis (schizophrenia). Combining the symptoms as a whole may help with differential diagnosis for MAP and schizophrenia. © 2016 S. Karger AG, Basel.
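The discrimination analysis above (logistic regression producing a predicted probability score, then ROC/AUC) follows a standard recipe that can be sketched with scikit-learn. The data and feature values below are hypothetical placeholders, not the study data; only the group sizes mirror the design described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 106                                  # 53 MAP + 53 schizophrenia, matching the design above
X = rng.normal(size=(n, 7))              # 7 symptom ratings (hypothetical values)
y = np.array([1] * 53 + [0] * 53)        # 1 = MAP, 0 = schizophrenia

model = LogisticRegression().fit(X, y)
score = model.predict_proba(X)[:, 1]     # predicted probability of MAP for each patient
print("AUC =", round(roc_auc_score(y, score), 2))
```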
Deribe, Kebede; Cano, Jorge; Newport, Melanie J.; Golding, Nick; Pullan, Rachel L.; Sime, Heven; Gebretsadik, Abeba; Assefa, Ashenafi; Kebede, Amha; Hailu, Asrat; Rebollo, Maria P.; Shafi, Oumer; Bockarie, Moses J.; Aseffa, Abraham; Hay, Simon I.; Reithinger, Richard; Enquselassie, Fikre; Davey, Gail; Brooker, Simon J.
2015-01-01
Background Ethiopia is assumed to have the highest burden of podoconiosis globally, but the geographical distribution and environmental limits and correlates are yet to be fully investigated. In this paper we use data from a nationwide survey to address these issues. Methodology Our analyses are based on data arising from the integrated mapping of podoconiosis and lymphatic filariasis (LF) conducted in 2013, supplemented by data from an earlier mapping of LF in western Ethiopia in 2008–2010. The integrated mapping used woreda (district) health offices’ reports of podoconiosis and LF to guide selection of survey sites. A suite of environmental and climatic data and boosted regression tree (BRT) modelling was used to investigate environmental limits and predict the probability of podoconiosis occurrence. Principal Findings Data were available for 141,238 individuals from 1,442 communities in 775 districts from all nine regional states and two city administrations of Ethiopia. In 41.9% of surveyed districts no cases of podoconiosis were identified, with all districts in Affar, Dire Dawa, Somali and Gambella regional states lacking the disease. The disease was most common, with lymphoedema positivity rate exceeding 5%, in the central highlands of Ethiopia, in Amhara, Oromia and Southern Nations, Nationalities and Peoples regional states. BRT modelling indicated that the probability of podoconiosis occurrence increased with increasing altitude, precipitation and silt fraction of soil and decreased with population density and clay content. Based on the BRT model, we estimate that in 2010, 34.9 (95% confidence interval [CI]: 20.2–51.7) million people (i.e. 43.8%; 95% CI: 25.3–64.8% of Ethiopia’s national population) lived in areas environmentally suitable for the occurrence of podoconiosis. Conclusions Podoconiosis is more widespread in Ethiopia than previously estimated, but occurs in distinct geographical regions that are tied to identifiable environmental factors. The resultant maps can be used to guide programme planning and implementation and estimate disease burden in Ethiopia. This work provides a framework with which the geographical limits of podoconiosis could be delineated at a continental scale. PMID:26222887
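Boosted regression tree modelling of occurrence probability, as used above, can be approximated with a gradient-boosted classifier. The sketch below uses scikit-learn with invented covariate names and synthetic data, so it illustrates the general workflow rather than reproducing the published model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 1000
# Hypothetical environmental covariates per surveyed community.
X = np.column_stack([
    rng.uniform(500, 3500, n),    # altitude (m)
    rng.uniform(400, 2200, n),    # annual precipitation (mm)
    rng.uniform(0, 60, n),        # silt fraction of soil (%)
    rng.uniform(0, 500, n),       # population density
])
# Synthetic presence/absence labels, just to make the example runnable.
y = (X[:, 0] > 1800).astype(int) ^ (rng.random(n) < 0.1)

brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X, y)
p_occurrence = brt.predict_proba(X)[:, 1]   # predicted probability of occurrence per location
print("mean predicted occurrence probability:", round(p_occurrence.mean(), 3))
```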
NASA Astrophysics Data System (ADS)
Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin
2010-05-01
This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models, and they were then compared by means of their validations. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears more accurate than the other models, and the results also showed that artificial neural networks are a useful tool in the preparation of collapse susceptibility maps and are highly compatible with GIS operating features. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.
Using known map category marginal frequencies to improve estimates of thematic map accuracy
NASA Technical Reports Server (NTRS)
Card, D. H.
1982-01-01
By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Challenges in making a seismic hazard map for Alaska and the Aleutians
Wesson, R.L.; Boyd, O.S.; Mueller, C.S.; Frankel, A.D.; Freymueller, J.T.
2008-01-01
We present a summary of the data and analyses leading to the revision of the time-independent probabilistic seismic hazard maps of Alaska and the Aleutians. These maps represent a revision of existing maps based on newly obtained data, and reflect best current judgments about methodology and approach. They have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States, and will be proposed for adoption in future revisions to the International Building Code. We present example maps for peak ground acceleration, 0.2 s spectral amplitude (SA), and 1.0 s SA at a probability level of 2% in 50 years (annual probability of 0.000404). In this summary, we emphasize issues encountered in preparation of the maps that motivate or require future investigation and research.
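The annual probability quoted above follows from the 2%-in-50-years exceedance level under the usual assumption of independence between years; a quick check (a standard conversion, not specific to these maps):

```python
p_50yr = 0.02
p_annual = 1 - (1 - p_50yr) ** (1 / 50)
print(round(p_annual, 6))   # 0.000404, matching the annual probability quoted above
```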
Integrating Entropy-Based Naïve Bayes and GIS for Spatial Evaluation of Flood Hazard.
Liu, Rui; Chen, Yun; Wu, Jianping; Gao, Lei; Barrett, Damian; Xu, Tingbao; Li, Xiaojuan; Li, Linyi; Huang, Chang; Yu, Jia
2017-04-01
Regional flood risk caused by intensive rainfall under extreme climate conditions has increasingly attracted global attention. Mapping and evaluation of flood hazard are vital parts of flood risk assessment. This study develops an integrated framework for estimating the spatial likelihood of flood hazard by coupling weighted naïve Bayes (WNB), geographic information system, and remote sensing. The north part of the Fitzroy River Basin in Queensland, Australia, was selected as a case study site. The environmental indices, including extreme rainfall, evapotranspiration, net-water index, soil water retention, elevation, slope, drainage proximity, and drainage density, were generated from spatial data representing climate, soil, vegetation, hydrology, and topography. These indices were weighted using the statistics-based entropy method. The weighted indices were input into the WNB-based model to delineate a regional flood risk map that indicates the likelihood of flood occurrence. The resultant map was validated by the maximum inundation extent extracted from moderate resolution imaging spectroradiometer (MODIS) imagery. The evaluation results, including mapping and evaluation of the distribution of flood hazard, are helpful in guiding flood inundation disaster responses for the region. The novel approach presented consists of weighted grid data, image-based sampling and validation, cell-by-cell probability inference and spatial mapping. It is superior to an existing spatial naive Bayes (NB) method for regional flood hazard assessment. It can also be extended to other likelihood-related environmental hazard studies. © 2016 Society for Risk Analysis.
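The entropy-based weighting mentioned above follows the standard entropy weight method; the sketch below shows that calculation on a small made-up index matrix. The index values are placeholders, and the subsequent weighted naïve Bayes step is not reproduced here.

```python
import numpy as np

def entropy_weights(index_matrix):
    """Rows = grid cells, columns = environmental indices (non-negative values).
    Returns one weight per index: low-entropy (more informative) indices get larger weights."""
    X = np.asarray(index_matrix, dtype=float)
    p = X / X.sum(axis=0)                          # column-wise proportions
    p = np.where(p > 0, p, 1.0)                    # zero cells contribute 0 to the entropy sum
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - entropy                              # degree of divergence per index
    return d / d.sum()

cells = [[120, 0.4, 3.1], [80, 0.9, 2.7], [200, 0.1, 3.0], [150, 0.5, 2.9]]
print(entropy_weights(cells).round(3))
```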
NASA Astrophysics Data System (ADS)
Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian
2018-04-01
The aftermath of wartime attacks is often felt long after the war ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images that were taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. New object configurations are created by randomly adding objects to and removing them from the current configuration, changing their positions, and modifying the ellipse parameters. Each configuration is evaluated using an energy function. High gradient magnitudes along the border of the ellipse are favored and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. For generating the impact map, a probability map is defined which is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and the automated generation of an impact map from a heterogeneous image stock.
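The final step above (turn discrete crater detections into an impact map via kernel density estimation, then threshold) can be sketched with a standard 2-D KDE. The detection coordinates, bandwidth choice and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical crater detections (x, y) in image coordinates; shape (2, n) for gaussian_kde.
detections = np.array([[12, 40], [14, 42], [15, 39], [60, 70], [62, 71]], dtype=float).T

kde = gaussian_kde(detections)                     # kernel density over detection locations
xx, yy = np.mgrid[0:100:200j, 0:100:200j]          # evaluation grid
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

threshold = 0.5 * density.max()                    # illustrative cut-off
contaminated = density >= threshold                # boolean impact map
print("contaminated fraction:", round(contaminated.mean(), 3))
```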
Spatial Distribution of Black Bear Incident Reports in Michigan
McFadden-Hiller, Jamie E.; Beyer, Dean E.; Belant, Jerrold L.
2016-01-01
Interactions between humans and carnivores have existed for centuries due to competition for food and space. American black bears are increasing in abundance and populations are expanding geographically in many portions of its range, including areas that are also increasing in human density, often resulting in associated increases in human-bear conflict (hereafter, bear incidents). We used public reports of bear incidents in Michigan, USA, from 2003–2011 to assess the relative contributions of ecological and anthropogenic variables in explaining the spatial distribution of bear incidents and estimated the potential risk of bear incidents. We used weighted Normalized Difference Vegetation Index mean as an index of primary productivity, region (i.e., Upper Peninsula or Lower Peninsula), primary and secondary road densities, and percentage land cover type within 6.5-km2 circular buffers around bear incidents and random points. We developed 22 a priori models and used generalized linear models and Akaike’s Information Criterion (AIC) to rank models. The global model was the best compromise between model complexity and model fit (w = 0.99), with a ΔAIC 8.99 units from the second best performing model. We found that as deciduous forest cover increased, the probability of bear incident occurrence increased. Among the measured anthropogenic variables, cultivated crops and primary roads were the most important in our AIC-best model and were both positively related to the probability of bear incident occurrence. The spatial distribution of relative bear incident risk varied markedly throughout Michigan. Forest cover fragmented with agriculture and other anthropogenic activities presents an environment that likely facilitates bear incidents. Our map can help wildlife managers identify areas of bear incident occurrence, which in turn can be used to help develop strategies aimed at reducing incidents. Researchers and wildlife managers can use similar mapping techniques to assess locations of specific conflict types or to address human impacts on endangered species. PMID:27119344
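The model weight quoted above (w = 0.99 with a ΔAIC of 8.99 units to the second-best model) is consistent with the standard Akaike-weight formula; a quick two-model check (ignoring the remaining candidates, whose weights would be smaller still):

```python
import numpy as np

delta_aic = np.array([0.0, 8.99])              # best model vs. second-best model
w = np.exp(-0.5 * delta_aic)
w /= w.sum()
print(w.round(3))                               # [0.989 0.011] -- ~0.99 for the global model
```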
The ATLASGAL survey: distribution of cold dust in the Galactic plane. Combination with Planck data
NASA Astrophysics Data System (ADS)
Csengeri, T.; Weiss, A.; Wyrowski, F.; Menten, K. M.; Urquhart, J. S.; Leurini, S.; Schuller, F.; Beuther, H.; Bontemps, S.; Bronfman, L.; Henning, Th.; Schneider, N.
2016-01-01
Context. Sensitive ground-based submillimeter surveys, such as ATLASGAL, provide a global view on the distribution of cold dense gas in the Galactic plane at up to two times better angular resolution compared to recent space-based surveys with Herschel. However, a drawback of ground-based continuum observations is that they intrinsically filter emission at angular scales larger than a fraction of the field-of-view of the array when subtracting the sky noise in the data processing. The lost information on the distribution of diffuse emission can, however, be recovered from space-based, all-sky surveys with Planck. Aims: Here we aim to demonstrate how this information can be used to complement ground-based bolometer data and present reprocessed maps of the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) survey. Methods: We use the maps at 353 GHz from the Planck/HFI instrument, which performed a high-sensitivity all-sky survey at a frequency close to that of the APEX/LABOCA array, which is centred on 345 GHz. Complementing the ground-based observations with information on larger angular scales, the resulting maps reveal the distribution of cold dust in the inner Galaxy with a larger spatial dynamic range. We visually describe the observed features and assess the global properties of the dust distribution. Results: Adding information from large angular scales helps to better identify the global properties of the cold Galactic interstellar medium. To illustrate this, we provide mass estimates from the dust towards the W43 star-forming region and estimate a column density contrast of at least a factor of five between a low-intensity halo and the star-forming ridge. We also show examples of elongated structures extending over angular scales of 0.5°, which we refer to as thin giant filaments. Corresponding to >30 pc structures in projection at a distance of 3 kpc, these dust lanes are very extended and show large aspect ratios. We assess the fraction of dense gas by determining the contribution of the APEX/LABOCA maps to the combined maps, and estimate a dense gas fraction of 2-5% (corresponding to A_V > 7 mag) on average in the Galactic plane. We also show probability distribution functions of the column density (N-PDF), which reveal the typically observed log-normal distribution at low column density and exhibit an excess at high column densities. As a reference for extragalactic studies, we show the line-of-sight integrated N-PDF of the inner Galaxy, and derive a contribution of this excess to the total column density of ~2.2%, corresponding to N_H2 = 2.92 × 10^22 cm^-2. Taking the total flux density observed in the maps, we provide an independent estimate of the mass of molecular gas in the inner Galaxy of ~1 × 10^9 M⊙, which is consistent with previous estimates using CO emission. From the mass and dense gas fraction (fDG), we estimate a Galactic SFR of Ṁ = 1.3 M⊙ yr^-1. Conclusions: Retrieving the extended emission helps to better identify massive giant filaments, which are elongated and confined structures. We show that the log-normal distribution of low column density gas is ubiquitous in the inner Galaxy. While the distribution of diffuse gas is relatively homogeneous in the inner Galaxy, the central molecular zone (CMZ) stands out with a higher dense gas fraction despite its low star formation efficiency. Altogether, our findings explain the observed low star formation efficiency of the Milky Way by the low fDG in the Galactic ISM.
In contrast, the high fDG observed towards the CMZ, despite its low star formation activity, suggests that, in that particular region of our Galaxy, high density gas is not the bottleneck for star formation.
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF^2), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere potentials at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
Postfragmentation density function for bacterial aggregates in laminar flow
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John
2014-01-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205
A Large-Scale Super-Structure at z=0.65 in the UKIDSS Ultra-Deep Survey Field
NASA Astrophysics Data System (ADS)
Galametz, Audrey; Candels Clustering Working Group
2017-07-01
In hierarchical structure formation scenarios, galaxies accrete along high density filaments. Superclusters represent the largest density enhancements in the cosmic web, with scales of 100 to 200 Mpc. As they represent the largest components of LSS, they are very powerful tools to constrain cosmological models. Since they also offer a wide range of densities, from infalling groups to high density cluster cores, they are also the perfect laboratory to study the influence of environment on galaxy evolution. I will present a newly discovered large scale structure at z=0.65 in the UKIDSS UDS field. Although statistically predicted, the presence of such a structure in UKIDSS, one of the most extensively covered and studied extragalactic fields, remains serendipitous. Our follow-up confirmed more than 15 group members including at least three galaxy clusters with M200 ~ 10^14 Msol. Deep spectroscopy of the quiescent core galaxies reveals that the most massive structure knots are at very different formation stages with a range of red sequence properties. Statistics allow us to map formation age across the structure's denser knots and identify where quenching is most probably occurring across the LSS. Spectral diagnostics analysis also reveals an interesting population of transition galaxies we suspect are transforming from star-forming to quiescent galaxies.
USDA-ARS?s Scientific Manuscript database
High-density genetic linkage maps are essential for fine mapping QTLs controlling disease resistance traits, such as early leaf spot (ELS), late leaf spot (LLS), and Tomato spotted wilt virus (TSWV). With completion of the genome sequences of two diploid ancestors of cultivated peanut, we could use ...
Conduits and dike distribution analysis in San Rafael Swell, Utah
NASA Astrophysics Data System (ADS)
Kiyosugi, K.; Connor, C.; Wetmore, P. H.; Ferwerda, B. P.; Germa, A.
2011-12-01
Volcanic fields generally consist of scattered monogenetic volcanoes, such as cinder cones and maars. The temporal and spatial distribution of monogenetic volcanoes and the probability of future activity within volcanic fields are studied with the goals of understanding the origins of these volcano groups and forecasting potential future volcanic hazards. The subsurface magmatic plumbing systems associated with volcanic fields, however, are rarely observed or studied. Therefore, we investigated a highly eroded and exposed magmatic plumbing system in the San Rafael Swell (UT) that consists of dikes, volcano conduits and sills. The San Rafael Swell is part of the Colorado Plateau and is located east of the Rocky Mountain seismic belt and the Basin and Range. The overburden thickness at the time of mafic magma intrusion (Pliocene; ca. 4 Ma) into Jurassic sandstone is estimated to be ~800 m based on paleotopographical reconstructions. Based on a geologic map by P. Delaney and colleagues, and new field research, a total of 63 conduits are mapped in this former volcanic field. The conduits each reveal features of a root zone and/or lower diatreme, including rapid dike expansion, peperite, and brecciated intrusive and host rocks. A recrystallized baked zone of host rock is also observed around many conduits. Most conduits are basaltic or shonkinitic with thicknesses of >10 m and are associated with feeder dikes intruded along N-S trending joints in the host rock, whereas two conduits are syenitic, suggesting development from underlying cognate sills. The conduit distribution, analyzed by a kernel function method with elliptical bandwidth, reveals an N-S elongated area of higher conduit density regardless of the azimuth of alignments of closely spaced conduits (nearest-neighbor distance <200 m). In addition, dike density was calculated as total dike length per unit area (km/km^2). The conduit and sill distribution is concordant with the high dike density area. In particular, the distribution of conduits is not random with respect to the dike distribution, with greater than 99% confidence on the basis of the Kolmogorov-Smirnov test. On the other hand, dike density at each conduit location also suggests that there is no threshold of dike density for conduit formation; in other words, conduits may develop even from short mapped dikes in low dike density areas. These results show the effectiveness of studying volcanic vent distribution to infer the size of the magmatic system below volcanic fields and highlight the uncertainty of forecasting the location of new monogenetic volcanoes in active fields, which may be associated with a single dike intrusion.
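The non-randomness claim above, assessed with a Kolmogorov-Smirnov test, can be illustrated as a two-sample comparison between dike density sampled at conduit locations and dike density sampled over the whole field. The arrays below are synthetic stand-ins for the mapped data; only the number of conduits mirrors the study.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
# Synthetic dike densities (km/km^2): field-wide background vs. values at conduit sites.
dike_density_field = rng.gamma(shape=1.5, scale=1.0, size=2000)
dike_density_at_conduits = rng.gamma(shape=3.0, scale=1.0, size=63)   # 63 mapped conduits

stat, p_value = ks_2samp(dike_density_at_conduits, dike_density_field)
print(f"KS statistic = {stat:.2f}, p = {p_value:.3g}")
# A small p-value (< 0.01) would indicate >99% confidence that conduit locations
# are not a random draw from the field-wide dike-density distribution.
```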
Local Renyi entropic profiles of DNA sequences.
Vinga, Susana; Almeida, Jonas S
2007-10-16
In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition explored therein was based on the Rényi entropy of probability density estimation (pdf) using Parzen's window method applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. The new methodology enables two results. On the one hand, it shows that the entropic profiles are directly related to the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures.
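The underlying quantity, a Rényi entropy of a Parzen-window (kernel) density estimate, can be sketched for a one-dimensional toy signal. This grid-based approximation is only meant to fix ideas; it is not the CGR/USM implementation described in the paper, and the sample data are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde

def renyi_entropy(samples, alpha=2.0, grid_points=2000):
    """Rényi entropy H_alpha = 1/(1-alpha) * log( integral of f(x)^alpha dx ),
    with f estimated by a Gaussian Parzen window (KDE) and integrated on a grid."""
    f = gaussian_kde(samples)
    lo, hi = samples.min() - 3 * samples.std(), samples.max() + 3 * samples.std()
    x = np.linspace(lo, hi, grid_points)
    dx = x[1] - x[0]
    return np.log(np.sum(f(x) ** alpha) * dx) / (1.0 - alpha)

rng = np.random.default_rng(42)
uniform_like = rng.uniform(0, 1, 500)      # "random" signal: higher entropy
clustered = rng.normal(0.5, 0.05, 500)     # repetitive, motif-like signal: lower entropy
print(renyi_entropy(uniform_like), renyi_entropy(clustered))
```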
Fracture network evaluation program (FraNEP): A software for analyzing 2D fracture trace-line maps
NASA Astrophysics Data System (ADS)
Zeeb, Conny; Gomez-Rivas, Enrique; Bons, Paul D.; Virgo, Simon; Blum, Philipp
2013-10-01
Fractures, such as joints, faults and veins, strongly influence the transport of fluids through rocks by either enhancing or inhibiting flow. Techniques used for the automatic detection of lineaments from satellite images and aerial photographs, LIDAR technologies and borehole televiewers significantly enhanced data acquisition. The analysis of such data is often performed manually or with different analysis software. Here we present a novel program for the analysis of 2D fracture networks called FraNEP (Fracture Network Evaluation Program). The program was developed using Visual Basic for Applications in Microsoft Excel™ and combines features from different existing software and characterization techniques. The main novelty of FraNEP is the possibility to analyse trace-line maps of fracture networks applying the (1) scanline sampling, (2) window sampling or (3) circular scanline and window method, without the need of switching programs. Additionally, binning problems are avoided by using cumulative distributions, rather than probability density functions. FraNEP is a time-efficient tool for the characterisation of fracture network parameters, such as density, intensity and mean length. Furthermore, fracture strikes can be visualized using rose diagrams and a fitting routine evaluates the distribution of fracture lengths. As an example of its application, we use FraNEP to analyse a case study of lineament data from a satellite image of the Oman Mountains.
Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Increasing chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
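The optimal-density idea above (increase sampling density until the re-interpolated map stops improving) can be illustrated with a toy activation map. The synthetic wavefront, the linear interpolator and the error metric below are assumptions chosen only to show the plateau behaviour, not the study's in silico model.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# Synthetic LAT map on a 4 x 4 cm patch: a planar wavefront plus a slow-conduction region.
xx, yy = np.meshgrid(np.linspace(0, 4, 80), np.linspace(0, 4, 80))
lat_true = 20 * xx + 15 * np.exp(-((xx - 2) ** 2 + (yy - 2) ** 2))

for pts_per_cm2 in (0.25, 0.5, 1.0, 2.0, 4.0):
    n = int(pts_per_cm2 * 16)                          # the patch covers 16 cm^2
    sx, sy = rng.uniform(0, 4, n), rng.uniform(0, 4, n)
    samples = 20 * sx + 15 * np.exp(-((sx - 2) ** 2 + (sy - 2) ** 2))
    lat_interp = griddata((sx, sy), samples, (xx, yy), method="linear")
    err = np.nanmean(np.abs(lat_interp - lat_true))    # ignore cells outside the convex hull
    print(f"{pts_per_cm2:4.2f} points/cm^2 -> mean |error| = {err:5.2f} ms")
```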
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C [Santa Fe, NM
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
Li, Yun; Liu, Shikai; Qin, Zhenkui; Waldbieser, Geoff; Wang, Ruijia; Sun, Luyang; Bao, Lisui; Danzmann, Roy G.; Dunham, Rex; Liu, Zhanjiang
2015-01-01
Construction of genetic linkage map is essential for genetic and genomic studies. Recent advances in sequencing and genotyping technologies made it possible to generate high-density and high-resolution genetic linkage maps, especially for the organisms lacking extensive genomic resources. In the present work, we constructed a high-density and high-resolution genetic map for channel catfish with three large resource families genotyped using the catfish 250K single-nucleotide polymorphism (SNP) array. A total of 54,342 SNPs were placed on the linkage map, which to our knowledge had the highest marker density among aquaculture species. The estimated genetic size was 3,505.4 cM with a resolution of 0.22 cM for sex-averaged genetic map. The sex-specific linkage maps spanned a total of 4,495.1 cM in females and 2,593.7 cM in males, presenting a ratio of 1.7 : 1 between female and male in recombination fraction. After integration with the previously established physical map, over 87% of physical map contigs were anchored to the linkage groups that covered a physical length of 867 Mb, accounting for ∼90% of the catfish genome. The integrated map provides a valuable tool for validating and improving the catfish whole-genome assembly and facilitates fine-scale QTL mapping and positional cloning of genes responsible for economically important traits. PMID:25428894
USDA-ARS?s Scientific Manuscript database
High-density linkage maps are fundamental to contemporary organismal research and scientific approaches to genetic improvement, especially in paleopolyploids with exceptionally complex genomes, e.g., Upland cotton (Gossypium hirsutum L., 2n=52). Using 3 full-sib intra-specific mapping populations fr...
Wen, Weie; He, Zhonghu; Gao, Fengmei; Liu, Jindong; Jin, Hui; Zhai, Shengnan; Qu, Yanying; Xia, Xianchun
2017-01-01
A high-density consensus map is a powerful tool for gene mapping, cloning and molecular marker-assisted selection in wheat breeding. The objective of this study was to construct a high-density, single nucleotide polymorphism (SNP)-based consensus map of common wheat (Triticum aestivum L.) by integrating genetic maps from four recombinant inbred line populations. The populations were each genotyped using the wheat 90K Infinium iSelect SNP assay. A total of 29,692 SNP markers were mapped on 21 linkage groups corresponding to 21 hexaploid wheat chromosomes, covering 2,906.86 cM, with an overall marker density of 10.21 markers/cM. Compared with previous maps based on the wheat 90K SNP chip, 22,736 (76.6%) of the SNPs had consistent chromosomal locations, whereas 1,974 (6.7%) showed different chromosomal locations and 4,982 (16.8%) were newly mapped. Alignment of the present consensus map and the wheat expressed sequence tags (ESTs) Chromosome Bin Map enabled assignment of 1,221 SNP markers to specific chromosome bins, and 819 ESTs were integrated into the consensus map. The marker orders of the consensus map were validated based on physical positions on the wheat genome with Spearman rank correlation coefficients ranging from 0.69 (4D) to 0.97 (1A, 4B, 5B, and 6A), and were also confirmed by comparison with genetic positions on the previous 40K SNP consensus map with Spearman rank correlation coefficients ranging from 0.84 (6D) to 0.99 (6A). Chromosomal rearrangements reported previously were confirmed in the present consensus map and new putative rearrangements were identified. In addition, an integrated consensus map was developed through the combination of five published maps with ours, containing 52,607 molecular markers. The consensus map described here provided a high-density SNP marker map and a reliable order of SNPs, representing a step forward in mapping and validation of chromosomal locations of SNPs on the wheat 90K array. Moreover, it can be used as a reference for quantitative trait loci (QTL) mapping to facilitate exploitation of genes and QTL in wheat breeding. PMID:28848588
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
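One concrete member of the generalized skew-symmetric family is the skew-normal distribution; the sketch below uses it to build an asymmetric interfacial profile and discretize it into thin constant-density slices, in the spirit of the parameterization described above. The layer densities, skewness and slice thickness are arbitrary illustrative values, and the Parratt-type reflectivity step itself is not implemented here.

```python
import numpy as np
from scipy.stats import skewnorm

# Asymmetric interface between two layers, modelled with a skew-normal CDF
# (a specific member of the generalized skew-symmetric family; parameters are illustrative).
rho_top, rho_bottom = 1.0, 3.0            # densities of the adjacent layers (arbitrary units)
z = np.arange(-50.0, 50.0, 0.5)           # depth grid, 0.5-unit slice thickness
transition = skewnorm.cdf(z, a=4.0, loc=0.0, scale=8.0)   # skewed interfacial transition
profile = rho_top + (rho_bottom - rho_top) * transition

# Discretize into independent constant-density slices with sharp interfaces,
# ready for use in a recursive (Parratt-type) reflectivity calculation.
slices = [(0.5, float(rho)) for rho in profile]   # (thickness, density) pairs
print(len(slices), slices[:3])
```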
A method for producing digital probabilistic seismic landslide hazard maps
Jibson, R.W.; Harp, E.L.; Michael, J.A.
2000-01-01
The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include: (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24 000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10 m grid spacing using ARC/INFO GIS software on a UNIX computer. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure. © 2000 Elsevier Science B.V. All rights reserved.
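The final step described above, relating modeled Newmark displacement to probability of failure, amounts to fitting a saturating curve to observed failure proportions. The functional form below, m[1 - exp(-a D^b)], is a form used in related work on these data, but the coefficients shown are placeholders for illustration, not the fitted values from this study.

```python
import numpy as np

def failure_probability(displacement_cm, m=0.3, a=0.05, b=1.5):
    """Probability of slope failure as a function of Newmark displacement D (cm),
    using a Weibull-type saturating curve; m, a, b are illustrative placeholders."""
    d = np.asarray(displacement_cm, dtype=float)
    return m * (1.0 - np.exp(-a * d ** b))

for d in (1, 5, 15, 50):
    print(f"D = {d:3d} cm -> P(failure) ~ {failure_probability(d):.2f}")
```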
Jibson, Randall W.; Harp, Edwin L.; Michael, John A.
1998-01-01
The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24,000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10-m grid spacing in the ARC/INFO GIS platform. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure.
NASA Astrophysics Data System (ADS)
Petrie, E. S.; Evans, J. P.; Richey, D.; Flores, S.; Barton, C.; Mozley, P.
2015-12-01
Sedimentary rocks in the San Rafael Swell, Utah, were deformed by Laramide compression and subsequent Neogene extension. We evaluate the effects of fault damage zone morphology (as a function of structural position) and of changes in mechanical stratigraphy on the distribution of secondary minerals across the reservoir-seal pair of the Navajo Sandstone and overlying Carmel Formation. We decipher paleo-fluid migration and examine the effect faults and fractures have on reservoir permeability and the efficacy of the top seal for a range of geo-engineering applications. Map-scale faults have an increased probability of allowing upward migration of fluids along the fault plane and within the damage zone, potentially bypassing the top seal. Field mapping, mesoscopic structural analyses, petrography, and geochemical observations demonstrate that fault zone thickness increases at structural intersections, fault relay zones, fault-related folds, and fault tips. Higher densities of faults with meters of slip and dense fracture populations are present in relay zones relative to single, discrete faults. Curvature analysis of the San Rafael monocline and fracture density data show that fracture density is highest where curvature is highest, in the syncline hinge and near faults. Fractures cross the reservoir-seal interface where fracture density is highest, and structural diagenesis includes bleaching as well as calcite and gypsum mineralization events. The link between fracture distributions and structural setting implies that transmissive fractures have predictable orientations and density distributions. At the m to cm scale, deformation-band faults and joints in the Navajo Sandstone penetrate the reservoir-seal interface and transition into open-mode fractures in the caprock seal. Scanline analysis and petrography of veins provide evidence for subsurface mineralization and fracture reactivation, suggesting that the fractures act as loci for fluid flow through time. Heterolithic caprock seals with variable fracture distributions and morphology highlight the strong link between the variation in material properties and the response to changing stress conditions. The variable connectivity of fractures and the changes in fracture density play a critical role in subsurface fluid flow.
Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy
Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong
2016-01-01
Local surface charge density of lipid membranes influences membrane–protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high-quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values. PMID:27561322
2012-01-01
Background Brassica oleracea encompasses a family of vegetables, including cabbage, that are among the most widely cultivated crops. In 2009, the B. oleracea Genome Sequencing Project was launched using next generation sequencing technology. None of the available maps were detailed enough to anchor the sequence scaffolds for the Genome Sequencing Project. This report describes the development of a large number of SSR and SNP markers from the whole genome shotgun sequence data of B. oleracea, and the construction of a high-density genetic linkage map using a doubled haploid mapping population. Results The B. oleracea high-density genetic linkage map that was constructed includes 1,227 markers in nine linkage groups spanning a total of 1197.9 cM with an average of 0.98 cM between adjacent loci. There were 602 SSR markers and 625 SNP markers on the map. The chromosome with the highest number of markers (186) was C03, and the chromosome with the smallest number of markers (99) was C09. Conclusions This first high-density map allowed the assembled scaffolds to be anchored to pseudochromosomes. The map also provides useful information for positional cloning, molecular breeding, and integration of information on genes and traits in B. oleracea. All the markers on the map will be transferable and could be used for the construction of other genetic maps. PMID:23033896
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
NASA Technical Reports Server (NTRS)
Weinman, James A.; Garan, Louis
1987-01-01
A more advanced cloud pattern analysis algorithm was subsequently developed to take the shape and brightness of the various clouds into account in a manner that is more consistent with the human analyst's perception of GOES cloud imagery. The results of that classification scheme were compared with precipitation probabilities observed from ships of opportunity off the U.S. east coast to derive empirical regressions between cloud types and precipitation probability. The cloud morphology was then quantitatively and objectively used to map precipitation probabilities during two winter months during which severe cold air outbreaks were observed over the northwest Atlantic. Precipitation probabilities associated with various cloud types are summarized. Maps of precipitation probability derived from the cloud morphology analysis program for the two months were compared with the precipitation probability derived from thirty years of ship observations.
Oil spill contamination probability in the southeastern Levantine basin.
Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam
2015-02-15
Recent gas discoveries in the eastern Mediterranean Sea led to multiple operations with substantial economic interest, and with them there is a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
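The difference between word-optimal ML decoding and bit-optimal MAP decoding can be made concrete with a brute-force toy example. The sketch below is illustrative only and not the chapter's algorithm: the codebook, the BPSK/AWGN channel model and all variable names are assumptions. It computes codeword posteriors and then marginalizes them into per-bit a posteriori probabilities, i.e. the soft output mentioned above.

```python
import numpy as np

# Toy codebook: 2 information bits plus one parity bit (assumed example).
codebook = np.array([[0, 0, 0],
                     [0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]])

def decode(received, sigma=0.8):
    """Brute-force ML and bitwise-MAP decoding over an AWGN channel.

    received : noisy BPSK observations (+1 encodes bit 0, -1 encodes bit 1).
    Returns the ML codeword and the per-bit P(bit = 1 | received).
    """
    symbols = 1.0 - 2.0 * codebook              # map bits {0,1} -> {+1,-1}
    # Log-likelihood of each codeword (equiprobable prior assumed).
    loglik = -np.sum((received - symbols) ** 2, axis=1) / (2.0 * sigma ** 2)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()                           # codeword posteriors

    ml_codeword = codebook[np.argmax(post)]      # minimizes word error probability
    # Marginal (bitwise MAP) probabilities: soft output for later decoding stages.
    p_bit_one = post @ codebook
    return ml_codeword, p_bit_one

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true = codebook[2]
    rx = (1.0 - 2.0 * true) + 0.8 * rng.standard_normal(3)
    ml, soft = decode(rx)
    print("ML codeword:", ml, "  P(bit=1):", np.round(soft, 3))
```

For codes of practical length the same bitwise posteriors are obtained with trellis-based procedures (e.g. the BCJR algorithm) rather than by enumerating all codewords.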
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engstroem, K; Casares-Magaz, O; Muren, L
Purpose: Multi-parametric MRI (mp-MRI) is being introduced in radiotherapy (RT) of prostate cancer, including for tumour delineation in focal boosting strategies. We recently developed an image-based tumour control probability model, based on cell density distributions derived from apparent diffusion coefficient (ADC) maps. Beyond tumour volume and cell densities, tumour hypoxia is also an important determinant of RT response. Since tissue perfusion from mp-MRI has been related to hypoxia we have explored the patterns of ADC and perfusion maps, and the relations between them, inside and outside prostate index lesions. Methods: ADC and perfusion maps from 20 prostate cancer patients were used, with the prostate and index lesion delineated by a dedicated uro-radiologist. To reduce noise, the maps were averaged over a 3×3×3 voxel cube. Associations between different ADC and perfusion histogram parameters within the prostate, inside and outside the index lesion, were evaluated with the Pearson’s correlation coefficient. In the voxel-wise analysis, scatter plots of ADC vs perfusion were analysed for voxels in the prostate, inside and outside of the index lesion, again with the associations quantified with the Pearson’s correlation coefficient. Results: Overall ADC was lower inside the index lesion than in the normal prostate as opposed to ktrans that was higher inside the index lesion than outside. In the histogram analysis, the minimum ktrans was significantly correlated with the maximum ADC (Pearson=0.47; p=0.03). At the voxel level, 15 of the 20 cases had a statistically significant inverse correlation between ADC and perfusion inside the index lesion; ten of the cases had a Pearson < −0.4. Conclusion: The minimum value of ktrans across the tumour was correlated to the maximum ADC. However, on the voxel level, the ‘local’ ktrans in the index lesion is inversely (i.e. negatively) correlated to the ‘local’ ADC in most patients. Research agreement with Varian Medical Systems, not related to the work presented in this abstract.
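As a rough illustration of the voxel-wise analysis described in this abstract, the sketch below smooths ADC and ktrans volumes with a 3×3×3 mean filter and correlates them inside and outside an index-lesion mask. The synthetic arrays, masks and parameter values are assumptions; only the 3×3×3 averaging and the Pearson statistics are taken from the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import pearsonr

def lesion_correlations(adc, ktrans, lesion_mask, prostate_mask):
    """Pearson correlation of ADC vs. ktrans inside and outside the index lesion.

    All inputs are 3-D arrays of identical shape; masks are boolean.
    A 3x3x3 mean filter reduces voxel noise, as in the abstract.
    """
    adc_s = uniform_filter(adc, size=3)
    ktrans_s = uniform_filter(ktrans, size=3)

    inside = lesion_mask
    outside = prostate_mask & ~lesion_mask

    r_in, p_in = pearsonr(adc_s[inside], ktrans_s[inside])
    r_out, p_out = pearsonr(adc_s[outside], ktrans_s[outside])
    return (r_in, p_in), (r_out, p_out)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shape = (16, 16, 8)
    prostate = np.ones(shape, dtype=bool)
    lesion = np.zeros(shape, dtype=bool)
    lesion[4:9, 4:9, 2:5] = True
    # Synthetic data: lower ADC and higher ktrans inside the lesion.
    adc = rng.normal(1.4, 0.2, shape) - 0.5 * lesion
    ktrans = rng.normal(0.2, 0.05, shape) + 0.3 * lesion - 0.2 * (adc - 1.4)
    print(lesion_correlations(adc, ktrans, lesion, prostate))
```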
Genome-wide linkage mapping of QTL for black point reaction in bread wheat (Triticum aestivum L.).
Liu, Jindong; He, Zhonghu; Wu, Ling; Bai, Bin; Wen, Weie; Xie, Chaojie; Xia, Xianchun
2016-11-01
Nine QTL for black point resistance in wheat were identified using a RIL population derived from a Linmai 2/Zhong 892 cross and 90K SNP assay. Black point, discoloration of the embryo end of the grain, downgrades wheat grain quality leading to significant economic losses to the wheat industry. The availability of molecular markers will accelerate improvement of black point resistance in wheat breeding. The aims of this study were to identify quantitative trait loci (QTL) for black point resistance and tightly linked molecular markers, and to search for candidate genes using a high-density genetic linkage map of wheat. A recombinant inbred line (RIL) population derived from the cross Linmai 2/Zhong 892 was evaluated for black point reaction during the 2011-2012, 2012-2013 and 2013-2014 cropping seasons, providing data for seven environments. A high-density linkage map was constructed by genotyping the RILs with the wheat 90K single nucleotide polymorphism (SNP) chip. Composite interval mapping detected nine QTL on chromosomes 2AL, 2BL, 3AL, 3BL, 5AS, 6A, 7AL (2) and 7BS, designated as QBp.caas-2AL, QBp.caas-2BL, QBp.caas-3AL, QBp.caas-3BL, QBp.caas-5AS, QBp.caas-6A, QBp.caas-7AL.1, QBp.caas-7AL.2 and QBp.caas-7BS, respectively. All resistance alleles, except for QBp.caas-7AL.1 from Linmai 2, were contributed by Zhong 892. QBp.caas-3BL, QBp.caas-5AS, QBp.caas-7AL.1, QBp.caas-7AL.2 and QBp.caas-7BS probably represent new loci for black point resistance. Sequences of tightly linked SNPs were used to survey wheat and related cereal genomes identifying three candidate genes for black point resistance. The tightly linked SNP markers can be used in marker-assisted breeding in combination with the kompetitive allele specific PCR technique to improve black point resistance.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
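For the two-class case, the idea behind such a linear feature selection can be sketched numerically: project the two multivariate normal classes onto a candidate direction, evaluate the one-dimensional misclassification probability from the projected means and variances, and minimize it over the direction. The class statistics, the simple midpoint decision threshold and the optimizer below are assumptions for illustration, not the LFSPMC program itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def misclassification_probability(b, mu1, S1, mu2, S2, p1=0.5):
    """1-D error probability after projecting two Gaussian classes onto direction b."""
    b = b / np.linalg.norm(b)
    m1, m2 = b @ mu1, b @ mu2
    s1, s2 = np.sqrt(b @ S1 @ b), np.sqrt(b @ S2 @ b)
    # Decision threshold midway between projected means (a simple assumed rule).
    t = 0.5 * (m1 + m2)
    if m1 > m2:
        m1, m2, s1, s2 = m2, m1, s2, s1
    # P(error) = p1 * P(x > t | class 1) + p2 * P(x < t | class 2)
    return p1 * norm.sf(t, m1, s1) + (1 - p1) * norm.cdf(t, m2, s2)

if __name__ == "__main__":
    mu1, mu2 = np.array([0.0, 0.0]), np.array([1.5, 1.0])
    S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
    S2 = np.array([[1.2, -0.2], [-0.2, 0.8]])
    res = minimize(misclassification_probability, x0=np.ones(2),
                   args=(mu1, S1, mu2, S2))
    b = res.x / np.linalg.norm(res.x)
    print("best direction:", np.round(b, 3),
          "  P(misclassification):", round(res.fun, 4))
```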
NASA Astrophysics Data System (ADS)
Kim, Hannah; Hong, Helen
2014-03-01
We propose an automatic method for nipple detection on 3D automated breast ultrasound (3D ABUS) images using coronal slab-average-projection and a cumulative probability map. First, to identify coronal images that show a clear distinction between the nipple-areola region and the skin, the skewness of each coronal image is measured and the negatively skewed images are selected. Then, a coronal slab-average-projection image is reformatted from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering the nipple-areola region is detected using the Hough ellipse transform in the coronal slab-average-projection image. Finally, to separate the nipple from the areola region, 3D Otsu's thresholding is applied to the elliptical ROI and a cumulative probability map in the elliptical ROI is generated by assigning high probability to low intensity regions. Falsely detected small components are eliminated using morphological opening and the center point of the detected nipple region is calculated. Experimental results show that our method provides a 94.4% nipple detection rate.
NASA Astrophysics Data System (ADS)
Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David
2016-04-01
One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure and sampling density. Researchers have limited control over these factors, and usually sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations that were calibrated from parameter values found in the literature, with the purpose of providing insight about the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution could be known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an on-going study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. For this study a virtual catchment was constructed, composed of three sub-catchments each of 500 x 500 m size. We assumed that there was no selectivity in sediment detachment or transport. A numerical exercise was performed considering these variables: 1) variability of element concentration: three levels with CVs of 20 %, 50 % and 80 %; 2) sampling density: 10, 25 and 50 "samples" per sub-catchment and element; and 3) number of elements included in the mixing model: two (determined), and five (overdetermined). This resulted in a total of 18 (3 x 3 x 2) possible combinations. The five fingerprinting elements considered in the study were: C, N, 40K, Al and Pavail, and their average values, taken from the literature, were: sub-catchment 1: 4.0 %, 0.35 %, 0.50 ppm, 5.0 ppm, 1.42 ppm, respectively; sub-catchment 2: 2.0 %, 0.18 %, 0.20 ppm, 10.0 ppm, 0.20 ppm, respectively; and sub-catchment 3: 1.0 %, 0.06 %, 1.0 ppm, 16.0 ppm, 7.8 ppm, respectively. For each sub-catchment, three maps of the spatial distribution of each element were generated using the random generator of Mejia and Rodriguez-Iturbe (1974) as described in Freeze (1980), using the average value and the three different CVs defined above. Each map for each source area and property was generated for a 100 x 100 square grid, each grid cell being 5 m x 5 m. Maps were randomly generated for each property and source area. In doing so, we did not consider the possibility of cross correlation among properties. Spatial autocorrelation was assumed to be weak.
The reason for generating the maps was to create a "virtual" situation where all the element concentration values at each point are known. Simultaneously, we arbitrarily determined the percentage of sediment coming from sub-catchments. These values were 30 %, 10 % and 60 %, for sub-catchments 1, 2 and 3, respectively. Using these values, we determined the element concentrations in the sediment. The exercise consisted of creating different sampling strategies in a virtual environment to determine an average value for each of the different maps of element concentration and sub-catchment, under different sampling densities: 200 different average values for the "high" sampling density (average of 50 samples); 400 different average values for the "medium" sampling density (average of 25 samples); and 1,000 different average values for the "low" sampling density (average of 10 samples). All these combinations of possible values of element concentrations in the source areas were solved for the concentration in the sediment already determined for the "true" solution using limSolve (Soetaert et al., 2014) in R language. The sediment source solutions found for the different situations and values were analyzed in order to: 1) evaluate the uncertainty in the sediment source attribution; and 2) explore strategies to detect the most probable solutions that might lead to improved methods for constructing the most robust mixing models. Preliminary results on these will be presented and discussed in this communication. Key words: sediment, fingerprinting, uncertainty, variability, mixing model. References Collins, A.L., Zhang, Y., McChesney, D., Walling, D.E., Haley, S.M., Smith, P. 2012. Sediment source tracing in a lowland agricultural catchment in southern England using a modified procedure combining statistical analysis and numerical modelling. Science of the Total Environment 414: 301-317. Freeze, R.A. 1980. A stochastic-conceptual analysis of rainfall-runoff processes on a hillslope. Water Resources Research 16: 391-408.
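A stripped-down version of this virtual exercise can be written in a few lines: draw noisy source-mean concentrations at a given CV and sampling density, mix them with the known proportions, solve the constrained mixing model, and inspect the spread of the recovered proportions. The solver and the renormalization step below are assumptions made for illustration (the study itself used limSolve in R); the element means and true proportions are taken from the text.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Mean element concentrations (C, N, 40K, Al, Pavail) per sub-catchment, from the text.
SOURCE_MEANS = np.array([[4.0, 0.35, 0.50,  5.0, 1.42],
                         [2.0, 0.18, 0.20, 10.0, 0.20],
                         [1.0, 0.06, 1.00, 16.0, 7.80]])
TRUE_PROPORTIONS = np.array([0.3, 0.1, 0.6])

def recover_proportions(cv=0.5, n_samples=10, n_trials=500, seed=0):
    """Monte Carlo spread of mixing-model solutions for one CV / sampling density."""
    rng = np.random.default_rng(seed)
    sediment = TRUE_PROPORTIONS @ SOURCE_MEANS          # "true" sediment signature
    solutions = []
    for _ in range(n_trials):
        # Source means estimated from n_samples noisy field samples per source area.
        noisy = rng.normal(SOURCE_MEANS[None, :, :],
                           cv * SOURCE_MEANS[None, :, :],
                           size=(n_samples, 3, 5)).mean(axis=0)
        # Solve sediment = p @ noisy with 0 <= p <= 1, then renormalize to sum to 1.
        res = lsq_linear(noisy.T, sediment, bounds=(0.0, 1.0))
        solutions.append(res.x / res.x.sum())
    return np.array(solutions)

if __name__ == "__main__":
    sols = recover_proportions(cv=0.5, n_samples=10)
    print("mean recovered proportions:", np.round(sols.mean(axis=0), 3))
    print("std of recovered proportions:", np.round(sols.std(axis=0), 3))
```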
A Mapping of the Electron Localization Function for Earth Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbs, Gerald V.; Cox, David F.; Ross, Nancy
2005-06-01
The electron localization function, ELF, generated for a number of geometry-optimized earth materials, provides a graphical representation of the spatial localization of the probability electron density distribution as embodied in domains ascribed to localized bond and lone pair electrons. The lone pair domains, displayed by the silica polymorphs quartz, coesite and cristobalite, are typically banana-shaped and oriented perpendicular to the plane of the SiOSi angle at ~0.60 Å from the O atom on the reflex side of the angle. With decreasing angle, the domains increase in magnitude, indicating an increase in the nucleophilic character of the O atom, rendering it more susceptible to potential electrophilic attack. The Laplacian isosurface maps of the experimental and theoretical electron density distribution for coesite substantiate the increase in the size of the domain with decreasing angle. Bond pair domains are displayed along each of the SiO bond vectors as discrete concave hemispherically-shaped domains at ~0.70 Å from the O atom. For more closed-shell ionic bonded interactions, the bond and lone pair domains are often coalesced, resulting in concave hemispherical toroidal-shaped domains with local maxima centered along the bond vectors. As the shared covalent character of the bonded interactions increases, the bond and lone pair domains are better developed as discrete domains. ELF isosurface maps generated for the earth materials tremolite, diopside, talc and dickite display banana-shaped lone pair domains associated with the bridging O atoms of SiOSi angles and concave hemispherical toroidal bond pair domains associated with the nonbridging ones. The lone pair domains in dickite and talc provide a basis for understanding the bonded interactions between the adjacent neutral layers. Maps were also generated for beryl, cordierite, quartz, low albite, forsterite, wadeite, åkermanite, pectolite, periclase, hurlbutite, thortveitite and vanthoffite. Strategies are reviewed for finding potential H docking sites in the silica polymorphs and related materials. As observed in an earlier study, the ELF is capable of generating bond and lone pair domains that are similar in number and arrangement to those provided by Laplacian and deformation electron density distributions. The formation of the bond and lone pair domains in the silica polymorphs and the progressive decrease in the SiO length as the value of the electron density at the bond critical point increases indicates that the SiO bonded interaction has a substantial component of covalent character.
In vivo mapping of current density distribution in brain tissues during deep brain stimulation (DBS)
NASA Astrophysics Data System (ADS)
Sajib, Saurav Z. K.; Oh, Tong In; Kim, Hyung Joong; Kwon, Oh In; Woo, Eung Je
2017-01-01
New methods for in vivo mapping of brain responses during deep brain stimulation (DBS) are indispensable to secure clinical applications. Assessment of current density distribution, induced by internally injected currents, may provide an alternative method for understanding the therapeutic effects of electrical stimulation. The current flow and pathway are affected by internal conductivity, and can be imaged using magnetic resonance-based conductivity imaging methods. Magnetic resonance electrical impedance tomography (MREIT) is an imaging method that can enable highly resolved mapping of electromagnetic tissue properties such as current density and conductivity of living tissues. In the current study, we experimentally imaged current density distribution of in vivo canine brains by applying MREIT to electrical stimulation. The current density maps of three canine brains were calculated from the measured magnetic flux density data. The absolute current density values of brain tissues, including gray matter, white matter, and cerebrospinal fluid were compared to assess the active regions during DBS. The resulting current density in different tissue types may provide useful information about current pathways and volume activation for adjusting surgical planning and understanding the therapeutic effects of DBS.
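The central reconstruction step, recovering a current density map from measured magnetic flux density, can be sketched as a finite-difference curl via Ampère's law, J = ∇×B/μ0. The example below assumes all three components of B are available on a regular grid; in practice MREIT typically measures only Bz and uses more elaborate algorithms, so this is a conceptual illustration only.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def current_density_from_b(bx, by, bz, dx, dy, dz):
    """Ampere's law J = curl(B) / mu0 on a regular grid (finite differences).

    Arrays are indexed (x, y, z); dx, dy, dz are the grid spacings in meters.
    """
    dbz_dy = np.gradient(bz, dy, axis=1)
    dby_dz = np.gradient(by, dz, axis=2)
    dbx_dz = np.gradient(bx, dz, axis=2)
    dbz_dx = np.gradient(bz, dx, axis=0)
    dby_dx = np.gradient(by, dx, axis=0)
    dbx_dy = np.gradient(bx, dy, axis=1)

    jx = (dbz_dy - dby_dz) / MU0
    jy = (dbx_dz - dbz_dx) / MU0
    jz = (dby_dx - dbx_dy) / MU0
    return jx, jy, jz

if __name__ == "__main__":
    # Synthetic check: B = c * (-y, x, 0) has curl (0, 0, 2c),
    # i.e. a uniform current density along z equal to 2c / mu0.
    n, d = 32, 1e-3
    x, y, z = np.meshgrid(np.arange(n) * d, np.arange(n) * d,
                          np.arange(n) * d, indexing="ij")
    c = 1e-6
    jx, jy, jz = current_density_from_b(-c * y, c * x, np.zeros_like(x), d, d, d)
    print("mean |Jz| (A/m^2):", float(np.mean(np.abs(jz))), " expected:", 2 * c / MU0)
```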
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of elevated volatile organic compound (VOC) concentrations in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of elevated nitrate concentrations in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
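The three-step workflow described in these data sets (attribute wells with GIS-derived predictors, fit a logistic regression, then push the model back through the rasters to build the probability map) can be sketched as follows. The predictor names, coefficients and synthetic data are assumptions; the actual study used its own GIS layers and statistical software.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Step 1: wells attributed with GIS-derived predictors (synthetic stand-ins).
n_wells = 200
X = np.column_stack([
    rng.uniform(1, 60, n_wells),      # depth to groundwater (m)
    rng.uniform(0, 2000, n_wells),    # distance to major streams/canals (m)
    rng.uniform(300, 700, n_wells),   # precipitation (mm/yr)
    rng.uniform(10, 150, n_wells),    # well depth (m)
])
# Binary response: 1 if the concentration exceeds a threshold (synthetic rule).
logit = 1.5 - 0.05 * X[:, 0] - 0.001 * X[:, 1] + 0.002 * X[:, 2] - 0.01 * X[:, 3]
y = rng.random(n_wells) < 1.0 / (1.0 + np.exp(-logit))

# Step 2: logistic regression predicting P(elevated concentration).
model = LogisticRegression(max_iter=1000).fit(X, y)

# Step 3: apply the model cell by cell to build the probability raster.
n_rows, n_cols = 50, 80
grid = np.column_stack([
    rng.uniform(1, 60, n_rows * n_cols),
    rng.uniform(0, 2000, n_rows * n_cols),
    rng.uniform(300, 700, n_rows * n_cols),
    rng.uniform(10, 150, n_rows * n_cols),
])
probability_map = model.predict_proba(grid)[:, 1].reshape(n_rows, n_cols)
print("probability raster:", probability_map.shape,
      " range:", probability_map.min().round(2), "-", probability_map.max().round(2))
```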
Qian, Wei; Fan, Guiyan; Liu, Dandan; Zhang, Helong; Wang, Xiaowu; Wu, Jian; Xu, Zhaosheng
2017-04-04
Cultivated spinach (Spinacia oleracea L.) is one of the most widely cultivated types of leafy vegetable in the world, and it has a high nutritional value. Spinach is also an ideal plant for investigating the mechanism of sex determination because it is a dioecious species with separate male and female plants. Some reports on sex-linked molecular markers in spinach have been published. However, only two genetic maps of spinach have been reported. The lack of rich and reliable molecular markers and the shortage of high-density linkage maps are important constraints in spinach research work. In this study, a high-density genetic map of spinach based on the Specific-locus Amplified Fragment Sequencing (SLAF-seq) technique was constructed; the sex-determining gene was also finely mapped. Through bioinformatic analysis, a total of 50.75 Gb of data was obtained, including 207.58 million paired-end reads. Finally, 145,456 high-quality SLAF markers were obtained, of which 27,800 were polymorphic, and 4,080 SLAF markers were mapped onto the genetic map after linkage analysis. The map spanned 1,125.97 cM with an average distance of 0.31 cM between adjacent marker loci. It was divided into 6 linkage groups corresponding to the number of spinach chromosomes. In addition, the combination of Bulked Segregation Analysis (BSA) with SLAF-seq technology (super-BSA) was employed to generate markers linked with the sex-determining gene. Combined with the high-density genetic map of spinach, the sex-determining gene X/Y was located in the region of linkage group (LG) 4 spanning 66.98 cM-69.72 cM and 75.48 cM-92.96 cM, which may be the ideal region for the sex-determining gene. A high-density genetic map of spinach based on the SLAF-seq technique was constructed with a backcross (BC1) population (the highest-density genetic map of spinach reported to date). At the same time, the sex-determining gene X/Y was mapped to LG4 with super-BSA. This map will offer a suitable basis for further study of spinach, such as gene mapping, map-based cloning of specific genes, quantitative trait locus (QTL) mapping and marker-assisted selection (MAS). It will also provide an efficient reference for studies on the mechanism of sex determination in other dioecious plants.
A novel approach to validate satellite soil moisture retrievals using precipitation data
NASA Astrophysics Data System (ADS)
Karthikeyan, L.; Kumar, D. Nagesh
2016-10-01
A novel approach is proposed that attempts to validate passive microwave soil moisture retrievals using precipitation data (applied over India). It is based on the concept that the expectation of precipitation conditioned on soil moisture follows a sigmoidal convex-concave-shaped curve, the characteristic of which was recently shown to be represented by mutual information estimated between soil moisture and precipitation. On this basis, with an emphasis over distribution-free nonparametric computations, a new measure called Copula-Kernel Density Estimator based Mutual Information (CKDEMI) is introduced. The validation approach is generic in nature and utilizes CKDEMI in tandem with a couple of proposed bootstrap strategies, to check accuracy of any two soil moisture products (here Advanced Microwave Scanning Radiometer-EOS sensor's Vrije Universiteit Amsterdam-NASA (VUAN) and University of Montana (MONT) products) using precipitation (India Meteorological Department) data. The proposed technique yields a "best choice soil moisture product" map which contains locations where any one of the two/none of the two/both the products have produced accurate retrievals. The results indicated that in general, VUA-NASA product has performed well over University of Montana's product for India. The best choice soil moisture map is then integrated with land use land cover and elevation information using a novel probability density function-based procedure to gain insight on conditions under which each of the products has performed well. Finally, the impact of using a different precipitation (Asian Precipitation-Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources) data set over the best choice soil moisture product map is also analyzed. The proposed methodology assists researchers and practitioners in selecting the appropriate soil moisture product for various assimilation strategies at both basin and continental scales.
NASA Astrophysics Data System (ADS)
Mondini, Alessandro C.; Chang, Kang-Tsung; Chiang, Shou-Hao; Schlögel, Romy; Notarnicola, Claudia; Saito, Hitoshi
2017-12-01
We propose a framework to systematically generate event landslide inventory maps from satellite images in southern Taiwan, where landslides are frequent and abundant. The spectral information is used to assess the pixel land cover class membership probability through a Maximum Likelihood classifier trained with randomly generated synthetic land cover spectral fingerprints, which are obtained from an independent training images dataset. Pixels are classified as landslides when the calculated landslide class membership probability, weighted by a susceptibility model, is higher than the membership probabilities of the other classes. We generated synthetic fingerprints from two FORMOSAT-2 images acquired in 2009 and tested the procedure on two other images, one from 2005 and the other from 2009. We also obtained two landslide maps through manual interpretation. The agreement between the two sets of inventories is given by Cohen's kappa coefficients of 0.62 and 0.64, respectively. This procedure can now classify a new FORMOSAT-2 image automatically, facilitating the production of landslide inventory maps.
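A minimal sketch of the classification rule described here: a Gaussian Maximum Likelihood classifier trained on synthetic spectral fingerprints, with the landslide-class membership probability weighted by a susceptibility prior before the arg-max. The band values, class statistics and weighting scheme below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify(pixels, class_stats, susceptibility, landslide_class=0):
    """Pixel-wise ML classification with a susceptibility-weighted landslide class.

    pixels        : (n, bands) spectral values
    class_stats   : list of (mean, cov) per land-cover class
    susceptibility: (n,) prior weight in [0, 1] for the landslide class
    """
    likelihoods = np.column_stack([
        multivariate_normal(mean=m, cov=c).pdf(pixels) for m, c in class_stats
    ])
    weighted = likelihoods.copy()
    weighted[:, landslide_class] *= susceptibility   # weight landslide membership
    return np.argmax(weighted, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Synthetic spectral fingerprints for landslide, forest, water (4 bands each).
    stats = [(np.array([0.35, 0.30, 0.28, 0.25]), np.eye(4) * 0.002),
             (np.array([0.05, 0.08, 0.06, 0.40]), np.eye(4) * 0.002),
             (np.array([0.02, 0.03, 0.04, 0.02]), np.eye(4) * 0.001)]
    pixels = np.vstack([rng.multivariate_normal(m, c, 100) for m, c in stats])
    susceptibility = rng.uniform(0.2, 1.0, len(pixels))
    labels = classify(pixels, stats, susceptibility)
    print("class counts:", np.bincount(labels))
```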
Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neurons, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
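The contrast between interpolating and smoothing count data can be reproduced with standard SciPy tools. The sampling sites, bandwidth and grid below are assumptions, and the smoother is a plain Nadaraya-Watson Gaussian kernel rather than the paper's R implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def gaussian_kernel_smooth(x, y, counts, grid_x, grid_y, sigma=0.5):
    """Nadaraya-Watson style Gaussian kernel smoothing of sampled cell counts."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    d2 = (gx[..., None] - x) ** 2 + (gy[..., None] - y) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w * counts).sum(axis=-1) / w.sum(axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Sampled cell densities (cells/mm^2) over a 10 x 10 mm "retina",
    # with a central high-density specialisation plus sampling noise.
    x, y = rng.uniform(0, 10, 150), rng.uniform(0, 10, 150)
    counts = 2000 * np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 8.0) \
        + rng.normal(0, 150, 150)

    grid_x = grid_y = np.linspace(0, 10, 50)
    gx, gy = np.meshgrid(grid_x, grid_y)

    # Linear interpolation "respects" every observation, noise included.
    interpolated = griddata((x, y), counts, (gx, gy), method="linear")
    # Kernel smoothing trades fidelity to individual samples for a cleaner map.
    smoothed = gaussian_kernel_smooth(x, y, counts, grid_x, grid_y, sigma=1.0)

    print("interpolated range:", np.nanmin(interpolated).round(0),
          np.nanmax(interpolated).round(0))
    print("smoothed range:", smoothed.min().round(0), smoothed.max().round(0))
```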
NASA Technical Reports Server (NTRS)
Hoge, F. E.; Swift, R. N.
1983-01-01
Airborne laser-induced, depth-resolved water Raman backscatter is useful in the detection and mapping of water optical transmission variations. This test, together with other field experiments, has identified the need for additional field experiments to resolve the degree of the contribution to the depth-resolved, Raman-backscattered signal waveform that is due to (1) sea surface height or elevation probability density; (2) off-nadir laser beam angle relative to the mean sea surface; and (3) the Gelbstoff fluorescence background, and the analytical techniques required to remove it. When converted to along-track profiles, the waveforms obtained reveal cells of a decreased Raman backscatter superimposed on an overall trend of monotonically decreasing water column optical transmission.
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
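For the standard textbook case of a well of width L on [0, L], the calculation the abstract alludes to can be sketched as follows (written out here as an assumed example):

\[
\psi_n(x)=\sqrt{\tfrac{2}{L}}\sin\!\left(\tfrac{n\pi x}{L}\right),
\qquad
\phi_n(p)=\frac{1}{\sqrt{2\pi\hbar}}\int_0^{L}\psi_n(x)\,e^{-ipx/\hbar}\,dx .
\]
Carrying out the integral with \(k_n=n\pi/L\) gives
\[
|\phi_n(p)|^{2}
=\frac{2}{\pi\hbar L}\,
\frac{k_n^{2}\left[1-(-1)^{n}\cos(pL/\hbar)\right]}
{\left(k_n^{2}-p^{2}/\hbar^{2}\right)^{2}},
\]
a continuous distribution that is nonzero over a whole range of momenta (the apparent singularities at \(p=\pm\hbar k_n\) are removable), rather than being concentrated at the discrete values \(\pm n\pi\hbar/L\) that the quantised energies might suggest.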
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
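The difference between classic MaxEnt and the uncertain-constraint version can be illustrated with a small numerical example: a six-sided die with an uncertain mean constraint. The root-finding solver, the Gaussian spread and the grid-based averaging below are assumptions made for illustration; returning the weighted mean of the MaxEnt solutions is a simplification of the posterior-density idea described in the abstract.

```python
import numpy as np
from scipy.optimize import brentq

FACES = np.arange(1, 7)

def maxent_probs(mean_value):
    """Classic MaxEnt point probabilities for a die with a fixed mean constraint."""
    def mean_given_lambda(lam):
        w = np.exp(lam * FACES)
        return (FACES * w).sum() / w.sum()
    if abs(mean_value - 3.5) < 1e-12:
        return np.full(6, 1.0 / 6.0)
    # Solve for the Lagrange multiplier that reproduces the constrained mean.
    lam = brentq(lambda l: mean_given_lambda(l) - mean_value, -10.0, 10.0)
    w = np.exp(lam * FACES)
    return w / w.sum()

def maxent_with_uncertain_constraint(mean_estimate, sigma, n_grid=201):
    """Average the MaxEnt solution over a Gaussian-distributed constraint value."""
    grid = np.linspace(max(1.01, mean_estimate - 4 * sigma),
                       min(5.99, mean_estimate + 4 * sigma), n_grid)
    weights = np.exp(-0.5 * ((grid - mean_estimate) / sigma) ** 2)
    weights /= weights.sum()
    probs = np.array([maxent_probs(m) for m in grid])
    return weights @ probs     # mean of the MaxEnt probabilities over the uncertainty

if __name__ == "__main__":
    print("classic MaxEnt, mean 4.5:", np.round(maxent_probs(4.5), 4))
    print("uncertain constraint    :",
          np.round(maxent_with_uncertain_constraint(4.5, sigma=0.3), 4))
```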
Isostatic gravity map with simplified geology of the Los Angeles 30 x 60 minute quadrangle
Wooley, R.J.; Yerkes, R.F.; Langenheim, V.E.; Chuang, F.C.
2003-01-01
This isostatic residual gravity map is part of the Southern California Areal Mapping Project (SCAMP) and is intended to promote further understanding of the geology in the Los Angeles 30 x 60 minute quadrangle, California, by serving as a basis for geophysical interpretations and by supporting both geological mapping and topical (especially earthquake) studies. Local spatial variations in the Earth's gravity field (after various corrections for elevation, terrain, and deep crustal structure explained below) reflect the lateral variation in density in the mid- to upper crust. Densities often can be related to rock type, and abrupt spatial changes in density commonly mark lithologic boundaries. The map shows contours of isostatic gravity overlain on a simplified geology including faults and rock types. The map is draped over shaded-relief topography to show landforms.
N'Diaye, Amidou; Haile, Jemanesh K; Fowler, D Brian; Ammar, Karim; Pozniak, Curtis J
2017-01-01
Advances in sequencing and genotyping methods have enabled cost-effective production of high-throughput single nucleotide polymorphism (SNP) markers, making them the markers of choice for linkage mapping. As a result, many laboratories have developed high-throughput SNP assays and built high-density genetic maps. However, the number of markers may, by orders of magnitude, exceed the resolution of recombination for a given population size, so that only a minority of markers can accurately be ordered. Another issue attached to the so-called 'large p, small n' problem is that high-density genetic maps inevitably result in many markers clustering at the same position (co-segregating markers). While there are a number of related papers, none have addressed the impact of co-segregating markers on genetic maps. In the present study, we investigated the effects of co-segregating markers on high-density genetic map length and marker order using empirical data from two populations of wheat, Mohawk × Cocorit (durum wheat) and Norstar × Cappelle Desprez (bread wheat). The maps of both populations consisted of 85% co-segregating markers. Our study clearly showed that an excess of co-segregating markers can lead to map expansion, but has little effect on marker order. To estimate the inflation factor (IF), we generated a total of 24,473 linkage maps (8,203 maps for Mohawk × Cocorit and 16,270 maps for Norstar × Cappelle Desprez). Using seven machine learning algorithms, we were able to predict with an accuracy of 0.7 the map expansion due to the proportion of co-segregating markers. For example, in Mohawk × Cocorit, with 10 and 80% co-segregating markers the length of the map inflated by 4.5 and 16.6%, respectively. Similarly, the map of Norstar × Cappelle Desprez expanded by 3.8 and 11.7% with 10 and 80% co-segregating markers. With the increasing number of markers on SNP-chips, the proportion of co-segregating markers in high-density maps will continue to increase, making map expansion unavoidable. Therefore, we suggest that developers improve linkage mapping algorithms for efficient analysis of high-throughput data. This study outlines a practical strategy to estimate the IF due to the proportion of co-segregating markers and outlines a method to scale the length of the map accordingly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cikota, Aleksandar; Deustua, Susana; Marleau, Francine, E-mail: acikota@eso.org
We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B – V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R_V, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B – V) with R_V = 3.1 and investigate the color excess probabilities E(B – V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa–Sap, Sab–Sbp, Sbc–Scp, Scd–Sdm, S0, and irregular galaxy classes as a function of R/R_25. We find that the largest expected reddening probabilities are in Sab–Sb and Sbc–Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio R_V using color excess probability functions and find values of R_V = 2.71 ± 1.58 for 21 SNe Ia observed in Sab–Sbp galaxies, and R_V = 1.70 ± 0.38 for 34 SNe Ia observed in Sbc–Scp galaxies.
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
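The Legendre-expansion idea can be sketched for the simplest axisymmetric case, free diffusion of the magnetization direction with no applied torque, where the l-th Legendre coefficient of the density in x = cos θ decays as exp(-l(l+1)τ) in dimensionless time τ. The initial density, mode count and time scale below are assumptions for illustration, not the paper's full spin-torque model.

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_coefficients(density_fn, lmax=40, n_quad=200):
    """Project a density on x = cos(theta), x in [-1, 1], onto Legendre polynomials."""
    x, w = L.leggauss(n_quad)
    f = density_fn(x)
    eye = np.eye(lmax + 1)
    return np.array([(2 * l + 1) / 2.0 * np.sum(w * f * L.legval(x, eye[l]))
                     for l in range(lmax + 1)])

def switching_probability(coeffs, tau):
    """P(x < 0) after dimensionless diffusion time tau (free-diffusion assumption)."""
    l = np.arange(len(coeffs))
    evolved = coeffs * np.exp(-l * (l + 1) * tau)   # each mode decays independently
    # Integrate the evolved density over x in [-1, 0] by Gauss-Legendre quadrature.
    x, w = L.leggauss(200)
    x_half = 0.5 * (x - 1.0)                        # map [-1, 1] onto [-1, 0]
    return float(np.sum(0.5 * w * L.legval(x_half, evolved)))

if __name__ == "__main__":
    # Initial state: magnetization sharply concentrated near theta = 0 (x near +1).
    kappa = 20.0
    init = lambda x: kappa * np.exp(kappa * x) / (np.exp(kappa) - np.exp(-kappa))
    c = legendre_coefficients(init)
    for tau in (0.0, 0.05, 0.2, 1.0):
        print(f"tau = {tau:4.2f}   P(switched) = {switching_probability(c, tau):.3f}")
```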
Klein, Daniel J.; Baym, Michael; Eckhoff, Philip
2014-01-01
Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by ), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which “success” is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria. PMID:25078087
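The core of such a search can be illustrated with a much simpler stand-in: run Bernoulli "simulations" at sampled parameter values, estimate the success probability across parameter space with Nadaraya-Watson kernel regression, and report where the estimate reaches the desired 95% level. The toy success function, bandwidth and sampling scheme are assumptions; the actual Separatrix Algorithm adds density correction and adaptive experiment design.

```python
import numpy as np

def true_success_probability(coverage, efficacy):
    """Hypothetical goal-achievement probability for an intervention (toy model)."""
    return 1.0 / (1.0 + np.exp(-12.0 * (coverage * efficacy - 0.55)))

def kernel_regression(params, outcomes, query, bandwidth=0.08):
    """Nadaraya-Watson estimate of P(success) at the query points."""
    d2 = ((query[:, None, :] - params[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w * outcomes).sum(axis=1) / w.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    # Each simulation run: pick (coverage, efficacy), observe a Bernoulli success.
    params = rng.uniform(0.0, 1.0, size=(2000, 2))
    outcomes = rng.random(2000) < true_success_probability(params[:, 0], params[:, 1])

    # Query a grid and flag parameter combinations estimated to reach 95% success.
    g = np.linspace(0.0, 1.0, 41)
    grid = np.array([(cov, eff) for cov in g for eff in g])
    p_hat = kernel_regression(params, outcomes.astype(float), grid)
    feasible = grid[p_hat >= 0.95]
    print(f"{len(feasible)} of {len(grid)} grid points estimated to reach 95% success")
    if len(feasible):
        print("example feasible (coverage, efficacy):", np.round(feasible[0], 2))
```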
Negative probability of random multiplier in turbulence
NASA Astrophysics Data System (ADS)
Bai, Xuan; Su, Weidong
2017-11-01
The random multiplicative process (RMP), which has been proposed for over 50 years, is a convenient phenomenological ansatz of turbulence cascade. In the RMP, the fluctuation at a large scale is statistically mapped to the one at a small scale by the linear action of an independent random multiplier (RM). Simple as it is, the RMP is powerful enough, since all of the known scaling laws can be included in this model. To the best of our knowledge, however, a direct extraction of the probability density function (PDF) of the RM has so far been absent. The reason is that the deconvolution involved in the process is ill-posed. Nevertheless, with the progress in the studies of inverse problems, the situation can be changed. By using some new regularization techniques, for the first time we recover the PDFs of the RMs in some turbulent flows. All the consistent results from various methods point to an amazing observation: the PDFs can attain negative values in some intervals, and this can also be justified by some properties of infinitely divisible distributions. Despite the conceptual unconventionality, the present study illustrates the implications of negative probability in turbulence in several aspects, with emphasis on its role in describing the interaction between fluctuations at different scales. This work is supported by the NSFC (No. 11221062 and No. 11521091).
An efficient distribution method for nonlinear transport problems in stochastic porous media
NASA Astrophysics Data System (ADS)
Ibrahima, F.; Tchelepi, H.; Meyer, D. W.
2015-12-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, information that is rarely available in other approaches yet crucial, such as the probability of rare events and saturation quantiles (e.g., P10, P50 and P90), can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.
NASA Astrophysics Data System (ADS)
Macedonio, Giovanni; Costa, Antonio; Scollo, Simona; Neri, Augusto
2015-04-01
Uncertainty in the tephra fallout hazard assessment may depend on the different meteorological datasets and eruptive source parameters used in the modelling. We present a statistical study to analyze this uncertainty for the case of a sub-Plinian eruption of Vesuvius of VEI = 4, with a column height of 18 km and a total erupted mass of 5 × 10¹¹ kg. The hazard assessment for tephra fallout is performed using the advection-diffusion model Hazmap. First, we statistically analyze different meteorological datasets: i) from the daily atmospheric soundings of the stations located in Brindisi (Italy) between 1962 and 1976 and between 1996 and 2012, and in Pratica di Mare (Rome, Italy) between 1996 and 2012; and ii) from the numerical weather prediction models of the National Oceanic and Atmospheric Administration and of the European Centre for Medium-Range Weather Forecasts. Furthermore, we vary the total mass, the total grain-size distribution, the eruption column height, and the diffusion coefficient. We then quantify the impact that the different datasets and model input parameters have on the probability maps. Results show that the parameter that most affects the tephra fallout probability maps, keeping the total mass constant, is the particle terminal settling velocity, which is a function of the total grain-size distribution, particle density and shape. In contrast, the hazard assessment depends only weakly on the choice of meteorological dataset, column height and diffusion coefficient.
STAR FORMATION IN TURBULENT MOLECULAR CLOUDS WITH COLLIDING FLOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, Tomoaki; Dobashi, Kazuhito; Shimoikura, Tomomi, E-mail: matsu@hosei.ac.jp
2015-03-10
Using self-gravitational hydrodynamical numerical simulations, we investigated the evolution of high-density turbulent molecular clouds swept by a colliding flow. The interaction of shock waves due to turbulence produces networks of thin filamentary clouds with a sub-parsec width. The colliding flow accumulates the filamentary clouds into a sheet cloud and promotes active star formation for initially high-density clouds. Clouds with a colliding flow exhibit a finer filamentary network than clouds without a colliding flow. The probability distribution functions (PDFs) for the density and column density can be fitted by lognormal functions for clouds without colliding flow. When the initial turbulence is weak, the column density PDF has a power-law wing at high column densities. The colliding flow considerably deforms the PDF, such that the PDF exhibits a double peak. The stellar mass distributions reproduced here are consistent with the classical initial mass function with a power-law index of –1.35 when the initial clouds have a high density. The distribution of stellar velocities agrees with the gas velocity distribution, which can be fitted by Gaussian functions for clouds without colliding flow. For clouds with colliding flow, the velocity dispersion of gas tends to be larger than the stellar velocity dispersion. The signatures of colliding flows and turbulence appear in channel maps reconstructed from the simulation data. Clouds without colliding flow exhibit a cloud-scale velocity shear due to the turbulence. In contrast, clouds with colliding flow show a prominent anti-correlated distribution of thin filaments between the different velocity channels, suggesting collisions between the filamentary clouds.
USDA-ARS?s Scientific Manuscript database
Consensus linkage maps are important tools in crop genomics. We have assembled a high-density tetraploid wheat consensus map by integrating 13 datasets from independent biparental populations involving durum wheat cultivars (Triticum turgidum ssp. durum), cultivated emmer (T. turgidum ssp. dicoccum...
Yu, Yang; Zhang, Xiaojun; Yuan, Jianbo; Li, Fuhua; Chen, Xiaohan; Zhao, Yongzhen; Huang, Long; Zheng, Hongkun; Xiang, Jianhai
2015-01-01
The Pacific white shrimp Litopenaeus vannamei is the dominant crustacean species in global seafood mariculture. Understanding the genome and genetic architecture is useful for deciphering complex traits and accelerating the breeding program in shrimp. In this study, a genome survey was conducted and a high-density linkage map was constructed using a next-generation sequencing approach. The genome survey was used to identify preliminary genome characteristics and to generate a rough reference for linkage map construction. De novo SNP discovery resulted in 25,140 polymorphic markers. A total of 6,359 high-quality markers were selected for linkage map construction based on marker coverage among individuals and read depths. For the linkage map, a total of 6,146 markers spanning 4,271.43 cM were mapped to 44 sex-averaged linkage groups, with an average marker distance of 0.7 cM. An integration analysis linked 5,885 genome scaffolds and 1,504 BAC clones to the linkage map. Based on the high-density linkage map, several QTLs for body weight and body length were detected. This high-density genetic linkage map reveals basic genomic architecture and will be useful for comparative genomics research, genome assembly and genetic improvement of L. vannamei and other penaeid shrimp species. PMID:26503227
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
Smith, Erik A.; Sanocki, Chris A.; Lorenz, David L.; Jacobsen, Katrin E.
2017-12-27
Streamflow distribution maps for the Cannon River and St. Louis River drainage basins were developed by the U.S. Geological Survey, in cooperation with the Legislative-Citizen Commission on Minnesota Resources, to illustrate relative and cumulative streamflow distributions. The Cannon River was selected to provide baseline data to assess the effects of potential surficial sand mining, and the St. Louis River was selected to determine the effects of ongoing Mesabi Iron Range mining. Each drainage basin (Cannon, St. Louis) was subdivided into nested drainage basins: the Cannon River was subdivided into 152 nested drainage basins, and the St. Louis River was subdivided into 353 nested drainage basins. For each smaller drainage basin, the estimated volumes of groundwater discharge (as base flow) and surface runoff flowing into all surface-water features were displayed under the following conditions: (1) extreme low-flow conditions, comparable to an exceedance-probability quantile of 0.95; (2) low-flow conditions, comparable to an exceedance-probability quantile of 0.90; (3) a median condition, comparable to an exceedance-probability quantile of 0.50; and (4) a high-flow condition, comparable to an exceedance-probability quantile of 0.02. Streamflow distribution maps were developed using flow-duration curve exceedance-probability quantiles in conjunction with Soil-Water-Balance model outputs; both the flow-duration curve and Soil-Water-Balance models were built upon previously published U.S. Geological Survey reports. The selected streamflow distribution maps provide a proactive water management tool for State cooperators by illustrating flow rates during a range of hydraulic conditions. Furthermore, after the nested drainage basins are highlighted in terms of surface-water flows, the streamflows can be evaluated in the context of meeting specific ecological flows under different flow regimes and potentially assist with decisions regarding groundwater and surface-water appropriations. Presented streamflow distribution maps are foundational work intended to support the development of additional streamflow distribution maps that include statistical constraints on the selected flow conditions.
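The exceedance-probability quantiles cited above are read directly off a flow-duration curve. The sketch below shows that quantile arithmetic on a synthetic daily streamflow series; it is not the USGS Soil-Water-Balance workflow.

```python
# Minimal sketch: flow values at the exceedance probabilities used in the report
# (0.95, 0.90, 0.50, 0.02), taken from a synthetic daily flow record.
import numpy as np

rng = np.random.default_rng(1)
daily_flow = rng.lognormal(mean=3.0, sigma=1.0, size=30 * 365)  # synthetic daily flows, cfs

def flow_at_exceedance(flows, p_exceed):
    """Flow equalled or exceeded with probability p_exceed (flow-duration curve)."""
    # An exceedance probability p corresponds to the (1 - p) quantile of the flows.
    return np.quantile(flows, 1.0 - p_exceed)

for p in (0.95, 0.90, 0.50, 0.02):
    print(f"flow exceeded {p:.0%} of the time: {flow_at_exceedance(daily_flow, p):8.1f} cfs")
```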
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
Automated MRI segmentation for individualized modeling of current flow in the human head.
Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C
2013-12-01
High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Hydrological alteration of the Upper Nakdong river under AR5 climate change scenarios
NASA Astrophysics Data System (ADS)
Kim, S.; Park, Y.; Cha, W. Y.; Okjeong, L.; Choi, J.; Lee, J.
2016-12-01
One of the tasks facing water engineers is how to account for climate change impacts in water resources management. Especially in South Korea, where almost all drinking water is taken from major rivers, public attention is focused on their eco-hydrologic status. In this study, the effect of climate change on the eco-hydrologic regime of the Upper Nakdong river, one of the major rivers in South Korea, is investigated using SWAT. The simulation results are measured using the indicators of hydrological alteration (IHA) established by the U.S. Nature Conservancy. Future climate information is obtained by scaling historical series, provided by the Korean Meteorological Administration RCM (KMA RCM) and four RCP scenarios. The KMA RCM has a 12.5-km spatial resolution over the Korean Peninsula and is produced by the UK Hadley Centre regional climate model HadGEM3-RA. The RCM bias is corrected by the kernel density distribution mapping (KDDM) method. KDDM estimates the cumulative distribution function (CDF) of each dataset using kernel density estimation, and is implemented by quantile-mapping the CDF of a present climate variable obtained from the RCM onto that of the corresponding observed climate variable. Although the simulation results from different RCP scenarios show diverse hydrologic responses in our watershed, the mainstream of future simulation results indicates that there will be more river flow in southeast Korea. The predicted impacts of hydrological alteration caused by climate change on the aquatic ecosystem in the Upper Nakdong river will be presented. Acknowledgement: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
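A minimal sketch of the KDDM idea described above, assuming synthetic precipitation series and Gaussian kernel density estimates; this is not the study's implementation.

```python
# Hedged sketch of kernel density distribution mapping (KDDM): estimate the CDFs of
# RCM-simulated and observed series with Gaussian kernel density estimates, then
# quantile-map RCM values onto the observed distribution. Data are synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
obs = rng.gamma(shape=2.0, scale=5.0, size=3000)        # observed daily precipitation (mm)
rcm_hist = rng.gamma(shape=2.0, scale=6.5, size=3000)   # biased RCM, historical period
rcm_fut = rng.gamma(shape=2.2, scale=6.5, size=3000)    # biased RCM, future period

def kde_cdf(sample, grid):
    """Cumulative distribution on `grid` from a Gaussian KDE of `sample`."""
    pdf = gaussian_kde(sample)(grid)
    cdf = np.cumsum(pdf)
    return cdf / cdf[-1]

grid = np.linspace(0.0, max(obs.max(), rcm_hist.max(), rcm_fut.max()) * 1.1, 2000)
cdf_obs, cdf_rcm = kde_cdf(obs, grid), kde_cdf(rcm_hist, grid)

# Quantile mapping: value -> probability under the RCM CDF -> value at the same
# probability under the observed CDF.
p_fut = np.interp(rcm_fut, grid, cdf_rcm)
rcm_fut_corrected = np.interp(p_fut, cdf_obs, grid)

print(f"mean before correction: {rcm_fut.mean():.2f} mm, after: {rcm_fut_corrected.mean():.2f} mm")
```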
Investigation of an Optimum Detection Scheme for a Star-Field Mapping System
NASA Technical Reports Server (NTRS)
Aldridge, M. D.; Credeur, L.
1970-01-01
An investigation was made to determine the optimum detection scheme for a star-field mapping system that uses coded detection resulting from starlight shining through specially arranged multiple slits of a reticle. The computer solution of equations derived from a theoretical model showed that the greatest probability of detection for a given star and background intensity occurred with the use of a single transparent slit. However, use of multiple slits improved the system's ability to reject the detection of undesirable lower intensity stars, but only by decreasing the probability of detection for lower intensity stars to be mapped. Also, it was found that the coding arrangement affected the root-mean-square star-position error and that detection is possible with error in the system's detected spin rate, though at a reduced probability.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Smyth, E.; Small, E. E.
2017-12-01
The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
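The three patent records above share a common recipe: fit a probability density function to residuals observed under normal asset operation, then test incoming residuals sequentially. The sketch below illustrates that recipe with a Gaussian fit and a Wald sequential probability ratio test; the fault magnitude, error rates and distributional choice are assumptions for illustration, not the patented procedure.

```python
# Hedged sketch: fit a PDF to normal-operation residuals, then run a sequential
# probability ratio test (SPRT) on new residuals to flag a mean shift.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
train_residuals = rng.normal(0.0, 0.5, size=5000)   # residuals under normal operation
mu0, sigma = stats.norm.fit(train_residuals)        # numerically fitted nominal PDF
mu1 = mu0 + 2.0 * sigma                             # hypothesized faulted mean (assumption)

def sprt(stream, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Return (decision, n_samples) from a Wald sequential probability ratio test."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        llr += stats.norm.logpdf(x, mu1, sigma) - stats.norm.logpdf(x, mu0, sigma)
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "normal", n
    return "undecided", n

faulty_stream = rng.normal(mu0 + 1.5 * sigma, sigma, size=1000)   # degraded asset
print(sprt(faulty_stream, mu0, mu1, sigma))
```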
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns as well as their cumulative gain probability and probability density functions of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources such as the emerging terrestrial system, High Density Fixed Service, with the Ka-band receiving antenna systems in Goldstone Station of the Deep Space Network.
Higher-dimensional attractors with absolutely continuous invariant probability
NASA Astrophysics Data System (ADS)
Bocker, Carlos; Bortolotti, Ricardo
2018-05-01
Consider a dynamical system given by , where E is a linear expanding map of , C is a linear contracting map of and f is in . We provide sufficient conditions for E that imply the existence of an open set of pairs for which the corresponding dynamic T admits a unique absolutely continuous invariant probability. A geometrical characteristic of transversality between self-intersections of images of is present in the dynamic of the maps in . In addition, we give a condition between E and C under which it is possible to perturb f to obtain a pair in .
Prior-knowledge-based spectral mixture analysis for impervious surface mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jinshui; He, Chunyang; Zhou, Yuyu
2014-01-03
In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions. First, an urban area was categorized into high- and low-density urban areas, using a multi-step classification method. Next, in high-density urban areas that were assumed to have only vegetation and impervious surfaces (ISs), the Vegetation-Impervious (V-I) model was used in a spectral mixture analysis (SMA) with three endmembers: vegetation, high albedo, and low albedo. In low-density urban areas, the Vegetation-Impervious-Soil (V-I-S) model was used in an SMA analysis with four endmembers: high albedo, low albedo, soil, and vegetation. The fraction of IS with high and low albedo in each pixel was combined to produce the final IS map. The root mean-square error (RMSE) of the IS map produced using PKSMA was about 11.0%, compared to 14.52% using four-endmember SMA. Particularly in high-density urban areas, PKSMA (RMSE = 6.47%) showed better performance than four-endmember SMA (RMSE = 15.91%). The results indicate that PKSMA can improve IS mapping compared to traditional SMA by using appropriately selected endmembers and is particularly strong in high-density urban areas.
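The SMA step can be illustrated with a small linear-unmixing sketch: solve for non-negative endmember fractions of a pixel spectrum and normalize them to sum to one. The endmember spectra and mixing proportions below are synthetic, not the PKSMA endmembers of the study.

```python
# Illustrative sketch of linear spectral mixture analysis: non-negative least
# squares unmixing of one pixel against four V-I-S endmembers (synthetic values).
import numpy as np
from scipy.optimize import nnls

# Columns: vegetation, high-albedo impervious, low-albedo impervious, soil.
# Rows: reflectance in 6 spectral bands (synthetic).
endmembers = np.array([
    [0.04, 0.30, 0.06, 0.12],
    [0.06, 0.34, 0.07, 0.16],
    [0.05, 0.38, 0.08, 0.20],
    [0.45, 0.42, 0.09, 0.26],
    [0.22, 0.48, 0.10, 0.32],
    [0.10, 0.50, 0.11, 0.36],
])

pixel = 0.5 * endmembers[:, 0] + 0.3 * endmembers[:, 1] + 0.2 * endmembers[:, 3]
pixel += np.random.default_rng(4).normal(0, 0.005, size=pixel.shape)   # sensor noise

fractions, residual = nnls(endmembers, pixel)   # non-negative least squares
fractions /= fractions.sum()                    # enforce (approximate) sum-to-one

impervious_fraction = fractions[1] + fractions[2]   # high- plus low-albedo fractions
print(f"fractions: {np.round(fractions, 3)}, impervious share: {impervious_fraction:.2f}, residual norm: {residual:.4f}")
```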
NASA Astrophysics Data System (ADS)
Arca, B.; Salis, M.; Bacciu, V.; Duce, P.; Pellizzaro, G.; Ventura, A.; Spano, D.
2009-04-01
Although lightning is the main cause of ignition in many countries, forest fires in the Mediterranean Basin are predominantly ignited by arson or by human negligence. The fire season peaks coincide with extreme weather conditions (mainly strong winds, hot temperatures, low atmospheric water vapour content) and high tourist presence. Many studies have reported that in the Mediterranean Basin the projected impacts of climate change will cause greater weather variability and extreme weather conditions, with drier and hotter summers and heat waves. At the long-term scale, climate change could affect the fuel load and the dead/live fuel ratio, and therefore could change vegetation flammability. At the short-term scale, the increase of extreme weather events could directly affect fuel water status, and it could increase large fire occurrence. In this context, detecting the areas characterized by both high probability of large fire occurrence and high fire severity could represent an important component of fire management planning. In this work we compared several fire probability and severity maps (fire occurrence, rate of spread, fireline intensity, flame length) obtained for a study area located in North Sardinia, Italy, using the FlamMap simulator (USDA Forest Service, Missoula). FlamMap computes the potential fire behaviour characteristics over a defined landscape for given weather, wind and fuel moisture data. Different weather and fuel moisture scenarios were tested to predict the potential impact of climate change on fire parameters. The study area, characterized by a mosaic of urban areas, protected areas, and other areas subject to anthropogenic disturbances, is mainly composed of fire-prone Mediterranean maquis. The input themes needed to run FlamMap were entered as grids with a resolution of 10 m; the wind data, obtained using a computational fluid-dynamic model, were inserted as a gridded file with a resolution of 50 m. The analysis revealed high fire probability and severity in most of the areas, and therefore a high potential danger. The FlamMap outputs and the derived fire probability maps can be used in decision support systems for fire spread and behaviour and for fire danger assessment under actual and future fire regimes.
Combined EDL-Mobility Planning for Planetary Missions
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Balaram, Bob
2011-01-01
This paper presents an analysis framework for planetary missions that have coupled mobility and EDL (Entry-Descent-Landing) systems. Traditional systems engineering approaches to mobility missions such as MERs (Mars Exploration Rovers) and MSL (Mars Science Laboratory) study the EDL system and the mobility system independently, and do not perform an explicit trade-off between them or minimize the risk of the overall system. A major challenge is that EDL operation is inherently uncertain and its analysis results, such as the landing footprint, are described using a PDF (probability density function). The proposed approach first builds a mobility cost-to-go map that encodes the driving cost from any point on the map to a science target location. The cost could include a variety of metrics such as traverse distance, time, wheel rotation on soft soil, and closeness to hazards. It then convolves the mobility cost-to-go map with the landing PDF given by the EDL system, which provides a histogram of driving cost that can be used to evaluate the overall risk of the mission. By capturing the coupling between EDL and mobility explicitly, this analysis framework enables quantitative trade-offs between EDL and mobility system performance, as well as the characterization of risks in a statistical way. Simulation results are presented using realistic Mars terrain data.
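The weighting of a mobility cost-to-go map by a landing PDF can be sketched in a few lines. The grids, target location and Gaussian landing footprint below are assumptions for illustration, not the JPL analysis tool.

```python
# Minimal sketch: weight a mobility cost-to-go map by an EDL landing PDF to obtain
# the expected driving cost and a probability-weighted histogram of cost.
import numpy as np

ny, nx = 200, 200
y, x = np.mgrid[0:ny, 0:nx]

target = (150, 160)
cost_to_go = np.hypot(y - target[0], x - target[1]) * 10.0   # meters of traverse (toy model)

# Landing PDF: bivariate Gaussian centred on the nominal landing site.
mu_y, mu_x, sig_y, sig_x = 100, 80, 25.0, 40.0               # landing ellipse (grid cells)
landing_pdf = np.exp(-0.5 * (((y - mu_y) / sig_y) ** 2 + ((x - mu_x) / sig_x) ** 2))
landing_pdf /= landing_pdf.sum()                              # normalize over the map

expected_cost = np.sum(landing_pdf * cost_to_go)
hist, edges = np.histogram(cost_to_go, bins=20, weights=landing_pdf)

print(f"expected traverse cost: {expected_cost:.0f} m")
print("P(cost > 1500 m) =", float(hist[edges[:-1] > 1500.0].sum()))
```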
Formaldehyde in the Diffuse Interstellar Cloud MBM40
NASA Astrophysics Data System (ADS)
Joy, Mackenzie; Magnani, Loris A.
2018-06-01
MBM40, a high-latitude molecular cloud, has been extensively studied using different molecular tracers. It appears that MBM40 is composed of a relatively dense, helical filament embedded in a more diffuse substrate of low-density molecular gas. In order to study the transition between the two regimes, this project presents the first high-resolution mapping of MBM40 using the 1_10-1_11 hyperfine transition of formaldehyde (H2CO) at 4.83 GHz. We used H2CO spectra obtained with the Arecibo telescope more than a decade ago to construct this map. The results can be compared to previous maps made from the CO(1-0) transition to gain further understanding of the structure of the cloud. The intensity of the H2CO emission was compared to the CO emission. Although a correlation exists between the H2CO and CO emissivity, there seems to be a saturation of H2CO line strength for stronger CO emissivity. This is probably a radiative transfer effect of the CO emission. We have also found that the velocity dispersion of H2CO in the lower ridge of the cloud is significantly lower than in the rest of the cloud. This may indicate that this portion of the cloud is a coherent structure (analogous to an eddy) in a turbulent flow.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance 1, which is the distance between the release point and the boundary beyond which the population is absent.
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
Victor A. Rudis
2000-01-01
Scant information exists about the spatial extent of human impact on forest resource supplies, i.e., depreciative and nonforest uses. I used observations of ground-sampled land use and intrusions on forest land to map the probability of resource use and human impact for broad areas. Data came from a seven-state survey region (Alabama, Arkansas, Louisiana, Mississippi,...
2013-01-01
Background As for other major crops, achieving a complete wheat genome sequence is essential for the application of genomics to breeding new and improved varieties. To overcome the complexities of the large, highly repetitive and hexaploid wheat genome, the International Wheat Genome Sequencing Consortium established a chromosome-based strategy that was validated by the construction of the physical map of chromosome 3B. Here, we present improved strategies for the construction of highly integrated and ordered wheat physical maps, using chromosome 1BL as a template, and illustrate their potential for evolutionary studies and map-based cloning. Results Using a combination of novel high throughput marker assays and an assembly program, we developed a high quality physical map representing 93% of wheat chromosome 1BL, anchored and ordered with 5,489 markers including 1,161 genes. Analysis of the gene space organization and evolution revealed that gene distribution and conservation along the chromosome results from the superimposition of the ancestral grass and recent wheat evolutionary patterns, leading to a peak of synteny in the central part of the chromosome arm and an increased density of non-collinear genes towards the telomere. With a density of about 11 markers per Mb, the 1BL physical map provides 916 markers, including 193 genes, for fine mapping the 40 QTLs mapped on this chromosome. Conclusions Here, we demonstrate that high marker density physical maps can be developed in complex genomes such as wheat to accelerate map-based cloning, gain new insights into genome evolution, and provide a foundation for reference sequencing. PMID:23800011
Agarwal, Gaurav; Clevenger, Josh; Pandey, Manish K; Wang, Hui; Shasidhar, Yaduru; Chu, Ye; Fountain, Jake C; Choudhary, Divya; Culbreath, Albert K; Liu, Xin; Huang, Guodong; Wang, Xingjun; Deshmukh, Rupesh; Holbrook, C Corley; Bertioli, David J; Ozias-Akins, Peggy; Jackson, Scott A; Varshney, Rajeev K; Guo, Baozhu
2018-04-10
Whole-genome resequencing (WGRS) of mapping populations has facilitated development of high-density genetic maps essential for fine mapping and candidate gene discovery for traits of interest in crop species. Leaf spots, including early leaf spot (ELS) and late leaf spot (LLS), and Tomato spotted wilt virus (TSWV) are devastating diseases in peanut causing significant yield loss. We generated WGRS data on a recombinant inbred line population, developed a SNP-based high-density genetic map, and conducted fine mapping, candidate gene discovery and marker validation for ELS, LLS and TSWV. The first sequence-based high-density map was constructed with 8869 SNPs assigned to 20 linkage groups, representing 20 chromosomes, for the 'T' population (Tifrunner × GT-C20) with a map length of 3120 cM and an average distance of 1.45 cM. The quantitative trait locus (QTL) analysis using high-density genetic map and multiple season phenotyping data identified 35 main-effect QTLs with phenotypic variation explained (PVE) from 6.32% to 47.63%. Among major-effect QTLs mapped, there were two QTLs for ELS on B05 with 47.42% PVE and B03 with 47.38% PVE, two QTLs for LLS on A05 with 47.63% and B03 with 34.03% PVE and one QTL for TSWV on B09 with 40.71% PVE. The epistasis and environment interaction analyses identified significant environmental effects on these traits. The identified QTL regions had disease resistance genes including R-genes and transcription factors. KASP markers were developed for major QTLs and validated in the population and are ready for further deployment in genomics-assisted breeding in peanut. © 2018 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.
U.S.A. National Surface Rock Density Map - Part 2
NASA Astrophysics Data System (ADS)
Winester, D.
2016-12-01
A map of surface rock densities over the USA has been developed by the NOAA-National Geodetic Survey (NGS) as part of its Gravity for the Redefinition of the American Vertical Datum (GRAV-D) Program. GRAV-D is part of an international effort to generate a North American gravimetric geoid for use as the vertical datum reference surface. As part of the modeling process, it is necessary to eliminate from the observed gravity data the topographic and density effects of all masses above the geoid. However, the long-standing tradition in geoid modeling, which is to use an average rock density (e.g. 2.67 g/cm3), does not adequately represent the variety of lithologies in the USA. The U.S. Geological Survey has assembled a downloadable set of surface geologic formation maps (typically 1:100,000 to 1:500,000 scale in NAD27) in GIS format. The lithologies were assigned densities typical of their rock type (Part 1), and these densities were then rasterized and averaged over one-arc-minute areas. All were then transformed into the WGS84 datum. Thin layers of alluvium and some water bodies (interpreted to be less than 40 m thick) have been ignored in deference to underlying rocks. Deep alluvial basins have not been removed, since they represent a significant fraction of local mass. The initial assumption for modeling densities will be that the surface rock densities extend down to the geoid. If this results in poor modeling, variable lithologies with depth can be attempted. Initial modeling will use elevations from the SRTM DEM. A map of CONUS densities is presented (denser lithologies are shown brighter). While a visual map at this scale does show detailed features, digital versions are available upon request. Also presented are some pitfalls of using source GIS maps digitized from variable reference sources, including the infamous 'state line faults.'
Use of total electron content data to analyze ionosphere electron density gradients
NASA Astrophysics Data System (ADS)
Nava, B.; Radicella, S. M.; Leitinger, R.; Coïsson, P.
In the presence of electron density gradients the thin shell approximation for the ionosphere, used together with a simple mapping function to convert slant total electron content (TEC) to vertical TEC, could lead to TEC conversion errors. These "mapping function errors" can therefore be used to detect the electron density gradients in the ionosphere. In the present work GPS derived slant TEC data have been used to investigate the effects of the electron density gradients in the middle and low latitude ionosphere under geomagnetic quiet and disturbed conditions. In particular the data corresponding to the geographic area of the American Sector for the days 5-7 April 2000 have been used to perform a complete analysis of mapping function errors based on the "coinciding pierce point technique". The results clearly illustrate the electron density gradient effects according to the locations considered and to the actual levels of disturbance of the ionosphere. In addition, the possibility to assess an ionospheric shell height able to minimize the mapping function errors has been verified.
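For reference, the slant-to-vertical conversion that gives rise to these mapping function errors is usually written with the single-layer (thin shell) obliquity factor. The sketch below shows that conversion; the shell height, elevation angle and TEC values are example numbers, not those of the cited analysis.

```python
# Sketch of the single-layer (thin shell) mapping function used to convert slant
# TEC to vertical TEC; electron density gradients make the result shell-height
# dependent, which is what the analysis above exploits.
import numpy as np

R_E = 6371.0  # Earth radius, km

def vertical_tec(slant_tec, elevation_deg, shell_height_km=350.0):
    """Convert slant TEC to vertical TEC with the thin-shell mapping function."""
    z = np.radians(90.0 - elevation_deg)                       # zenith angle at the receiver
    sin_zp = (R_E / (R_E + shell_height_km)) * np.sin(z)       # zenith angle at the pierce point
    mapping = 1.0 / np.sqrt(1.0 - sin_zp ** 2)                 # obliquity factor M(z)
    return slant_tec / mapping

for h in (300.0, 350.0, 450.0):   # assumed shell heights, km
    print(f"shell {h:4.0f} km: VTEC = {vertical_tec(80.0, 25.0, h):.2f} TECU "
          f"from 80 TECU slant at 25 deg elevation")
```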
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of unmixed young groundwater (defined using chlorofluorocarbon-11 concentrations and tritium activities) in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
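Step (2) of the workflow, the logistic regression linking well attributes to the probability of detecting unmixed young groundwater, can be sketched as follows. The predictors, coefficients and data are synthetic placeholders, not the Eagle River data set.

```python
# Hedged sketch of step (2): fit a logistic regression on well attributes, then
# apply it to gridded predictor layers to build a probability map.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_wells = 300
X = np.column_stack([
    rng.uniform(1, 60, n_wells),      # depth to groundwater (m)
    rng.uniform(0, 2000, n_wells),    # distance to major stream or canal (m)
    rng.uniform(5, 120, n_wells),     # well depth (m)
])
# Synthetic "young water" indicator: shallow wells near streams are more likely young.
logit = 1.5 - 0.05 * X[:, 0] - 0.001 * X[:, 1] - 0.01 * X[:, 2]
y = rng.random(n_wells) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=5000).fit(X, y)

# Apply the fitted model to rasterized predictor layers (50 x 50 grid; depth to
# groundwater and stream distance vary, well depth held at 30 m).
depth_g, dist_g = np.meshgrid(np.linspace(1, 60, 50), np.linspace(0, 2000, 50), indexing="ij")
grid = np.column_stack([depth_g.ravel(), dist_g.ravel(), np.full(depth_g.size, 30.0)])
prob_map = model.predict_proba(grid)[:, 1].reshape(depth_g.shape)
print(f"max predicted probability of unmixed young groundwater: {prob_map.max():.2f}")
```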
Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals
NASA Astrophysics Data System (ADS)
Buchert, Thomas; France, Martin J.; Steiner, Frank
2017-05-01
Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2), which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second-order expansions of Matsubara to arbitrary order in the standard deviation σ_0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground-corrected masked Planck 2015 maps.
Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.
2003-01-01
These maps present preliminary assessments of the probability of debris-flow activity and estimates of peak discharges that can potentially be generated by debris-flows issuing from basins burned by the Piru, Simi and Verdale Fires of October 2003 in southern California in response to the 25-year, 10-year, and 2-year 1-hour rain storms. The probability maps are based on the application of a logistic multiple regression model that describes the percent chance of debris-flow production from an individual basin as a function of burned extent, soil properties, basin gradients and storm rainfall. The peak discharge maps are based on application of a multiple-regression model that can be used to estimate debris-flow peak discharge at a basin outlet as a function of basin gradient, burn extent, and storm rainfall. Probabilities of debris-flow occurrence for the Piru Fire range between 2 and 94% and estimates of debris flow peak discharges range between 1,200 and 6,640 ft3/s (34 to 188 m3/s). Basins burned by the Simi Fire show probabilities for debris-flow occurrence between 1 and 98%, and peak discharge estimates between 1,130 and 6,180 ft3/s (32 and 175 m3/s). The probabilities for debris-flow activity calculated for the Verdale Fire range from negligible values to 13%. Peak discharges were not estimated for this fire because of these low probabilities. These maps are intended to identify those basins that are most prone to the largest debris-flow events and provide information for the preliminary design of mitigation measures and for the planning of evacuation timing and routes.
Reciprocal-space mapping of epitaxic thin films with crystallite size and shape polydispersity.
Boulle, A; Conchon, F; Guinebretière, R
2006-01-01
A development is presented that allows the simulation of reciprocal-space maps (RSMs) of epitaxic thin films exhibiting fluctuations in the size and shape of the crystalline domains over which diffraction is coherent (crystallites). Three different crystallite shapes are studied, namely parallelepipeds, trigonal prisms and hexagonal prisms. For each shape, two cases are considered. Firstly, the overall size is allowed to vary but with a fixed thickness/width ratio. Secondly, the thickness and width are allowed to vary independently. The calculations are performed assuming three different size probability density functions: the normal distribution, the lognormal distribution and a general histogram distribution. In all cases considered, the computation of the RSM only requires a two-dimensional Fourier integral and the integrand has a simple analytical expression, i.e. there is no significant increase in computing times by taking size and shape fluctuations into account. The approach presented is compatible with most lattice disorder models (dislocations, inclusions, mosaicity, ...) and allows a straightforward account of the instrumental resolution. The applicability of the model is illustrated with the case of an yttria-stabilized zirconia film grown on sapphire.
Clustering the Orion B giant molecular cloud based on its molecular emission.
Bron, Emeric; Daudon, Chloé; Pety, Jérôme; Levrier, François; Gerin, Maryvonne; Gratier, Pierre; Orkisz, Jan H; Guzman, Viviana; Bardeau, Sébastien; Goicoechea, Javier R; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Sievers, Albrecht; Tremblin, Pascal
2018-02-01
Previous attempts at segmenting molecular line maps of molecular clouds have focused on using position-position-velocity data cubes of a single molecular line to separate the spatial components of the cloud. In contrast, wide field spectral imaging over a large spectral bandwidth in the (sub)mm domain now allows one to combine multiple molecular tracers to understand the different physical and chemical phases that constitute giant molecular clouds (GMCs). We aim at using multiple tracers (sensitive to different physical processes and conditions) to segment a molecular cloud into physically/chemically similar regions (rather than spatially connected components), thus disentangling the different physical/chemical phases present in the cloud. We use a machine learning clustering method, namely the Meanshift algorithm, to cluster pixels with similar molecular emission, ignoring spatial information. Clusters are defined around each maximum of the multidimensional Probability Density Function (PDF) of the line integrated intensities. Simple radiative transfer models were used to interpret the astrophysical information uncovered by the clustering analysis. A clustering analysis based only on the J = 1-0 lines of three isotopologues of CO proves sufficient to reveal distinct density/column density regimes (n_H ~ 100 cm^-3, ~500 cm^-3, and >1000 cm^-3), closely related to the usual definitions of diffuse, translucent and high-column-density regions. Adding two UV-sensitive tracers, the J = 1-0 line of HCO+ and the N = 1-0 line of CN, allows us to distinguish two clearly distinct chemical regimes, characteristic of UV-illuminated and UV-shielded gas. The UV-illuminated regime shows overbright HCO+ and CN emission, which we relate to a photochemical enrichment effect. We also find a tail of high CN/HCO+ intensity ratio in UV-illuminated regions. Finer distinctions in density classes (n_H ~ 7 × 10^3 cm^-3 to ~4 × 10^4 cm^-3) for the densest regions are also identified, likely related to the higher critical density of the CN and HCO+ (1-0) lines. These distinctions are only possible because the high-density regions are spatially resolved. Molecules are versatile tracers of GMCs because their line intensities bear the signature of the physics and chemistry at play in the gas. The association of simultaneous multi-line, wide-field mapping and powerful machine learning methods such as the Meanshift clustering algorithm reveals how to decode the complex information available in these molecular tracers.
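A minimal sketch of the clustering step: treat each pixel as a point in the space of line integrated intensities and cluster with the Meanshift algorithm, ignoring pixel positions. The three-tracer intensities below are synthetic, not the Orion B data.

```python
# Illustrative sketch: Meanshift clustering of per-pixel integrated intensities in
# three CO isotopologue lines (synthetic values standing in for the survey data).
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(6)
diffuse = rng.normal([3.0, 0.5, 0.05], 0.3, size=(2000, 3))      # K km/s in three tracers
translucent = rng.normal([8.0, 2.5, 0.3], 0.5, size=(1500, 3))
dense = rng.normal([15.0, 7.0, 2.0], 0.8, size=(500, 3))
intensities = np.vstack([diffuse, translucent, dense])

bandwidth = estimate_bandwidth(intensities, quantile=0.1)
labels = MeanShift(bandwidth=bandwidth).fit_predict(intensities)

for k in np.unique(labels):
    centre = intensities[labels == k].mean(axis=0)
    print(f"cluster {k}: {np.sum(labels == k):4d} pixels, mean intensities = {np.round(centre, 2)}")
```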
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Polder maps: Improving OMIT maps by excluding bulk solvent
Liebschner, Dorothee; Afonine, Pavel V.; Moriarty, Nigel W.; ...
2017-02-01
The crystallographic maps that are routinely used during the structure-solution workflow are almost always model-biased because model information is used for their calculation. As these maps are also used to validate the atomic models that result from model building and refinement, this constitutes an immediate problem: anything added to the model will manifest itself in the map and thus hinder the validation. OMIT maps are a common tool to verify the presence of atoms in the model. The simplest way to compute an OMIT map is to exclude the atoms in question from the structure, update the corresponding structure factors and compute a residual map. It is then expected that if these atoms are present in the crystal structure, the electron density for the omitted atoms will be seen as positive features in this map. This, however, is complicated by the flat bulk-solvent model which is almost universally used in modern crystallographic refinement programs. This model postulates constant electron density at any voxel of the unit-cell volume that is not occupied by the atomic model. Consequently, if the density arising from the omitted atoms is weak then the bulk-solvent model may obscure it further. A possible solution to this problem is to prevent bulk solvent from entering the selected OMIT regions, which may improve the interpretative power of residual maps. This approach is called a polder (OMIT) map. Polder OMIT maps can be particularly useful for displaying weak densities of ligands, solvent molecules, side chains, alternative conformations and residues both in terminal regions and in loops. As a result, the tools described in this manuscript have been implemented and are available in PHENIX.
NASA Astrophysics Data System (ADS)
Gaczkowski, B.; Preibisch, T.; Stanke, T.; Krause, M. G. H.; Burkert, A.; Diehl, R.; Fierlinger, K.; Kroell, D.; Ngoumou, J.; Roccatagliata, V.
2015-12-01
Context. The Lupus I cloud is found between the Upper Scorpius (USco) and the Upper Centaurus-Lupus (UCL) subgroups of the Scorpius-Centaurus OB association, where the expanding USco H I shell appears to interact with a bubble currently driven by the winds of the remaining B-stars of UCL. Aims: We want to study how collisions of large-scale interstellar gas flows form and influence new dense clouds in the ISM. Methods: We performed LABOCA continuum sub-mm observations of Lupus I that provide for the first time a direct view of the densest, coldest cloud clumps and cores at high angular resolution. We complemented these data with Herschel and Planck data from which we constructed column density and temperature maps. From the Herschel and LABOCA column density maps we calculated probability density functions (PDFs) to characterize the density structure of the cloud. Results: The northern part of Lupus I is found to have, on average, lower densities, higher temperatures, and no active star formation. The center-south part harbors dozens of pre-stellar cores where density and temperature reach their maximum and minimum, respectively. Our analysis of the column density PDFs from the Herschel data show double-peak profiles for all parts of the cloud, which we attribute to an external compression. In those parts with active star formation, the PDF shows a power-law tail at high densities. The PDFs we calculated from our LABOCA data trace the denser parts of the cloud showing one peak and a power-law tail. With LABOCA we find 15 cores with masses between 0.07 and 1.71 M⊙ and a total mass of ≈8 M⊙. The total gas and dust mass of the cloud is ≈164 M⊙ and hence ~5% of the mass is in cores. From the Herschel and Planck data we find a total mass of ≈174 M⊙ and ≈171 M⊙, respectively. Conclusions: The position, orientation, and elongated shape of Lupus I, the double-peak PDFs and the population of pre-stellar and protostellar cores could be explained by the large-scale compression from the advancing USco H I shell and the UCL wind bubble. The Atacama Pathfinder Experiment (APEX) is a collaboration between the Max-Planck-Institut für Radioastronomie (MPIfR), the European Southern Observatory (ESO), and the Onsala Space Observatory (OSO).Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.Final APEX cube and Herschel N and T maps as FITS files are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/584/A36
LaMotte, A.E.; Greene, E.A.
2007-01-01
Spatial relations between land use and groundwater quality in the watershed adjacent to Assateague Island National Seashore, Maryland and Virginia, USA were analyzed by the use of two spatial models. One model used a logit analysis and the other was based on geostatistics. The models were developed and compared on the basis of existing concentrations of nitrate as nitrogen in samples from 529 domestic wells. The models were applied to produce spatial probability maps that show areas in the watershed where concentrations of nitrate in groundwater are likely to exceed a predetermined management threshold value. Maps of the watershed generated by logistic regression and probability kriging analysis showing where the probability of nitrate concentrations would exceed 3 mg/L (>0.50) compared favorably. Logistic regression was less dependent on the spatial distribution of sampled wells, and identified an additional high probability area within the watershed that was missed by probability kriging. The spatial probability maps could be used to determine the natural or anthropogenic factors that best explain the occurrence and distribution of elevated concentrations of nitrate (or other constituents) in shallow groundwater. This information can be used by local land-use planners, ecologists, and managers to protect water supplies and identify land-use planning solutions and monitoring programs in vulnerable areas. © 2006 Springer-Verlag.
Predicting structures in the Zone of Avoidance
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.; Colless, Matthew; Kraan-Korteweg, Renée C.; Gottlöber, Stefan
2017-11-01
The Zone of Avoidance (ZOA), whose emptiness is an artefact of our Galaxy's dust, has been challenging observers as well as theorists for many years. Multiple attempts have been made on the observational side to map this region in order to better understand the local flows. On the theoretical side, however, this region is often simply statistically populated with structures, but no real attempt has been made to confront theoretical and observed matter distributions. This paper takes a step forward using constrained realizations (CRs) of the local Universe, shown to be perfect substitutes of local Universe-like simulations for smoothed high-density peak studies. Far from generating completely 'random' structures in the ZOA, the reconstruction technique arranges matter according to the surrounding environment of this region. More precisely, the mean distributions of structures in a series of constrained and random realizations (RRs) differ: while densities annihilate each other when averaging over 200 RRs, structures persist when summing 200 CRs. The probability distribution function of ZOA grid cells to be highly overdense is a Gaussian with a 15 per cent mean in the random case, while that of the constrained case exhibits large tails. This implies that areas with the largest probabilities most likely host a structure. Comparisons between these predictions and observations, like those of the Puppis 3 cluster, show a remarkable agreement and allow us to assert the presence of the Vela supercluster, recently highlighted by observations, at about 180 h^-1 Mpc, right behind the thickest dust layers of our Galaxy.
Li, Fei; Huang, Jinhui; Zeng, Guangming; Huang, Xiaolong; Liu, Wenchu; Wu, Haipeng; Yuan, Yujie; He, Xiaoxiao; Lai, Mingyong
2015-05-01
Spatial characteristics of the properties (dust organic material and pH), concentrations, and enrichment levels of toxic metals (Ni, Hg, Mn and As) in street dust from Xiandao District (Middle China) were investigated. A method incorporating receptor population density into noncarcinogenic health risk assessment, based on a local land-use map and geostatistics, was developed to identify priority pollutants and regions of concern. Mean enrichment factors of the studied metals decreased in the order Hg ≈ As > Mn > Ni. For noncarcinogenic effects, the exposure pathway that resulted in the highest levels of exposure risk for children and adults was ingestion, except for Hg (inhalation of vapors), followed by dermal contact and inhalation. Hazard indexes (HIs) for As, Hg, Mn, and Ni for children and adults revealed the following order: As > Hg > Mn > Ni. The mean HI for As exceeded the safe level (1) for children, and the maximum HI for Hg (0.99) approached the safe level most closely. Priority regions of concern were identified as region A at each residential population density and the areas of region B at high and moderate residential population density for As, and the high residential density area within region A for Hg. The developed method proved useful because it improves on previous studies by making the priority areas of environmental management spatially hierarchical, thus reducing the probability of excessive environmental management.
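The hazard-index arithmetic underlying such assessments follows the familiar hazard-quotient form (average daily dose divided by a reference dose, summed over exposure routes). The sketch below shows the ingestion route only; the exposure parameters and reference dose are placeholders, not the study's values.

```python
# Hedged sketch: hazard quotient (dose / reference dose) for dust ingestion, summed
# with the other routes into a hazard index (HI); HI > 1 flags potential risk.
# All parameter values below are illustrative placeholders.

def ingestion_dose(conc_mg_kg, ing_rate_mg_day, ef_days_yr, ed_yr, bw_kg, at_days):
    """Average daily dose from dust ingestion, mg/(kg*day)."""
    return conc_mg_kg * ing_rate_mg_day * 1e-6 * ef_days_yr * ed_yr / (bw_kg * at_days)

# Illustrative child-exposure parameters and an As concentration in street dust.
dose_ing = ingestion_dose(conc_mg_kg=25.0, ing_rate_mg_day=200.0,
                          ef_days_yr=180.0, ed_yr=6.0, bw_kg=15.0, at_days=6 * 365)

rfd_ingestion = 3.0e-4              # placeholder oral reference dose, mg/(kg*day)
hq_ingestion = dose_ing / rfd_ingestion

# Dermal and inhalation quotients would be computed analogously and added here.
hazard_index = hq_ingestion
print(f"HQ_ingestion = {hq_ingestion:.2f}; HI = {hazard_index:.2f} "
      f"({'above' if hazard_index > 1 else 'below'} the safe level of 1)")
```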
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are the statistical properties of the phase difference of oscillations in speckle fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)
NASA Astrophysics Data System (ADS)
Risky, Yanuar S.; Aulia, Yogi H.; Widayani, Prima
2017-12-01
Variations of spectral indices support rapid and accurate extraction of information such as built-up density. However, guidance on the exact spectral bands to use for built-up density extraction is lacking. This study explains and compares the capabilities of five spectral index variations for spatiotemporal built-up density mapping using Landsat-7 ETM+ and Landsat-8 OLI/TIRS imagery of Surakarta City in 2002 and 2015. The spectral indices used are three mid-infrared (MIR) based indices, the Normalized Difference Built-up Index (NDBI), the Urban Index (UI) and Built-up, and two visible-based indices, VrNIR-BI (visible red) and VgNIR-BI (visible green). Linear regression between ground reference samples taken from Google Earth imagery in 2002 and 2015 and the spectral indices was used to determine built-up density. A total of 27 ground samples were used for the model and 7 samples for the accuracy test. The built-up density maps are classified into 9 classes: unclassified, 0-12.5%, 12.5-25%, 25-37.5%, 37.5-50%, 50-62.5%, 62.5-75%, 75-87.5% and 87.5-100%. The accuracies of built-up density mapping in 2002 and 2015 were, respectively: VrNIR-BI (81.823% and 73.235%), VgNIR-BI (78.934% and 69.028%), NDBI (34.870% and 74.365%), UI (43.273% and 64.398%) and Built-up (59.755% and 72.664%). All spectral indices indicate that the built-up density of Surakarta City increased between 2002 and 2015. VgNIR-BI has better capabilities for built-up density mapping with Landsat-7 ETM+ and Landsat-8 OLI/TIRS.
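The index arithmetic can be sketched directly from band ratios. The band definitions below reflect the usual formulations of NDBI, VgNIR-BI and VrNIR-BI as commonly published and should be checked against the original index papers; the reflectance arrays and reference samples are synthetic.

```python
# Sketch of the spectral-index and regression steps; reflectances are synthetic
# stand-ins for Landsat surface reflectance, and the reference densities are random.
import numpy as np

rng = np.random.default_rng(7)
green, red, nir, swir1 = (rng.uniform(0.02, 0.45, size=(100, 100)) for _ in range(4))

def normalized_difference(a, b):
    return (a - b) / (a + b)

ndbi = normalized_difference(swir1, nir)      # Normalized Difference Built-up Index
vgnir_bi = normalized_difference(green, nir)  # VgNIR-BI (green vs. NIR)
vrnir_bi = normalized_difference(red, nir)    # VrNIR-BI (red vs. NIR)

# Built-up density is then estimated by regressing reference density samples
# (e.g. digitized from Google Earth) against the chosen index, as in the abstract.
samples_idx = rng.integers(0, 100, size=(27, 2))
reference_density = rng.uniform(0, 100, size=27)            # percent built-up, synthetic
x = vgnir_bi[samples_idx[:, 0], samples_idx[:, 1]]
slope, intercept = np.polyfit(x, reference_density, deg=1)
density_map = np.clip(slope * vgnir_bi + intercept, 0, 100)
print(f"density map range: {density_map.min():.1f}% to {density_map.max():.1f}%")
```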
Using global maps to predict the risk of dengue in Europe.
Rogers, David J; Suk, Jonathan E; Semenza, Jan C
2014-01-01
This article attempts to quantify the risk to Europe of dengue, following the arrival and spread there of one of dengue's vector species Aedes (Stegomyia) albopictus. A global risk map for dengue is presented, based on a global database of the occurrence of this disease, derived from electronic literature searches. Remotely sensed satellite data (from NASA's MODIS series), interpolated meteorological data, predicted distribution maps of dengue's two main vector species, Aedes aegypti and Aedes albopictus, a digital elevation surface and human population density data were all used as potential predictor variables in a non-linear discriminant analysis modelling framework. One hundred bootstrap models were produced by randomly sub-sampling three different training sets for dengue fever, severe dengue (i.e. dengue haemorrhagic fever, DHF) and all-dengue, and output predictions were averaged to produce a single global risk map for each type of dengue. This paper concentrates on the all-dengue models. Key predictor variables were various thermal data layers, including both day- and night-time Land Surface Temperature, human population density, and a variety of rainfall variables. The relative importance of each may be shown visually using rainbow files and quantitatively using a ranking system. Vegetation Index variables (a common proxy for humidity or saturation deficit) were rarely chosen in the models. The kappa index of agreement indicated an excellent (dengue haemorrhagic fever, Cohen's kappa=0.79 ± 0.028, AUC=0.96 ± 0.007) or good fit of the top ten models in each series to the data (Cohen's kappa=0.73 ± 0.018, AUC=0.94 ± 0.007 for dengue fever and 0.74 ± 0.017, AUC=0.95 ± 0.005 for all dengue). The global risk map predicts widespread dengue risk in SE Asia and India, in Central America and parts of coastal South America, but in relatively few regions of Africa. In many cases these are less extensive predictions than those of other published dengue risk maps and arise because of the key importance of high human population density for the all-dengue risk maps produced here. Three published dengue risk maps are compared using the Fleiss kappa index, and are shown to have only fair agreement globally (Fleiss kappa=0.377). Regionally the maps show greater (but still only moderate) agreement in SE Asia (Fleiss kappa=0.566), fair agreement in the Americas (Fleiss kappa=0.325) and only slight agreement in Africa (Fleiss kappa=0.095). The global dengue risk maps show that very few areas of rural Europe are presently suitable for dengue, but several major cities appear to be at some degree of risk, probably due to a combination of thermal conditions and high human population density, the top two variables in many models. Mahalanobis distance images were produced of Europe and the southern United States showing the distance in environmental rather than geographical space of each site from any site where dengue currently occurs. Parts of Europe are quite similar in Mahalanobis distance terms to parts of the southern United States, where dengue occurred in the recent past and which remain environmentally suitable for it. High standards of living rather than a changed environmental suitability keep dengue out of the USA. 
The threat of dengue to Europe at present is considered to be low but sufficiently uncertain to warrant monitoring in those areas of greatest predicted environmental suitability, especially in northern Italy and parts of Austria, Slovenia and Croatia, Bosnia and Herzegovina, Serbia and Montenegro, Albania, Greece, south-eastern France, Germany and Switzerland, and in smaller regions elsewhere. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Probabilistic, Seismically-Induced Landslide Hazard Mapping of Western Oregon
NASA Astrophysics Data System (ADS)
Olsen, M. J.; Sharifi Mood, M.; Gillins, D. T.; Mahalingam, R.
2015-12-01
Earthquake-induced landslides can generate significant damage within urban communities by damaging structures, obstructing lifeline connection routes and utilities, generating various environmental impacts, and possibly resulting in loss of life. Reliable hazard and risk maps are important to assist agencies in efficiently allocating and managing limited resources to prepare for such events. This research presents a new methodology for communicating site-specific landslide hazard assessments in a large-scale, regional map. Implementation of the proposed methodology results in seismically induced landslide hazard maps that depict the probabilities of exceeding landslide displacement thresholds (e.g. 0.1, 0.3, 1.0 and 10 meters). These maps integrate a variety of data sources including recent landslide inventories, LIDAR and photogrammetric topographic data, geologic maps, mapped NEHRP site classifications based on available shear wave velocity data in each geologic unit, and USGS probabilistic seismic hazard curves. Soil strength estimates were obtained by evaluating slopes present along landslide scarps and deposits for major geologic units. Code was then developed to integrate these layers and perform a rigid sliding-block analysis to determine the amount and associated probabilities of displacement based on each bin of peak ground acceleration in the seismic hazard curve at each pixel. The methodology was applied to western Oregon, which contains weak, weathered, and often wet soils on steep slopes. Such conditions present a high landslide hazard even without seismic events. A series of landslide hazard maps highlighting the probabilities of exceeding the aforementioned thresholds were generated for the study area. These output maps were then utilized in a performance-based design framework, enabling them to be analyzed in conjunction with other hazards for fully probabilistic hazard evaluation and risk assessment.
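The displacement-exceedance calculation described above can be sketched as follows. The hazard curve, the yield acceleration and the lognormal displacement model are placeholder assumptions for illustration, not the authors' calibrated relations.

    import numpy as np
    from scipy.stats import lognorm

    # Toy PGA hazard curve: annual rate of exceeding each peak ground acceleration (g).
    pga_levels = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.8])
    annual_exceedance = np.array([2e-2, 8e-3, 3e-3, 1.2e-3, 3e-4, 8e-5])
    bin_rates = -np.diff(np.append(annual_exceedance, 0.0))   # occurrence rate per PGA bin

    def p_exceed_displacement(pga, ky, threshold_m):
        """Placeholder sliding-block displacement model (not the authors' model):
        lognormal displacement whose median grows with the PGA/ky ratio."""
        if pga <= ky:
            return 0.0
        median = 0.05 * (pga / ky - 1.0) ** 2          # metres, illustrative only
        return float(lognorm.sf(threshold_m, s=0.8, scale=median))

    ky = 0.15        # yield acceleration of a hypothetical pixel (g)
    threshold = 0.1  # displacement threshold (m)
    rate = sum(r * p_exceed_displacement(a, ky, threshold)
               for a, r in zip(pga_levels, bin_rates))
    print(f"annual rate of exceeding {threshold} m displacement: {rate:.2e}")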
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
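A minimal one-dimensional illustration of the Gaussianization idea, using SciPy's maximum-likelihood Box-Cox fit on a synthetic skewed sample; the paper's method is multivariate and more general than this sketch.

    import numpy as np
    from scipy import stats

    # Toy one-dimensional "posterior" sample with a strong skew (stand-in for an
    # MCMC chain of a single parameter).
    rng = np.random.default_rng(1)
    sample = rng.gamma(shape=2.0, scale=1.5, size=20000)

    # Maximum-likelihood Box-Cox transformation towards Gaussianity.
    transformed, lam = stats.boxcox(sample)
    print(f"fitted Box-Cox lambda = {lam:.3f}")

    # The Gaussianized sample can be summarised by its mean and standard deviation;
    # mapping back through the inverse transform approximately recovers the posterior.
    mu, sigma = transformed.mean(), transformed.std()
    print(f"Gaussian summary: mu={mu:.3f}, sigma={sigma:.3f}")
    print("skewness before/after:",
          round(float(stats.skew(sample)), 3), round(float(stats.skew(transformed)), 3))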
USDA-ARS?s Scientific Manuscript database
Fragaria iinumae is recognized as an ancestor of the octoploid strawberry species, including the cultivated strawberry, Fragaria ×ananassa. Here we report the construction of the first high density linkage map for F. iinumae. The map is based on two high-throughput techniques of single nucleotide p...
Demonstration of a Strategy to Perform Two-Dimensional Diode Laser Tomography
2008-03-01
...training set allows interpolation between beam paths, resulting in temperature and density maps. Finally, the TDLAS temperature and density maps are...
Biometric recognition via fixation density maps
NASA Astrophysics Data System (ADS)
Rigas, Ioannis; Komogortsev, Oleg V.
2014-05-01
This work introduces and evaluates a novel eye movement-driven biometric approach that employs eye fixation density maps for person identification. The proposed feature offers a dynamic representation of the biometric identity, storing rich information regarding the behavioral and physical eye movement characteristics of the individuals. The innate ability of fixation density maps to capture the spatial layout of the eye movements, in conjunction with their probabilistic nature, makes them a particularly suitable option as an eye movement biometric trait in cases when free-viewing stimuli are presented. In order to demonstrate the effectiveness of the proposed approach, the method is evaluated on three different datasets containing a wide gamut of stimulus types, such as static images, video and text segments. The obtained results indicate a minimum EER (Equal Error Rate) of 18.3%, revealing the potential of fixation density maps as an enhancing biometric cue during identification scenarios in dynamic visual environments.
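A small sketch of how a fixation density map might be built and compared in Python; the synthetic fixations, grid size and histogram-intersection similarity are illustrative assumptions, not the authors' feature extraction or matching pipeline.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fixation_density_map(fixations, shape=(48, 64), sigma=2.0):
        """Accumulate fixation points into a 2-D histogram, smooth it with a
        Gaussian kernel and normalise it to a probability map."""
        h, _, _ = np.histogram2d(fixations[:, 1], fixations[:, 0],
                                 bins=shape, range=[[0, 1], [0, 1]])
        h = gaussian_filter(h, sigma=sigma)
        return h / h.sum()

    rng = np.random.default_rng(2)
    enrol = rng.beta(2, 2, size=(300, 2))   # (x, y) fixations in normalised coordinates
    probe = rng.beta(2, 2, size=(300, 2))

    m1, m2 = fixation_density_map(enrol), fixation_density_map(probe)
    # A simple histogram-intersection similarity between the two maps.
    print("similarity:", np.minimum(m1, m2).sum())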
Ethical questions in landslide management and risk reduction in Norway
NASA Astrophysics Data System (ADS)
Taurisano, A.; Lyche, E.; Thakur, V.; Wiig, T.; Øvrelid, K.; Devoli, G.
2012-04-01
The loss of lives caused by landslides in Norway is smaller than in other countries due to the low population density in exposed areas. However, annual economic losses from damage to properties and infrastructure are vast. Yet nationally coordinated efforts to manage and reduce landslide and snow avalanche risk are a recent challenge, having started only in the last decade. Since 2009, this has been a task of the Norwegian Water Resources and Energy Directorate (NVE) under the Ministry of Petroleum and Energy. Ongoing work includes collection of landslide data, production of susceptibility and hazard maps, planning of mitigation measures along with monitoring and early warning systems, assistance to land-use planning, providing expertise in emergencies and disseminating information to the public. These activities are realized in collaboration with the Norwegian Geological Survey (NGU), the Meteorological Institute, the Road and Railway authorities, universities and private consulting companies. As the total need for risk-mitigating initiatives is far larger than the annual budget, priority assessment is crucial. This brings about a number of ethical questions. 1. Susceptibility maps have been produced for the whole country and provide a first indication of areas with potential landslide or snow avalanche hazard, i.e. areas where special attention and expert assessments are needed before development. Areas where no potential hazard is shown can in practice be developed without further studies, which calls for relatively conservative susceptibility maps. However, conservative maps are problematic as they too often increase both the cost and duration of building projects beyond what is reasonable. 2. Areas where hazard maps or risk mitigation initiatives will be funded are chosen by means of cost-benefit analyses, which are often uncertain. How can the benefits be estimated if the real probability of damage can only be judged subjectively rather than calculated? As a result, we may use large amounts of money to mitigate the risk for a few houses with a yearly probability of damage of 1/300 and not do anything for an isolated farm with a yearly probability of damage larger than 1/50. 3. Is it ethical to stop a plan to construct a pedestrian and cycling way, or a new road crossing, exposed to potential landslide hazard, when the delay or disapproval of the plan itself involves a more severe consequence than the actual landslide hazard? 4. Most fatalities from natural hazards in Norway happen because of snow avalanches during recreational activities. On the one hand, this suggests that one should use a large share of the annual budget to prevent this type of accident, where the most lives could be saved. On the other hand, one could argue that voluntary exposure to hazard shouldn't be given too much priority at the expense of buildings and public infrastructure. 5. More generally, how ethical is it to use large amounts of money to manage hazards that have a remote probability of occurring or that will not cause human losses or property damage, instead of, for example, addressing other societal needs?
Ellefsen, Karl J.
2017-06-27
MapMark4 is a software package that implements the probability calculations in three-part mineral resource assessments. Functions within the software package are written in the R statistical programming language. These functions, their documentation, and a copy of this user’s guide are bundled together in R’s unit of shareable code, which is called a “package.” This user’s guide includes step-by-step instructions showing how the functions are used to carry out the probability calculations. The calculations are demonstrated using test data, which are included in the package.
NASA Astrophysics Data System (ADS)
Meijs, M.; Debats, O.; Huisman, H.
2015-03-01
In prostate cancer, the detection of metastatic lymph nodes indicates progression from localized disease to metastasized cancer. The detection of positive lymph nodes is, however, a complex and time consuming task for experienced radiologists. Assistance of a two-stage Computer-Aided Detection (CAD) system in MR Lymphography (MRL) is not yet feasible due to the large number of false positives in the first stage of the system. By introducing a multi-structure, multi-atlas segmentation, using an affine transformation followed by a B-spline transformation for registration, the organ location is given by a mean density probability map. The atlas segmentation is semi-automatically drawn with ITK-SNAP, using Active Contour Segmentation. Each anatomic structure is identified by a label number. Registration is performed using Elastix, using Mutual Information and an Adaptive Stochastic Gradient optimization. The dataset consists of the MRL scans of ten patients, with lymph nodes manually annotated in consensus by two expert readers. The feature map of the CAD system consists of the Multi-Atlas and various other features (e.g. Normalized Intensity and multi-scale Blobness). The voxel-based Gentleboost classifier is evaluated using ROC analysis with cross validation. We show in a set of 10 studies that adding multi-structure, multi-atlas anatomical structure likelihood features improves the quality of the lymph node voxel likelihood map. Multiple structure anatomy maps may thus make MRL CAD more feasible.
Cerf, O; Griffiths, M; Aziza, F
2007-01-01
Conflicting laboratory-acquired data have been published about the heat resistance of Mycobacterium avium subsp. paratuberculosis (MAP), the cause of the deadly paratuberculosis (Johne's disease) of ruminants. Results of surveys of the presence of MAP in industrially pasteurized milk from several countries are also conflicting. This paper critically reviews the available data on the heat resistance of MAP and, based on these studies, a quantitative model describing the probability of finding MAP in pasteurized milk under the conditions prevailing in industrialized countries was derived using Monte Carlo simulation. The simulation assesses the probability of detecting MAP in 50-mL samples of pasteurized milk as lower than 1%. Hypotheses are presented to explain why higher frequencies were found by some authors; these include improper pasteurization and cross-contamination in the analytical laboratory. Hypotheses implicating a high rate of inter- and intraherd prevalence of paratuberculosis or heavy contamination of raw milk by feces were rejected.
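A hedged Monte Carlo sketch of the kind of calculation described, with assumed distributions for raw-milk contamination and pasteurization log-reduction; none of the parameter values below come from the review.

    import numpy as np

    rng = np.random.default_rng(3)
    n_iter = 100_000

    # Raw-milk MAP concentration (CFU/mL), lognormally distributed (assumption).
    raw_conc = rng.lognormal(mean=-2.0, sigma=1.5, size=n_iter)
    # Pasteurization log10 reduction, normally distributed around 5 logs (assumption).
    log_reduction = rng.normal(loc=5.0, scale=1.0, size=n_iter)

    post_conc = raw_conc * 10.0 ** (-log_reduction)        # CFU/mL after treatment
    expected_cfu = post_conc * 50.0                        # expected CFU per 50-mL sample
    # Detection if at least one organism ends up in the sample (Poisson sampling).
    p_detect = 1.0 - np.exp(-expected_cfu)
    print(f"estimated detection probability ~ {p_detect.mean():.4f}")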
Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A
2011-11-29
Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. The data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios.
Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to counting neurons, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome. PMID:24747568
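The authors provide an R-script; the contrast between interpolation and smoothing can be illustrated roughly as follows in Python, with synthetic counts and SciPy's thin-plate-spline interpolator plus a Gaussian smoother standing in for the four methods compared.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.ndimage import gaussian_filter

    # Sparse, noisy cell-count samples across a nominal retina (toy data).
    rng = np.random.default_rng(4)
    pts = rng.uniform(0, 1, size=(120, 2))
    counts = 500 + 300 * pts[:, 0] + rng.normal(0, 60, size=120)   # shallow gradient + noise

    # Regular grid on which to build the topographic map.
    gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    # Thin-plate-spline interpolation (respects the observed data, keeps the noise)...
    interp = RBFInterpolator(pts, counts, kernel='thin_plate_spline')(grid).reshape(50, 50)
    # ...versus a smoothed map that suppresses sampling noise.
    smooth = gaussian_filter(interp, sigma=3)
    print(interp.std(), smooth.std())     # smoothing reduces the map's variability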
NASA Astrophysics Data System (ADS)
Litvaitis, John A.; Reed, Gregory C.; Carroll, Rory P.; Litvaitis, Marian K.; Tash, Jeffrey; Mahard, Tyler; Broman, Derek J. A.; Callahan, Catherine; Ellingwood, Mark
2015-06-01
We are using bobcats (Lynx rufus) as a model organism to examine how roads affect the abundance, distribution, and genetic structure of a wide-ranging carnivore. First, we compared the distribution of bobcat-vehicle collisions to road density and then estimated collision probabilities for specific landscapes using a moving window with road-specific traffic volume. Next, we obtained incidental observations of bobcats from the public, camera-trap detections, and locations of bobcats equipped with GPS collars to examine habitat selection. These data were used to generate a cost-surface map to investigate potential barrier effects of roads. Finally, we have begun an examination of genetic structure of bobcat populations in relation to major road networks. Distribution of vehicle-killed bobcats was correlated with road density, especially state and interstate highways. Collision models suggested that some regions may function as demographic sinks. Simulated movements in the context of the cost-surface map indicated that some major roads may be barriers. These patterns were supported by the genetic structure of bobcats. The sharpest divisions among genetically distinct demes occurred along natural barriers (mountains and large lakes) and in road-dense regions. In conclusion, our study has demonstrated the utility of using bobcats as a model organism to understand the variety of threats that roads pose to a wide-ranging species. Bobcats may also be useful as one of a group of focal species while developing approaches to maintain existing connectivity or mitigate the negative effects of roads.
Xia, Zhiqiang; Zhang, Shengkui; Wen, Mingfu; Lu, Cheng; Sun, Yufang; Zou, Meiling; Wang, Wenquan
2018-01-01
Jatropha curcas L. is an important biofuel plant, and the demand for higher-yielding material is rapidly increasing. However, genetic analysis of Jatropha and molecular breeding for higher yield have been hampered by the limited number of molecular markers available. An ultrahigh-density linkage map for a Jatropha mapping population of 153 individuals was constructed and covered 1380.58 cM of the Jatropha genome, with an average marker density of 0.403 cM. The genetic linkage map consisted of 3422 SNP and indel markers, which clustered into 11 linkage groups. With this map, 13 repeatable QTLs (reQTLs) for fruit yield traits were identified. Ten reQTLs, qNF-1, qNF-2a, qNF-2b, qNF-2c, qNF-3, qNF-4, qNF-6, qNF-7a, qNF-7b and qNF-8, that control the number of fruits (NF) mapped to LGs 1, 2, 3, 4, 6, 7 and 8, whereas three reQTLs, qTWF-1, qTWF-2 and qTWF-3, that control the total weight of fruits (TWF) mapped to LGs 1, 2 and 3, respectively. Interestingly, two candidate genes that may regulate Jatropha fruit yield were identified. We also identified three pleiotropic reQTL pairs associated with both the NF and TWF traits. This study is the first to report the construction of an ultrahigh-density Jatropha genetic linkage map, and the markers used in this study showed great potential for QTL mapping. Thirteen fruit-yield reQTLs and two important candidate genes were identified based on this linkage map. This genetic linkage map will be a useful tool for the localization of other economically important QTLs and candidate genes for Jatropha.
Özarslan, Evren; Koay, Cheng Guan; Shepherd, Timothy M; Komlosh, Michal E; İrfanoğlu, M Okan; Pierpaoli, Carlo; Basser, Peter J
2013-09-01
Diffusion-weighted magnetic resonance (MR) signals reflect information about underlying tissue microstructure and cytoarchitecture. We propose a quantitative, efficient, and robust mathematical and physical framework for representing diffusion-weighted MR imaging (MRI) data obtained in "q-space," and the corresponding "mean apparent propagator (MAP)" describing molecular displacements in "r-space." We also define and map novel quantitative descriptors of diffusion that can be computed robustly using this MAP-MRI framework. We describe an efficient analytical representation of the three-dimensional q-space MR signal in a series expansion of basis functions that accurately describes diffusion in many complex geometries. The lowest order term in this expansion contains a diffusion tensor that characterizes the Gaussian displacement distribution, equivalent to diffusion tensor MRI (DTI). Inclusion of higher order terms enables the reconstruction of the true average propagator whose projection onto the unit "displacement" sphere provides an orientational distribution function (ODF) that contains only the orientational dependence of the diffusion process. The representation characterizes novel features of diffusion anisotropy and the non-Gaussian character of the three-dimensional diffusion process. Other important measures this representation provides include the return-to-the-origin probability (RTOP) and its variants for diffusion in one and two dimensions: the return-to-the-plane probability (RTPP) and the return-to-the-axis probability (RTAP), respectively. These zero net displacement probabilities measure the mean compartment (pore) volume and cross-sectional area in distributions of isolated pores irrespective of the pore shape. MAP-MRI represents a new comprehensive framework to model the three-dimensional q-space signal and transform it into diffusion propagators. Experiments on an excised marmoset brain specimen demonstrate that MAP-MRI provides several novel, quantifiable parameters that capture previously obscured intrinsic features of nervous tissue microstructure. This should prove helpful for investigating the functional organization of normal and pathologic nervous tissue. Copyright © 2013 Elsevier Inc. All rights reserved.
2012-01-01
Background High-density linkage maps facilitate the mapping of target genes and the construction of partial linkage maps around target loci to develop markers for marker-assisted selection (MAS). MAS is quite challenging in conifers because of their large, complex, and poorly-characterized genomes. Our goal was to construct a high-density linkage map to facilitate the identification of markers that are tightly linked to a major recessive male-sterile gene (ms1) for MAS in C. japonica, a species that is important in Japanese afforestation but which causes serious social pollinosis problems. Results We constructed a high-density saturated genetic linkage map for C. japonica using expressed sequence-derived co-dominant single nucleotide polymorphism (SNP) markers, most of which were genotyped using the GoldenGate genotyping assay. A total of 1261 markers were assigned to 11 linkage groups with an observed map length of 1405.2 cM and a mean distance between two adjacent markers of 1.1 cM; the number of linkage groups matched the basic chromosome number in C. japonica. Using this map, we located ms1 on the 9th linkage group and constructed a partial linkage map around the ms1 locus. This enabled us to identify a marker (hrmSNP970_sf) that is closely linked to the ms1 gene, being separated from it by only 0.5 cM. Conclusions Using the high-density map, we located the ms1 gene on the 9th linkage group and constructed a partial linkage map around the ms1 locus. The map distance between the ms1 gene and the tightly linked marker was only 0.5 cM. The identification of markers that are tightly linked to the ms1 gene will facilitate the early selection of male-sterile trees, which should expedite C. japonica breeding programs aimed at alleviating pollinosis problems without harming productivity. PMID:22424262
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source (e.g. galaxies, quasars or stars) and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
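A small sketch of how a Gaussian-mixture redshift PDF can be scored with the CRPS and PIT criteria mentioned above; the mixture parameters and the spectroscopic redshift are toy values, not DCMDN output.

    import numpy as np
    from scipy.stats import norm

    # One object's predicted redshift PDF as a three-component Gaussian mixture.
    weights = np.array([0.6, 0.3, 0.1])
    means = np.array([0.45, 0.52, 0.70])
    sigmas = np.array([0.03, 0.05, 0.08])
    z_true = 0.47

    z = np.linspace(0.0, 1.2, 2001)
    cdf = np.sum(weights[:, None] * norm.cdf(z, means[:, None], sigmas[:, None]), axis=0)

    # Continuous ranked probability score: integral of (F(z) - 1{z >= z_true})^2 dz.
    dz = z[1] - z[0]
    crps = np.sum((cdf - (z >= z_true)) ** 2) * dz
    # Probability integral transform value for this object: PIT = F(z_true).
    pit = np.interp(z_true, z, cdf)
    print(f"CRPS = {crps:.4f}, PIT = {pit:.3f}")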
Verification of the WFAS Lightning Efficiency Map
Paul Sopko; Don Latham; Isaac Grenfell
2007-01-01
A Lightning Ignition Efficiency map was added to the suite of daily maps offered by the Wildland Fire Assessment System (WFAS) in 1999. This map computes a lightning probability of ignition (POI) based on the estimated fuel type, fuel depth, and 100-hour fuel moisture interpolated from the Remote Automated Weather Station (RAWS) network. An attempt to verify the...
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
Mathematics of gravitational lensing: multiple imaging and magnification
NASA Astrophysics Data System (ADS)
Petters, A. O.; Werner, M. C.
2010-09-01
The mathematical theory of gravitational lensing has revealed many generic and global properties. Beginning with multiple imaging, we review Morse-theoretic image counting formulas and lower bound results, and complex-algebraic upper bounds in the case of single and multiple lens planes. We discuss recent advances in the mathematics of stochastic lensing, discussing a general formula for the global expected number of minimum lensed images as well as asymptotic formulas for the probability densities of the microlensing random time delay functions, random lensing maps, and random shear, and an asymptotic expression for the global expected number of micro-minima. Multiple imaging in optical geometry and a spacetime setting are treated. We review global magnification relation results for model-dependent scenarios and cover recent developments on universal local magnification relations for higher order caustics.
Investigation of safety analysis methods using computer vision techniques
NASA Astrophysics Data System (ADS)
Shirazi, Mohammad Shokrolah; Morris, Brendan Tran
2017-09-01
This work investigates safety analysis methods using computer vision techniques. A vision-based tracking system is developed to provide the trajectories of road users, including vehicles and pedestrians. Safety analysis methods are developed to estimate time to collision (TTC) and post-encroachment time (PET), two important safety measures. Corresponding algorithms are presented, and their advantages and drawbacks are shown through their success in capturing conflict events in real time. The performance of the tracking system is evaluated first, and probability density estimates of TTC and PET are shown for 1 h of monitoring of a Las Vegas intersection. Finally, the idea of an intersection safety map is introduced, and TTC values for two different intersections are estimated for one day, from 8:00 a.m. to 6:00 p.m.
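A simplified sketch of a constant-velocity TTC computation of the kind such a system might use; the geometry, the vehicle length and the example values are illustrative only, since the paper estimates TTC and PET from tracked trajectories.

    import numpy as np

    def time_to_collision(p1, v1, p2, v2, length=4.5):
        """Simplified time-to-collision for two road users: time until the gap
        between them closes, assuming constant velocities (illustrative only)."""
        rel_pos = np.asarray(p2, float) - np.asarray(p1, float)
        rel_vel = np.asarray(v2, float) - np.asarray(v1, float)
        closing_speed = -np.dot(rel_pos, rel_vel) / np.linalg.norm(rel_pos)
        if closing_speed <= 0:
            return np.inf                      # not converging, no conflict
        return (np.linalg.norm(rel_pos) - length) / closing_speed

    # Follower at the origin doing 15 m/s, leader 30 m ahead doing 10 m/s.
    print(time_to_collision([0, 0], [15, 0], [30, 0], [10, 0]))   # ~5.1 s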
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
Guo, Yinshan; Xing, Huiyang; Zhao, Yuhui; Liu, Zhendong; Li, Kun; Guo, Xiuwu
2017-01-01
Genetic maps are important tools in plant genomics and breeding. We report a large-scale discovery of single nucleotide polymorphisms (SNPs) using the specific length amplified fragment sequencing (SLAF-seq) technique for the construction of high-density genetic maps for two elite wine grape cultivars, ‘Chardonnay’ and ‘Beibinghong’, and their 130 F1 plants. A total of 372.53 M paired-end reads were obtained after preprocessing. The average sequencing depth was 33.81 for ‘Chardonnay’ (the female parent), 48.20 for ‘Beibinghong’ (the male parent), and 12.66 for the F1 offspring. We detected 202,349 high-quality SLAFs, of which 144,972 were polymorphic; 10,042 SNPs were used to construct a genetic map that spanned 1,969.95 cM, with an average genetic distance of 0.23 cM between adjacent markers. This genetic map contains the largest number of molecular markers of any grape map reported to date. We thus demonstrate that SLAF-seq is a promising strategy for the construction of high-density genetic maps; the map reported here is a good potential resource for QTL mapping of genes linked to major economic and agronomic traits, map-based cloning, and marker-assisted selection of grape. PMID:28746364
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. A total of 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped onto a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four-dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped onto individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four-dimensional probability distribution function was validated by comparing it to the two-dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four-dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web service on a server using the Python Django framework. The server performs the sampling and the mapping and returns the results in JavaScript Object Notation (JSON) format.
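A simplified two-dimensional sketch of sampling lesion positions from a discrete probability map; the paper uses a four-dimensional distribution across projections and piecewise affine mappings, so the grid, density and function names here are illustrative assumptions.

    import numpy as np

    def sample_from_probability_map(pmap, n, rng):
        """Draw n (row, col) positions from a discrete 2-D probability map by
        sampling the flattened, normalised distribution."""
        flat = pmap.ravel() / pmap.sum()
        idx = rng.choice(flat.size, size=n, p=flat)
        return np.column_stack(np.unravel_index(idx, pmap.shape))

    rng = np.random.default_rng(5)
    # Toy lesion-location density on a 64x64 standardised breast outline.
    y, x = np.mgrid[0:64, 0:64]
    pmap = np.exp(-((x - 40) ** 2 + (y - 25) ** 2) / (2 * 8.0 ** 2))
    positions = sample_from_probability_map(pmap, n=5, rng=rng)
    print(positions)   # sampled positions, to be mapped back via affine transforms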
2012-01-01
Background Most modern citrus cultivars have an interspecific origin. As a foundational step towards deciphering the interspecific genome structures, a reference whole genome sequence was produced by the International Citrus Genome Consortium from a haploid derived from Clementine mandarin. The availability of a saturated genetic map of Clementine was identified as an essential prerequisite to assist the whole genome sequence assembly. Clementine is believed to be a ‘Mediterranean’ mandarin × sweet orange hybrid, and sweet orange likely arose from interspecific hybridizations between mandarin and pummelo gene pools. The primary goals of the present study were to establish a Clementine reference map using codominant markers, and to perform comparative mapping of pummelo, sweet orange, and Clementine. Results Five parental genetic maps were established from three segregating populations, which were genotyped with Single Nucleotide Polymorphism (SNP), Simple Sequence Repeats (SSR) and Insertion-Deletion (Indel) markers. An initial medium density reference map (961 markers for 1084.1 cM) of the Clementine was established by combining male and female Clementine segregation data. This Clementine map was compared with two pummelo maps and a sweet orange map. The linear order of markers was highly conserved in the different species. However, significant differences in map size were observed, which suggests a variation in the recombination rates. Skewed segregations were much higher in the male than female Clementine mapping data. The mapping data confirmed that Clementine arose from hybridization between ‘Mediterranean’ mandarin and sweet orange. The results identified nine recombination break points for the sweet orange gamete that contributed to the Clementine genome. Conclusions A reference genetic map of citrus, used to facilitate the chromosome assembly of the first citrus reference genome sequence, was established. The high conservation of marker order observed at the interspecific level should allow reasonable inferences of most citrus genome sequences by mapping next-generation sequencing (NGS) data in the reference genome sequence. The genome of the haploid Clementine used to establish the citrus reference genome sequence appears to have been inherited primarily from the ‘Mediterranean’ mandarin. The high frequency of skewed allelic segregations in the male Clementine data underline the probable extent of deviation from Mendelian segregation for characters controlled by heterozygous loci in male parents. PMID:23126659
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
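The core quantity that such density-based algorithms threshold, the probability that the distance between two uncertain objects is at most a boundary value, can be sketched by Monte Carlo as below; the paper computes it more accurately, and the Gaussian uncertainty model and all values here are assumptions.

    import numpy as np

    def p_distance_leq(mu_a, cov_a, mu_b, cov_b, eps, n=20000, seed=6):
        """Monte Carlo estimate of P(||A - B|| <= eps) for two uncertain objects
        modelled as independent Gaussians."""
        rng = np.random.default_rng(seed)
        a = rng.multivariate_normal(mu_a, cov_a, size=n)
        b = rng.multivariate_normal(mu_b, cov_b, size=n)
        return float(np.mean(np.linalg.norm(a - b, axis=1) <= eps))

    print(p_distance_leq([0, 0], np.eye(2) * 0.2,
                         [1, 0], np.eye(2) * 0.2, eps=1.0))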
NASA Astrophysics Data System (ADS)
Leherte, L.; Allen, F. H.; Vercauteren, D. P.
1995-04-01
A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to be a good representation of the electron density function at various resolutions; while at the atomic level the ellipsoid method gives results which are in close agreement with those from the conventional, spherical, van der Waals approach.
NASA Astrophysics Data System (ADS)
Leherte, Laurence; Allen, Frank H.
1994-06-01
A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.
USDA-ARS?s Scientific Manuscript database
Mapping and identification of quantitative trait loci (QTLs) are important for efficient marker-assisted breeding. Diseases such as leaf spots and Tomato spotted wilt virus (TSWV) cause significant loses to peanut growers. The U.S. Peanut Genome Initiative (PGI) was launched in 2004, and expanded to...
Serra-Sogas, Norma; O'Hara, Patrick D; Canessa, Rosaline; Keller, Peter; Pelot, Ronald
2008-05-01
This paper examines the use of exploratory spatial analysis for identifying hotspots of shipping-based oil pollution in the Pacific Region of Canada's Exclusive Economic Zone. It makes use of data collected from fiscal years 1997/1998 to 2005/2006 by the National Aerial Surveillance Program, the primary tool for monitoring and enforcing the provisions imposed by MARPOL 73/78. First, we present oil spill data as points in a "dot map" relative to coastlines, harbors and the aerial surveillance distribution. Then, we explore the intensity of oil spill events using the Quadrat Count method and Kernel Density Estimation with both fixed and adaptive bandwidths. We found that oil spill hotspots were more clearly defined using Kernel Density Estimation with an adaptive bandwidth, probably because of the "clustered" distribution of oil spill occurrences. Finally, we discuss the importance of standardizing oil spill data by controlling for surveillance effort to provide a better understanding of the distribution of illegal oil spills, and how these results can ultimately benefit a monitoring program.
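A rough sketch of the fixed- versus adaptive-bandwidth contrast for a clustered point pattern; the spill locations are synthetic and the adaptive rule is an Abramson-style local factor, which is not necessarily the rule used in the paper.

    import numpy as np
    from scipy.stats import gaussian_kde

    # Toy spill pattern: a dense cluster near a hypothetical port plus scattered spills.
    rng = np.random.default_rng(9)
    pts = np.vstack([rng.normal([2.0, 2.0], 0.1, size=(150, 2)),
                     rng.uniform(0.0, 10.0, size=(50, 2))])

    # Fixed-bandwidth KDE (SciPy's rule-of-thumb bandwidth).
    fixed = gaussian_kde(pts.T)

    # Adaptive bandwidths: shrink where the pilot density is high, sharpening hotspots.
    pilot = fixed(pts.T)
    h0 = 0.5                                              # global bandwidth (assumed)
    h = h0 * (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5

    def adaptive_density(query):
        """Isotropic Gaussian adaptive KDE evaluated at query points of shape (m, 2)."""
        d2 = ((query[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)   # (m, n)
        return np.mean(np.exp(-0.5 * d2 / h ** 2) / (2 * np.pi * h ** 2), axis=1)

    query = np.array([[2.0, 2.0], [5.0, 5.0]])
    print("fixed   :", fixed(query.T))
    print("adaptive:", adaptive_density(query))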
Abiram, Angamuthu; Kolandaivel, Ponmalai
2010-02-01
The switching propensity and maximum probability of occurrence of the side chain imidazole group in the dipeptide cyclo(His-Pro) (CHP) were studied by applying molecular dynamics simulations and density functional theory. The atomistic behaviour of CHP with the neurotoxins glutamate (E) and paraquat (Pq) was also explored; E and Pq engage in hydrogen bond formation with the diketopiperazine (DKP) ring of the dipeptide, with which E shows a profound interaction, as confirmed further by NH and CO stretching vibrational frequencies. The effect of CHP was found to be greater on E than on the Pq neurotoxin. A ring puckering study indicated a twist-boat conformation for the six-membered DKP ring. Molecular electrostatic potential (MESP) mapping was also used to explore the hydrogen bond interactions prevailing between the neurotoxins and the DKP ring. The results of this study reveal that the DKP ring of the dipeptide CHP can be expected to play a significant role in reducing effects such as oxidative stress and cell death caused by neurotoxins.
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
Risk-targeted maps for Romania
NASA Astrophysics Data System (ADS)
Vacareanu, Radu; Pavel, Florin; Craciun, Ionut; Coliba, Veronica; Arion, Cristian; Aldea, Alexandru; Neagu, Cristian
2018-03-01
Romania has one of the highest seismic hazard levels in Europe. The seismic hazard is due to a combination of local crustal seismic sources, situated mainly in the western part of the country, and the Vrancea intermediate-depth seismic source, which can be found at the bend of the Carpathian Mountains. Recent seismic hazard studies have shown that there are consistent differences between the slopes of the seismic hazard curves for sites situated in the fore-arc and back-arc of the Carpathian Mountains. Consequently, in this study we extend this finding to the evaluation of the probability of collapse of buildings and finally to the development of uniform risk-targeted maps. The main advantage of the uniform-risk approach is that the target probability of collapse will be uniform throughout the country. Finally, the results obtained are discussed in the light of a recent study with the same focus performed at the European level using the hazard data from the SHARE project. The analyses performed in this study have pointed to a dominant influence of the quantile of peak ground acceleration used for anchoring the fragility function. This parameter basically alters the shape of the risk-targeted maps, shifting the areas with higher collapse probabilities from eastern Romania to western Romania as its exceedance probability increases. Consequently, a uniform procedure for deriving risk-targeted maps appears more than necessary.
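A toy version of the risk integral behind risk-targeted maps, with an assumed hazard-curve slope and a fragility function anchored at an assumed quantile (the sensitivity discussed above); none of the numbers below are from the study.

    import numpy as np
    from scipy.stats import norm

    # Toy hazard curve: annual frequency of exceedance (AFE) as a function of PGA (g).
    pga = np.linspace(0.05, 1.5, 300)
    afe = 1e-2 * (pga / 0.05) ** -2.2     # the slope of this curve drives the result

    # Lognormal collapse fragility anchored so that P(collapse) = 10% at the PGA
    # with a 475-year mean return period; beta and the anchoring quantile are assumed.
    beta = 0.6
    a475 = np.interp(1.0 / 475.0, afe[::-1], pga[::-1])
    theta = a475 * np.exp(-beta * norm.ppf(0.10))          # fragility median
    p_collapse_given_a = norm.cdf(np.log(pga / theta) / beta)

    # Annual collapse rate: integrate the fragility against the hazard-curve derivative.
    rate_density = -np.gradient(afe, pga)
    annual_rate = np.sum(p_collapse_given_a * rate_density) * (pga[1] - pga[0])
    print(f"annual collapse rate ~ {annual_rate:.2e}, "
          f"50-year collapse probability ~ {1 - np.exp(-50 * annual_rate):.4f}")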
An Ultra-High-Density, Transcript-Based, Genetic Map of Lettuce
Truco, Maria José; Ashrafi, Hamid; Kozik, Alexander; van Leeuwen, Hans; Bowers, John; Wo, Sebastian Reyes Chin; Stoffel, Kevin; Xu, Huaqin; Hill, Theresa; Van Deynze, Allen; Michelmore, Richard W.
2013-01-01
We have generated an ultra-high-density genetic map for lettuce, an economically important member of the Compositae, consisting of 12,842 unigenes (13,943 markers) mapped in 3696 genetic bins distributed over nine chromosomal linkage groups. Genomic DNA was hybridized to a custom Affymetrix oligonucleotide array containing 6.4 million features representing 35,628 unigenes of Lactuca spp. Segregation of single-position polymorphisms was analyzed using 213 F7:8 recombinant inbred lines that had been generated by crossing cultivated Lactuca sativa cv. Salinas and L. serriola acc. US96UC23, the wild progenitor species of L. sativa. The high level of replication of each allele in the recombinant inbred lines was exploited to identify single-position polymorphisms that were assigned to parental haplotypes. Marker information has been made available using GBrowse to facilitate access to the map. This map has been anchored to the previously published integrated map of lettuce providing candidate genes for multiple phenotypes. The high density of markers achieved in this ultradense map allowed syntenic studies between lettuce and Vitis vinifera as well as other plant species. PMID:23550116
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
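A one-dimensional sketch of the two-step recipe (the paper treats two-dimensional signals): colour white Gaussian noise to a target spectrum, then map it through an inverse CDF to the target amplitude distribution. The target spectrum and the exponential target marginal are assumptions for illustration.

    import numpy as np
    from scipy.stats import norm, expon

    rng = np.random.default_rng(7)
    n = 4096
    white = rng.standard_normal(n)

    # Step 1: impose a simple low-pass power spectral density (assumed target PSD).
    freqs = np.fft.rfftfreq(n)
    psd = 1.0 / (1.0 + (freqs / 0.02) ** 2)
    colored = np.fft.irfft(np.fft.rfft(white) * np.sqrt(psd), n)
    colored /= colored.std()

    # Step 2: Gaussian -> uniform -> target (here exponential) marginal distribution.
    target = expon.ppf(norm.cdf(colored), scale=1.0)
    print(target.mean(), target.var())    # both ~1 for a unit-scale exponential

The nonlinear marginal transform in step 2 distorts the spectrum somewhat, which is consistent with the abstract's caveat that the method is satisfactory "in most cases" rather than exact.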
NASA Astrophysics Data System (ADS)
Vio, R.; Vergès, C.; Andreani, P.
2017-08-01
The matched filter (MF) is one of the most popular and reliable techniques to detect signals of known structure and amplitude smaller than the level of the contaminating noise. Under the assumption of stationary Gaussian noise, MF maximizes the probability of detection subject to a constant probability of false detection or false alarm (PFA). This property relies upon a priori knowledge of the position of the searched signals, which is usually not available. Recently, it has been shown that when applied in its standard form, MF may severely underestimate the PFA. As a consequence, the statistical significance of features that belong to noise is overestimated and the resulting detections are actually spurious. For this reason, an alternative method of computing the PFA has been proposed that is based on the probability density function (PDF) of the peaks of an isotropic Gaussian random field. In this paper we further develop this method. In particular, we discuss the statistical meaning of the PFA and show that, although useful as a preliminary step in a detection procedure, it is not able to quantify the actual reliability of a specific detection. For this reason, a new quantity is introduced, the specific probability of false alarm (SPFA), which is able to carry out this computation. We show how this method works in targeted simulations and apply it to a few interferometric maps taken with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Australia Telescope Compact Array (ATCA). We select a few potential new point sources and assign an accurate detection reliability to these sources.
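The gap between per-pixel statistics and peak statistics that motivates the corrected PFA can be seen in a quick noise-only simulation; this is illustrative and does not reproduce the paper's analytical peak PDF, and the filter scale and threshold are assumed values.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    # Noise-only map, matched-filtered for a Gaussian source profile in white noise.
    rng = np.random.default_rng(8)
    mf = gaussian_filter(rng.standard_normal((512, 512)), sigma=3.0)
    mf /= mf.std()

    # Local maxima of the filtered map: these peaks are what compete with real sources.
    is_peak = (mf == maximum_filter(mf, size=5))
    peaks = mf[is_peak & (mf > 0)]

    thresh = 3.0
    print("per-pixel tail fraction   :", np.mean(mf > thresh))
    print("fraction of peaks > thresh:", np.mean(peaks > thresh))   # noticeably larger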
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
...approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a...
Particle visualization in high-power impulse magnetron sputtering. I. 2D density mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Britun, Nikolay, E-mail: nikolay.britun@umons.ac.be; Palmucci, Maria; Konstantinidis, Stephanos
2015-04-28
Time-resolved characterization of an Ar-Ti high-power impulse magnetron sputtering discharge has been performed. This paper deals with two-dimensional density mapping in the discharge volume obtained by laser-induced fluorescence imaging. The time-resolved density evolution of Ti neutrals, singly ionized Ti atoms (Ti^+), and Ar metastable atoms (Ar^met) in the area above the sputtered cathode is mapped for the first time in this type of discharge. The energetic characteristics of the discharge species are additionally studied by Doppler-shift laser-induced fluorescence imaging. The questions related to the propagation of both the neutral and ionized discharge particles, as well as to their spatial density distributions, are discussed.
Zhao, Yuhui; Su, Kai; Wang, Gang; Zhang, Liping; Zhang, Jijun; Li, Junpeng; Guo, Yinshan
2017-07-14
Genetic linkage maps are an important tool in genetic and genomic research. In this study, two hawthorn cultivars, Qiujinxing and Damianqiu, and 107 progenies from a cross between them were used for constructing a high-density genetic linkage map using the 2b-restriction site-associated DNA (2b-RAD) sequencing method, as well as for mapping quantitative trait loci (QTL) for flavonoid content. In total, 206,411,693 single-end reads were obtained, with an average sequencing depth of 57× in the parents and 23× in the progeny. After quality trimming, 117,896 high-quality 2b-RAD tags were retained, of which 42,279 were polymorphic; of these, 12,951 markers were used for constructing the genetic linkage map. The map contained 17 linkage groups and 3,894 markers, with a total map length of 1,551.97 cM and an average marker interval of 0.40 cM. QTL mapping identified 21 QTLs associated with flavonoid content in 10 linkage groups, which explained 16.30-59.00% of the variance. This is the first high-density linkage map for hawthorn, which will serve as a basis for fine-scale QTL mapping and marker-assisted selection of important traits in hawthorn germplasm and will facilitate chromosome assignment for hawthorn whole-genome assemblies in the future.
BM-Map: Bayesian Mapping of Multireads for Next-Generation Sequencing Data
Ji, Yuan; Xu, Yanxun; Zhang, Qiong; Tsui, Kam-Wah; Yuan, Yuan; Norris, Clift; Liang, Shoudan; Liang, Han
2011-01-01
Summary Next-generation sequencing (NGS) technology generates millions of short reads, which provide valuable information for various aspects of cellular activities and biological functions. A key step in NGS applications (e.g., RNA-Seq) is to map short reads to correct genomic locations within the source genome. While most reads are mapped to a unique location, a significant proportion of reads align to multiple genomic locations with equal or similar numbers of mismatches; these are called multireads. The ambiguity in mapping the multireads may lead to bias in downstream analyses. Currently, most practitioners discard the multireads in their analysis, resulting in a loss of valuable information, especially for the genes with similar sequences. To refine the read mapping, we develop a Bayesian model that computes the posterior probability of mapping a multiread to each competing location. The probabilities are used for downstream analyses, such as the quantification of gene expression. We show through simulation studies and RNA-Seq analysis of real-life data that the Bayesian method yields better mapping than the current leading methods. We provide a C++ program for download that is being packaged into user-friendly software. PMID:21517792
Dukić, Marinela; Berner, Daniel; Roesti, Marius; Haag, Christoph R; Ebert, Dieter
2016-10-13
Recombination rate is an essential parameter for many genetic analyses. Recombination rates are highly variable across species, populations, individuals and different genomic regions. Due to the profound influence that recombination can have on intraspecific diversity and interspecific divergence, characterization of recombination rate variation emerges as a key resource for population genomic studies and emphasises the importance of high-density genetic maps as tools for studying genome biology. Here we present such a high-density genetic map for Daphnia magna, and analyse patterns of recombination rate across the genome. An F2 intercross panel was genotyped by Restriction-site Associated DNA sequencing to construct the third-generation linkage map of D. magna. The resulting high-density map included 4037 markers covering 813 scaffolds and contigs that sum to 77% of the currently available genome draft sequence (v2.4) and 55% of the estimated genome size (238 Mb). The total genetic length of the map presented here is 1614.5 cM and the genome-wide recombination rate is estimated at 6.78 cM/Mb. Merging genetic and physical information, we consistently found that recombination rate estimates are high towards the peripheral parts of the chromosomes, while chromosome centres, harbouring centromeres in D. magna, show very low recombination rate estimates. Due to its high density, the third-generation linkage map for D. magna can be coupled with the draft genome assembly, providing an essential tool for genome investigation in this model organism. Thus, our linkage map can be used for the on-going improvements of the genome assembly, but more importantly, it has enabled us to characterize variation in recombination rate across the genome of D. magna for the first time. These new insights can provide valuable assistance in future studies of genome evolution, mapping of quantitative traits and population genetic studies.
A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.
2014-12-01
A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov () and McGarr () linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. (). In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
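A hedged sketch of the Monte Carlo step described above: magnitudes are drawn from a truncated Gutenberg-Richter law (exponential in magnitude, hence Pareto in moment), converted to seismic moments, and summed to build the distribution of total catalog moment. The b-value, magnitude bounds, and event count are illustrative, not values calibrated for the Groningen field, and the strain-partitioning and compaction terms are omitted.

```python
import numpy as np

def synthetic_total_moment(n_events, b=1.0, m_min=1.5, m_max=6.5,
                           n_catalogs=10_000, rng=None):
    """Monte Carlo distribution of summed seismic moment for synthetic catalogs.

    Magnitudes follow a truncated Gutenberg-Richter law with b-value `b`;
    moments use log10(M0) = 1.5*M + 9.1 (N*m). All parameter values are
    illustrative placeholders.
    """
    rng = np.random.default_rng(rng)
    beta = b * np.log(10.0)
    u = rng.uniform(size=(n_catalogs, n_events))
    # inverse-CDF sampling of a truncated exponential magnitude distribution
    mags = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta
    moments = 10.0 ** (1.5 * mags + 9.1)
    return moments.sum(axis=1)

totals = synthetic_total_moment(n_events=50)
print(np.percentile(totals, [5, 50, 95]))   # confidence bounds on total moment
```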
A multi-part matching strategy for mapping LOINC with laboratory terminologies
Lee, Li-Hui; Groß, Anika; Hartung, Michael; Liou, Der-Ming; Rahm, Erhard
2014-01-01
Objective To address the problem of mapping local laboratory terminologies to Logical Observation Identifiers Names and Codes (LOINC). To study different ontology matching algorithms and investigate how the probability of term combinations in LOINC helps to increase match quality and reduce manual effort. Materials and methods We proposed two matching strategies: full name and multi-part. The multi-part approach also considers the occurrence probability of combined concept parts. It can further recommend possible combinations of concept parts to allow more local terms to be mapped. Three real-world laboratory databases from Taiwanese hospitals were used to validate the proposed strategies with respect to different quality measures and execution run time. A comparison with the commonly used tool, Regenstrief LOINC Mapping Assistant (RELMA) Lab Auto Mapper (LAM), was also carried out. Results The new multi-part strategy yields the best match quality, with F-measure values between 89% and 96%. It can automatically match 70–85% of the laboratory terminologies to LOINC. The recommendation step can further propose mapping to (proposed) LOINC concepts for 9–20% of the local terminology concepts. On average, 91% of the local terminology concepts can be correctly mapped to existing or newly proposed LOINC concepts. Conclusions The mapping quality of the multi-part strategy is significantly better than that of LAM. It enables domain experts to perform LOINC matching with little manual work. The probability of term combinations proved to be a valuable strategy for increasing the quality of match results, providing recommendations for proposed LOINC concepts, and decreasing the run time for match processing. PMID:24363318
Automated segmentation of dental CBCT image with prior-guided sequential random forests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment CBCT images. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from the CBCTs and the context features from the initial probability maps to train the first layer of random forest classifier, which can select discriminative features for segmentation. Based on the first layer of trained classifier, the probability maps are updated and then employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifiers using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla obtained by the authors' method were 0.94 and 0.91, respectively, which are significantly better than those of the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.
Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.
2008-01-01
Maps showing the probability of surface manifestations of liquefaction in the northern Santa Clara Valley were prepared with liquefaction probability curves. The area includes the communities of San Jose, Campbell, Cupertino, Los Altos, Los Gatos, Milpitas, Mountain View, Palo Alto, Santa Clara, Saratoga, and Sunnyvale. The probability curves were based on complementary cumulative frequency distributions of the liquefaction potential index (LPI) for surficial geologic units in the study area. LPI values were computed with extensive cone penetration test soundings. Maps were developed for three earthquake scenarios: an M7.8 on the San Andreas Fault comparable to the 1906 event, an M6.7 on the Hayward Fault comparable to the 1868 event, and an M6.9 on the Calaveras Fault. Ground motions were estimated with the Boore and Atkinson (2008) attenuation relation. Liquefaction is predicted for all three events in young Holocene levee deposits along the major creeks. Liquefaction probabilities are highest for the M7.8 earthquake, ranging from 0.33 to 0.37 if a 1.5-m deep water table is assumed, and 0.10 to 0.14 if a 5-m deep water table is assumed. Liquefaction probabilities of the other surficial geologic units are less than 0.05. Probabilities for the scenario earthquakes are generally consistent with observations during historical earthquakes.
Jeong, Woo Chul; Chauhan, Munish; Sajib, Saurav Z K; Kim, Hyung Joong; Serša, Igor; Kwon, Oh In; Woo, Eung Je
2014-09-07
Magnetic Resonance Electrical Impedance Tomography (MREIT) is an MRI method that enables mapping of internal conductivity and/or current density via measurements of magnetic flux density signals. MREIT measures only the z-component of the magnetic flux density B = (Bx, By, Bz) induced by external current injection. The measured noise in Bz complicates recovery of magnetic flux density maps, resulting in lower-quality conductivity and current-density maps. We present a new method for more accurate measurement of the spatial gradient of the magnetic flux density (∇Bz). The method relies on the use of multiple radio-frequency receiver coils and an interleaved multi-echo pulse sequence that acquires multiple sampling points within each repetition time. The noise level of the measured magnetic flux density Bz depends on the decay rate of the signal magnitude, the injection current duration, and the coil sensitivity map. The proposed method uses three key steps. The first step is to determine a representative magnetic flux density gradient from multiple receiver coils by using a weighted combination and by denoising the measured noisy data. The second step is to optimize the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇Bz, and the third step is to remove a random noise component from the recovered ∇Bz by solving an elliptic partial differential equation in a region of interest. Numerical simulation experiments using a cylindrical phantom model with included regions of low MRI signal to noise ('defects') verified the proposed method. Experimental results from a real phantom experiment, which included three different kinds of anomalies, demonstrated that the proposed method reduced the noise level of the measured magnetic flux density. The quality of the conductivity maps recovered using denoised ∇Bz data showed that the proposed method reduced the conductivity noise level up to 3-4 times in each anomaly region in comparison to the conventional method.
A New, Large-scale Map of Interstellar Reddening Derived from H I Emission
NASA Astrophysics Data System (ADS)
Lenz, Daniel; Hensley, Brandon S.; Doré, Olivier
2017-09-01
We present a new map of interstellar reddening, covering the 39% of the sky with low H I column densities (N_HI < 4 × 10^20 cm^-2, or E(B-V) ≈ 45 mmag) at 16.1 arcmin resolution, based on all-sky observations of Galactic H I emission by the HI4PI Survey. In this low-column-density regime, we derive a characteristic value of N_HI/E(B-V) = 8.8 × 10^21 cm^-2 mag^-1 for gas with |v_LSR| < 90 km s^-1 and find no significant reddening associated with gas at higher velocities. We compare our H I-based reddening map with the Schlegel et al. (SFD) reddening map and find them consistent to within a scatter of ≈ 5 mmag. Further, the differences between our map and the SFD map are in excellent agreement with the low-resolution (4.5°) corrections to the SFD map derived by Peek and Graves based on observed reddening toward passive galaxies. We therefore argue that our H I-based map provides the most accurate interstellar reddening estimates in the low-column-density regime to date. Our reddening map is made publicly available at doi.org/10.7910/DVN/AFJNWJ.
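Given the characteristic ratio quoted in the abstract, converting an H I column density to a reddening estimate is a one-line operation; the sketch below is ours and applies only in the low-column-density, low-velocity regime the map covers.

```python
# Convert an H I column density (cm^-2) to reddening E(B-V) in magnitudes
# using the characteristic ratio quoted in the abstract; valid only for
# low-column-density sightlines and gas with |v_LSR| < 90 km/s.
N_HI_OVER_EBV = 8.8e21          # cm^-2 mag^-1

def ebv_from_nhi(n_hi_cm2):
    return n_hi_cm2 / N_HI_OVER_EBV

print(ebv_from_nhi(4e20))       # ~0.045 mag, i.e. ~45 mmag, the map's upper limit
```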
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
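A hedged sketch of the self-limitation (Ricker) baseline used in the comparison above: a stochastic Ricker recursion is simulated many times and the fraction of runs that fall below a quasi-extinction threshold is reported. Parameter values are illustrative, and the explicit host-pathogen (epidemic) process of the paper is not included.

```python
import numpy as np

def ricker_quasi_extinction(n0=50, r=0.8, K=100, sigma=0.4, threshold=10,
                            years=50, n_runs=10_000, rng=None):
    """Probability that a stochastic Ricker population falls below `threshold`.

    N[t+1] = N[t] * exp(r * (1 - N[t]/K) + eps), eps ~ Normal(0, sigma^2).
    All parameter values are illustrative; the published model adds an
    explicit host-pathogen process on top of this self-limitation.
    """
    rng = np.random.default_rng(rng)
    n = np.full(n_runs, float(n0))
    hit = np.zeros(n_runs, dtype=bool)
    for _ in range(years):
        eps = rng.normal(0.0, sigma, size=n_runs)
        n = n * np.exp(r * (1.0 - n / K) + eps)
        hit |= n < threshold                 # record any dip below the threshold
    return hit.mean()

print(ricker_quasi_extinction(rng=0))
```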
NASA Astrophysics Data System (ADS)
Mishra, D. C.; Arora, K.; Tiwari, V. M.
2004-02-01
A combined gravity map over the Indian Peninsular Shield (IPS) and adjoining oceans brings out well the inter-relationships between the older tectonic features of the continent and the adjoining younger oceanic features. The NW-SE, NE-SW and N-S Precambrian trends of the IPS are reflected in the structural trends of the Arabian Sea and the Bay of Bengal suggesting their probable reactivation. The Simple Bouguer anomaly map shows consistent increase in gravity value from the continent to the deep ocean basins, which is attributed to isostatic compensation due to variations in the crustal thickness. A crustal density model computed along a profile across this region suggests a thick crust of 35-40 km under the continent, which reduces to 22/20-24 km under the Bay of Bengal with thick sediments of 8-10 km underlain by crustal layers of density 2720 and 2900/2840 kg/m³. Large crustal thickness and trends of the gravity anomalies may suggest a transitional crust in the Bay of Bengal up to 150-200 km from the east coast. The crustal thickness under the Laxmi ridge and east of it in the Arabian Sea is 20 and 14 km, respectively, with 5-6 km thick Tertiary and Mesozoic sediments separated by a thin layer of Deccan Trap. Crustal layers of densities 2750 and 2950 kg/m³ underlie sediments. The crustal density model in this part of the Arabian Sea (east of Laxmi ridge) and the structural trends similar to the Indian Peninsular Shield suggest a continent-ocean transitional crust (COTC). The COTC may represent down dropped and submerged parts of the Indian crust evolved at the time of break-up along the west coast of India and passage of Reunion hotspot over India during late Cretaceous. The crustal model under this part also shows an underplated lower crust and a low density upper mantle, extending over the continent across the west coast of India, which appears to be related to the Deccan volcanism. The crustal thickness under the western Arabian Sea (west of the Laxmi ridge) reduces to 8-9 km with crustal layers of densities 2650 and 2870 kg/m³ representing an oceanic crust.
Potential Analysis of Rainfall-induced Sediment Disaster
NASA Astrophysics Data System (ADS)
Chen, Jing-Wen; Chen, Yie-Ruey; Hsieh, Shun-Chieh; Tsai, Kuang-Jung; Chue, Yung-Sheng
2014-05-01
Most mountain regions in Taiwan consist of sedimentary and metamorphic rocks that are fragile and highly weathered. Severe erosion occurs due to intense rainfall and rapid flow; the erosion is further worsened by frequent earthquakes and severely affects the stability of hillsides. Rivers in Taiwan are short and steep, with large runoff differences between wet and dry seasons. Discharges respond rapidly to rainfall intensity, and flood flows usually carry large amounts of sediment. Because of rapid economic growth and social change, development on slope land is inevitable in Taiwan. However, sediment disasters occur frequently in high and precipitous regions during typhoons. To make the regulation of slope-land development more efficient, constructing an evaluation model for sediment disaster potential is very important. In this study, a Genetic Adaptive Neural Network (GANN) was implemented with texture analysis techniques to classify satellite images of the research region before and after typhoons or extreme rainfall and to obtain surface information and hazard log data. Using GANN weight analysis, the factors, levels, and probabilities of disaster for the research areas are presented. Then, through a geographic information system, the disaster potential map is plotted to distinguish high-potential regions from low-potential regions. Finally, the evaluation processes for sediment disaster after rainfall due to slope-land use are established. In this research, the automatic image classification and evaluation modules for sediment disaster after rainfall due to slope-land disturbance and the natural environment are implemented in MATLAB to reduce the complexity and time of computation. After applying the texture analysis techniques, the results show that the overall accuracy and coefficient of agreement of the time-saving image classification for different time periods are at an intermediate-high level or above. The GANN results show that the weight of building density is the largest among all slope-land disturbance factors, followed by road density, orchard density, barren land density, vegetation density, and farmland density. The weight of geology is the largest among all natural environment factors, followed by slope roughness, slope, and elevation. Overlaying the locations of past large sediment disasters on the potential map predicted by GANN, we found that most damaged areas were in regions with medium-high or high landslide potential. Therefore, the proposed sediment disaster potential model can be used in practice.
Updating Mars-GRAM to Increase the Accuracy of Sensitivity Studies at Large Optical Depths
NASA Technical Reports Server (NTRS)
Justh, Hiliary L.; Justus, C. G.; Badger, Andrew M.
2010-01-01
The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). During the Mars Science Laboratory (MSL) site selection process, it was discovered that Mars-GRAM, when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3, is less than realistic. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear set to 0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. This has resulted in an imprecise atmospheric density at all altitudes. As a preliminary fix to this pressure-density problem, density factor values were determined for tau=0.3, 1 and 3 that will adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented from work being done to derive better multipliers by including variation with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data. The addition of these more precise density factors to Mars-GRAM 2005 Release 1.4 will improve the results of the sensitivity studies done for large optical depths.
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
NASA Technical Reports Server (NTRS)
Huckle, H. F. (Principal Investigator)
1980-01-01
The most probable current U.S. taxonomic classification of the soils estimated to dominate world soil map (WSM) units in selected crop-producing states of Argentina and Brazil is presented. Representative U.S. soil series for the units are given. The map units occurring in each state are listed with areal extent and the major U.S. land resource areas in which similar soils most probably occur. Soil series sampled in LARS Technical Report 111579, and the major land resource areas in which they occur, are given with corresponding similar WSM units at the taxonomic subgroup level.
Mapping Tree Density at the Global Scale
NASA Astrophysics Data System (ADS)
Covey, K. R.; Crowther, T. W.; Glick, H.; Bettigole, C.; Bradford, M.
2015-12-01
The global extent and distribution of forest trees is central to our understanding of the terrestrial biosphere. We provide the first spatially continuous map of forest tree density at a global scale. This map reveals that the global number of trees is approximately 3.04 trillion, an order of magnitude higher than the previous estimate. Of these trees, approximately 1.39 trillion exist in tropical and subtropical regions, with 0.74 and 0.61 trillion in boreal and temperate regions, respectively. Biome-level trends in tree density demonstrate the importance of climate and topography in controlling local tree densities at finer scales, as well as the overwhelming impact of humans across most of the world. Based on our projected tree densities, we estimate that deforestation is currently responsible for removing over 15 billion trees each year, and the global number of trees has fallen by approximately 46% since the start of human civilization.
Mapping tree density at a global scale
NASA Astrophysics Data System (ADS)
Crowther, T. W.; Glick, H. B.; Covey, K. R.; Bettigole, C.; Maynard, D. S.; Thomas, S. M.; Smith, J. R.; Hintler, G.; Duguid, M. C.; Amatulli, G.; Tuanmu, M.-N.; Jetz, W.; Salas, C.; Stam, C.; Piotto, D.; Tavani, R.; Green, S.; Bruce, G.; Williams, S. J.; Wiser, S. K.; Huber, M. O.; Hengeveld, G. M.; Nabuurs, G.-J.; Tikhonova, E.; Borchardt, P.; Li, C.-F.; Powrie, L. W.; Fischer, M.; Hemp, A.; Homeier, J.; Cho, P.; Vibrans, A. C.; Umunay, P. M.; Piao, S. L.; Rowe, C. W.; Ashton, M. S.; Crane, P. R.; Bradford, M. A.
2015-09-01
The global extent and distribution of forest trees is central to our understanding of the terrestrial biosphere. We provide the first spatially continuous map of forest tree density at a global scale. This map reveals that the global number of trees is approximately 3.04 trillion, an order of magnitude higher than the previous estimate. Of these trees, approximately 1.39 trillion exist in tropical and subtropical forests, with 0.74 trillion in boreal regions and 0.61 trillion in temperate regions. Biome-level trends in tree density demonstrate the importance of climate and topography in controlling local tree densities at finer scales, as well as the overwhelming effect of humans across most of the world. Based on our projected tree densities, we estimate that over 15 billion trees are cut down each year, and the global number of trees has fallen by approximately 46% since the start of human civilization.
Use of Total Electron Content data to analyze ionosphere electron density gradients
NASA Astrophysics Data System (ADS)
Nava, B.; Radicella, S. M.; Leitinger, R.; Coisson, P.
In the presence of electron density gradients, the thin shell approximation for the ionosphere, used together with a simple mapping function to convert slant Total Electron Content (TEC) to vertical TEC, could lead to TEC conversion errors. Therefore, these mapping function errors can be used to identify the effects of the electron density gradients in the ionosphere. In the present work, high-precision GPS-derived slant TEC data have been used to investigate the effects of the electron density gradients in the middle- and low-latitude ionosphere under geomagnetically quiet and disturbed conditions. In particular, the data corresponding to the geographic area of the American sector for the days 5-7 April 2000 have been used to perform a complete analysis of mapping function errors based on the coinciding pierce point technique. The results clearly illustrate the electron density gradient effects according to the locations considered and to the actual levels of disturbance of the ionosphere.
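For context, a minimal sketch of the single-layer (thin shell) mapping function whose errors the study exploits: slant TEC is divided by a geometric mapping factor to obtain vertical TEC. The shell height and the example values are assumptions; horizontal electron density gradients are exactly what makes this conversion inexact.

```python
import numpy as np

R_EARTH = 6371.0     # km
H_SHELL = 350.0      # km, assumed ionospheric thin-shell height (illustrative)

def vertical_tec(slant_tec, elevation_deg, shell_height=H_SHELL):
    """Thin-shell conversion: vTEC = sTEC / M(e), where
    M(e) = 1 / cos(z') and sin(z') = R cos(e) / (R + h)."""
    e = np.radians(elevation_deg)
    sin_zp = R_EARTH * np.cos(e) / (R_EARTH + shell_height)
    mapping = 1.0 / np.sqrt(1.0 - sin_zp**2)
    return slant_tec / mapping

print(vertical_tec(slant_tec=60.0, elevation_deg=30.0))   # TEC units
```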
Tatem, Andrew J; Guerra, Carlos A; Kabaria, Caroline W; Noor, Abdisalan M; Hay, Simon I
2008-10-27
The efficient allocation of financial resources for malaria control and the optimal distribution of appropriate interventions require accurate information on the geographic distribution of malaria risk and of the human populations it affects. Low population densities in rural areas and high population densities in urban areas can influence malaria transmission substantially. Here, the Malaria Atlas Project (MAP) global database of Plasmodium falciparum parasite rate (PfPR) surveys, medical intelligence and contemporary population surfaces are utilized to explore these relationships and other issues involved in combining malaria risk maps with those of human population distribution in order to define populations at risk more accurately. First, an existing population surface was examined to determine if it was sufficiently detailed to be used reliably as a mask to identify areas of very low and very high population density as malaria free regions. Second, the potential of international travel and health guidelines (ITHGs) for identifying malaria free cities was examined. Third, the differences in PfPR values between surveys conducted in author-defined rural and urban areas were examined. Fourth, the ability of various global urban extent maps to reliably discriminate these author-based classifications of urban and rural in the PfPR database was investigated. Finally, the urban map that most accurately replicated the author-based classifications was analysed to examine the effects of urban classifications on PfPR values across the entire MAP database. Masks of zero population density excluded many non-zero PfPR surveys, indicating that the population surface was not detailed enough to define areas of zero transmission resulting from low population densities. In contrast, the ITHGs enabled the identification and mapping of 53 malaria free urban areas within endemic countries. Comparison of PfPR survey results showed significant differences between author-defined 'urban' and 'rural' designations in Africa, but not for the remainder of the malaria endemic world. The Global Rural Urban Mapping Project (GRUMP) urban extent mask proved most accurate for mapping these author-defined rural and urban locations, and further sub-divisions of urban extents into urban and peri-urban classes enabled the effects of high population densities on malaria transmission to be mapped and quantified. The availability of detailed, contemporary census and urban extent data for the construction of coherent and accurate global spatial population databases is often poor. These known sources of uncertainty in population surfaces and urban maps have the potential to be incorporated into future malaria burden estimates. Currently, insufficient spatial information exists globally to identify areas accurately where population density is low enough to impact upon transmission. Medical intelligence does however exist to reliably identify malaria free cities. Moreover, in Africa, urban areas that have a significant effect on malaria transmission can be mapped.
Kim, Yusung; Tomé, Wolfgang A
2008-01-01
Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an equivalent uniform dose (EUD) of 84 Gy to the entire PTV and selective boosting delivering an EUD of 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over isodose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when voxel-based iso-TCP maps were employed, selective boosting exhibited a more uniform tumor control probability map than could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that, as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans.
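A hedged sketch of a voxel-based TCP map using a common logistic dose-response form; the D50 and gamma50 values, and the logistic form itself, are illustrative placeholders rather than the phenomenological models and parameters used in the paper.

```python
import numpy as np

def voxel_tcp_map(dose, d50=70.0, gamma50=2.0):
    """Voxel-wise tumour control probability from a logistic dose response.

    TCP_i = 1 / (1 + (d50 / D_i)^(4*gamma50)); d50 (dose for 50% control, Gy)
    and gamma50 (normalized slope) are illustrative placeholders.
    """
    return 1.0 / (1.0 + (d50 / np.maximum(dose, 1e-6)) ** (4.0 * gamma50))

dose = np.random.default_rng(0).normal(82.0, 2.0, size=(64, 64))   # toy dose slice in Gy
tcp = voxel_tcp_map(dose)
cold_spots = tcp < 0.8            # candidate iso-TCP "cold spot" mask for plan review
```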
Predicting Anthropogenic Noise Contributions to US Waters.
Gedamke, Jason; Ferguson, Megan; Harrison, Jolie; Hatch, Leila; Henderson, Laurel; Porter, Michael B; Southall, Brandon L; Van Parijs, Sofie
2016-01-01
To increase understanding of the potential effects of chronic underwater noise in US waters, the National Oceanic and Atmospheric Administration (NOAA) organized two working groups in 2011, collectively called "CetSound," to develop tools to map the density and distribution of cetaceans (CetMap) and predict the contribution of human activities to underwater noise (SoundMap). The SoundMap effort utilized data on density, distribution, acoustic signatures of dominant noise sources, and environmental descriptors to map estimated temporal, spatial, and spectral contributions to background noise. These predicted soundscapes are an initial step toward assessing chronic anthropogenic noise impacts on the ocean's varied acoustic habitats and the animals utilizing them.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps
NASA Astrophysics Data System (ADS)
Brooks, E. M.; Stein, S. A.; Spencer, B. D.
2015-12-01
The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered by arguing that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well these maps actually perform is unknown. We explore this hotly contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded only at a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that what factors control map performance is complicated, and that learning more about how maps perform and why would be valuable in making more effective policy.
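A small sketch of the two performance metrics contrasted above: the fraction of sites at which observed maximum shaking exceeded the mapped value, and the squared misfit between observed maxima and predictions. The toy data and toy "map" are assumptions for illustration only.

```python
import numpy as np

def exceedance_fraction(observed_max, predicted):
    """Fraction of sites where observed maximum shaking exceeded the map value
    (the criterion implicit in probabilistic hazard maps)."""
    return np.mean(observed_max > predicted)

def squared_misfit(observed_max, predicted):
    """Mean squared difference between observed maxima and map predictions."""
    return np.mean((observed_max - predicted) ** 2)

rng = np.random.default_rng(0)
observed = rng.lognormal(mean=0.0, sigma=0.5, size=1000)   # toy shaking record
hazard_map = np.full(1000, np.quantile(observed, 0.9))     # toy "map" for illustration
print(exceedance_fraction(observed, hazard_map), squared_misfit(observed, hazard_map))
```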
Continuous-time random-walk model for financial distributions
NASA Astrophysics Data System (ADS)
Masoliver, Jaume; Montero, Miquel; Weiss, George H.
2003-02-01
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
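A hedged sketch of the CTRW mechanism described above: prices evolve through jumps whose waiting times and magnitudes are drawn from the two auxiliary densities. Exponential waiting times and Gaussian jumps are illustrative stand-ins for the empirical densities fitted in the paper.

```python
import numpy as np

def ctrw_path(t_max, mean_wait=1.0, jump_sigma=0.01, rng=None):
    """Simulate one continuous-time random walk of log-price changes.

    Pausing times ~ Exponential(mean_wait); jump magnitudes ~ Normal(0, jump_sigma).
    These two densities are the model inputs; the exponential/Gaussian choice
    here is only illustrative.
    """
    rng = np.random.default_rng(rng)
    t, x, times, path = 0.0, 0.0, [0.0], [0.0]
    while t < t_max:
        t += rng.exponential(mean_wait)       # waiting time between successive jumps
        x += rng.normal(0.0, jump_sigma)      # jump magnitude
        times.append(t)
        path.append(x)
    return np.array(times), np.array(path)

times, log_prices = ctrw_path(t_max=1000.0, rng=0)
```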
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps
NASA Astrophysics Data System (ADS)
Zlinszky, A.; Kania, A.
2016-06-01
Instead of assigning every map pixel to a single class, fuzzy classification provides, for each pixel, not only the assigned class but also the certainty of that class and the alternative possible classes, based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles", which visualize the number of pixels in bins according to the probability of the dominant class, while also showing the probabilities of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map, both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard-boundary evaluation procedures, support active-learning-based iterative classification, and can be applied for operational use.
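A minimal sketch of the proposed "dominance profile": pixels won by a given class are binned by the probability of that dominant class, and the mean probability of every class is reported per bin. The soft classification here is a toy Dirichlet sample, and the function layout is our assumption.

```python
import numpy as np

def dominance_profile(prob_map, class_index, n_bins=10):
    """Dominance profile for one class of a fuzzy (soft) classification.

    prob_map : array of shape (n_pixels, n_classes) with per-pixel class
               probabilities from an ensemble classifier.
    Returns pixel counts per dominant-probability bin and the mean probability
    of every class within each bin.
    """
    dominant = prob_map.argmax(axis=1)
    pixels = prob_map[dominant == class_index]          # pixels where this class wins
    p_dom = pixels[:, class_index]
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    which = np.digitize(p_dom, bins[1:-1])              # bin index per pixel
    counts = np.bincount(which, minlength=n_bins)
    means = np.array([pixels[which == b].mean(axis=0) if counts[b]
                      else np.zeros(prob_map.shape[1]) for b in range(n_bins)])
    return counts, means

rng = np.random.default_rng(0)
soft = rng.dirichlet(alpha=[2, 1, 1], size=5000)        # toy 3-class fuzzy map
counts, class_means = dominance_profile(soft, class_index=0)
```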
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
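A hedged sketch of a multiplicative-update (Wang-Landau-style) density-of-states estimate for a toy system of non-interacting spins, where the exact density of states is binomial and can be used as a check. This generic scheme is not the authors' specific self-consistent importance-sampling algorithm; all names and tolerances are ours.

```python
import numpy as np
from math import comb

def multiplicative_update_dos(n_spins=12, ln_f_final=1e-6, flatness=0.8, rng=None):
    """Estimate ln g(E) for N non-interacting spins, with E = number of up spins.

    Generic scheme: accept moves with probability min(1, g(E_old)/g(E_new)),
    multiply g(E) by f after each visit, and shrink f when the energy
    histogram is roughly flat.
    """
    rng = np.random.default_rng(rng)
    spins = rng.integers(0, 2, n_spins)
    ln_g = np.zeros(n_spins + 1)
    hist = np.zeros(n_spins + 1)
    ln_f, e = 1.0, spins.sum()
    while ln_f > ln_f_final:
        i = rng.integers(n_spins)
        e_new = e + (1 - 2 * spins[i])                  # effect of flipping spin i
        if np.log(rng.random()) < ln_g[e] - ln_g[e_new]:
            spins[i] ^= 1
            e = e_new
        ln_g[e] += ln_f
        hist[e] += 1
        if hist.min() > flatness * hist.mean():         # crude flatness check
            hist[:] = 0
            ln_f /= 2.0
    return ln_g - ln_g[0]                               # normalize so g(0) = 1

ln_g = multiplicative_update_dos()
exact = np.array([np.log(comb(12, k)) for k in range(13)])
print(np.abs(ln_g - exact).max())                       # small if converged
```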
Development and implementation of a Bayesian-based aquifer vulnerability assessment in Florida
Arthur, J.D.; Wood, H.A.R.; Baker, A.E.; Cichon, J.R.; Raines, G.L.
2007-01-01
The Florida Aquifer Vulnerability Assessment (FAVA) was designed to provide a tool for environmental, regulatory, resource management, and planning professionals to facilitate protection of groundwater resources from surface sources of contamination. The FAVA project implements weights-of-evidence (WofE), a data-driven, Bayesian-probabilistic model, to generate a series of maps reflecting relative aquifer vulnerability of Florida's principal aquifer systems. The vulnerability assessment process, from project design to map implementation, is described herein in reference to the Floridan aquifer system (FAS). The WofE model calculates weighted relationships between hydrogeologic data layers that influence aquifer vulnerability and ambient groundwater parameters in wells that reflect relative degrees of vulnerability. Statewide model input data layers (evidential themes) include soil hydraulic conductivity, density of karst features, thickness of aquifer confinement, and hydraulic head difference between the FAS and the water table. Wells with median dissolved nitrogen concentrations exceeding statistically established thresholds serve as training points in the WofE model. The resulting vulnerability map (response theme) reflects classified posterior probabilities based on spatial relationships between the evidential themes and training points. The response theme is subjected to extensive sensitivity and validation testing. Among the model validation techniques is calculation of a response theme based on a different water-quality indicator of relative recharge or vulnerability: dissolved oxygen. Successful implementation of the FAVA maps was facilitated by the overall project design, which included a needs assessment and iterative technical advisory committee input and review. Ongoing programs to protect Florida's springsheds have led to development of larger-scale WofE-based vulnerability assessments. Additional applications of the maps include land-use planning amendments and prioritization of land purchases to protect groundwater resources. © International Association for Mathematical Geology 2007.
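A minimal sketch of the core weights-of-evidence computation for a single binary evidential theme: the positive weight W+ = ln[P(B|D)/P(B|~D)], the negative weight W-, and their contrast. The "karst" layer and "wells" training points are hypothetical stand-ins, and unit-area bookkeeping is simplified to cell counts.

```python
import numpy as np

def wofe_weights(evidence_present, training_point):
    """Weights of evidence for one binary evidential theme.

    evidence_present, training_point : boolean arrays over the study-area cells.
    Returns (W_plus, W_minus, contrast).
    """
    b, d = evidence_present, training_point
    p_b_d = (b & d).sum() / d.sum()            # P(B | D)
    p_b_nd = (b & ~d).sum() / (~d).sum()       # P(B | ~D)
    w_plus = np.log(p_b_d / p_b_nd)
    w_minus = np.log((1.0 - p_b_d) / (1.0 - p_b_nd))
    return w_plus, w_minus, w_plus - w_minus

rng = np.random.default_rng(0)
karst = rng.random(100_000) < 0.2                            # toy "karst present" layer
wells = rng.random(100_000) < np.where(karst, 0.02, 0.005)   # training wells favour karst cells
print(wofe_weights(karst, wells))
```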
Systematic exploration of unsupervised methods for mapping behavior
NASA Astrophysics Data System (ADS)
Todd, Jeremy G.; Kain, Jamey S.; de Bivort, Benjamin L.
2017-02-01
To fully understand the mechanisms giving rise to behavior, we need to be able to precisely measure it. When coupled with large behavioral data sets, unsupervised clustering methods offer the potential of unbiased mapping of behavioral spaces. However, unsupervised techniques to map behavioral spaces are in their infancy, and there have been few systematic considerations of all the methodological options. We compared the performance of seven distinct mapping methods in clustering a wavelet-transformed data set consisting of the x- and y-positions of the six legs of individual flies. Legs were automatically tracked by small pieces of fluorescent dye, while the fly was tethered and walking on an air-suspended ball. We find that there is considerable variation in the performance of these mapping methods, and that better performance is attained when clustering is done in higher dimensional spaces (which are otherwise less preferable because they are hard to visualize). High dimensionality means that some algorithms, including the non-parametric watershed cluster assignment algorithm, cannot be used. We developed an alternative watershed algorithm which can be used in high-dimensional spaces when a probability density estimate can be computed directly. With these tools in hand, we examined the behavioral space of fly leg postural dynamics and locomotion. We find a striking division of behavior into modes involving the fore legs and modes involving the hind legs, with few direct transitions between them. By computing behavioral clusters using the data from all flies simultaneously, we show that this division appears to be common to all flies. We also identify individual-to-individual differences in behavior and behavioral transitions. Lastly, we suggest a computational pipeline that can achieve satisfactory levels of performance without the taxing computational demands of a systematic combinatorial approach.
Yi, Liuxi; Gao, Fengyun; Siqin, Bateer; Zhou, Yu; Li, Qiang; Zhao, Xiaoqing; Jia, Xiaoyun; Zhang, Hui
2017-01-01
Flax is an important crop for oil and fiber; however, no high-density genetic maps have been reported for this species. Specific length amplified fragment sequencing (SLAF-seq) is a high-resolution strategy for large-scale de novo discovery and genotyping of single nucleotide polymorphisms. In this study, SLAF-seq was employed to develop SNP markers in an F2 population to construct a high-density genetic map for flax. In total, 196.29 million paired-end reads were obtained. The average sequencing depth was 25.08 in the male parent, 32.17 in the female parent, and 9.64 in each F2 progeny. In total, 389,288 polymorphic SLAFs were detected, from which 260,380 polymorphic SNPs were developed. After filtering, 4,638 SNPs were found suitable for genetic map construction. The final genetic map included 4,145 SNP markers on 15 linkage groups and was 2,632.94 cM in length, with an average distance of 0.64 cM between adjacent markers. To our knowledge, this map is the densest SNP-based genetic map for flax. The SNP markers and genetic map reported here will serve as a foundation for the fine mapping of quantitative trait loci (QTLs), map-based gene cloning and marker-assisted selection (MAS) for flax.
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
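A hedged toy model of the relationship reported above between animal density and eDNA detection: per-sample detection probability rises with density, and replicate samples are combined as independent trials. The exponential capture form and the beta parameter are illustrative assumptions, not the estimates derived in the study.

```python
import numpy as np

def detection_probability(density_per_km, replicates=1, beta=0.2):
    """P(at least one eDNA detection) under a toy exponential capture model.

    Per-sample detection: p = 1 - exp(-beta * density); `beta` lumps eDNA
    production and downstream persistence and is purely illustrative.
    """
    p_single = 1.0 - np.exp(-beta * np.asarray(density_per_km, dtype=float))
    return 1.0 - (1.0 - p_single) ** replicates

print(detection_probability(1.0))        # sparse population, single sample (~0.18 for beta=0.2)
print(detection_probability(30.0, 3))    # population-level density, triplicate samples
```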
NASA Astrophysics Data System (ADS)
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells but only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular, and uniform probability densities, either separated from zero or not. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to experimental data for mouse B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
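A minimal sketch of the linear chain trick mentioned at the end of the abstract: an Erlang(k, a)-distributed delay is replaced by a chain of k auxiliary ODE compartments, so the delayed system can be integrated with a standard ODE solver. The delayed logistic example and all parameter values are illustrative, not the tumour-immune model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def delayed_logistic_chain(t, y, r=0.5, K=1.0, a=2.0):
    """Logistic growth limited by an Erlang(k, a)-delayed population signal.

    y = [x, z_1, ..., z_k]; z_k approximates the convolution of x with the
    Erlang(k, a) density (linear chain trick), so no delay term is needed.
    """
    x, z = y[0], y[1:]
    dx = r * x * (1.0 - z[-1] / K)
    dz = np.empty_like(z)
    dz[0] = a * (x - z[0])
    dz[1:] = a * (z[:-1] - z[1:])
    return np.concatenate(([dx], dz))

k = 4                                              # Erlang shape = chain length
y0 = np.concatenate(([0.1], np.full(k, 0.1)))      # start the chain at the initial state
sol = solve_ivp(delayed_logistic_chain, (0.0, 60.0), y0, dense_output=True)
```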
Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.
2004-01-01
Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
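A small sketch of the two hazard estimates used above: the Poisson annual exceedance probability for individual landslides and the binomial probability of a year with one or more landslides, both derived from historical counts. The counts below are illustrative, chosen only to reproduce the order of the maxima quoted in the abstract.

```python
import numpy as np

def poisson_annual_exceedance(n_events, n_years):
    """P(at least one landslide next year) from a Poisson rate n_events/n_years."""
    rate = n_events / n_years
    return 1.0 - np.exp(-rate)

def binomial_annual_exceedance(n_years_with_events, n_years):
    """P(a year with one or more landslides) from the historical fraction of such years."""
    return n_years_with_events / n_years

# Illustrative counts for a single hillslope cell over a 90-year record:
print(poisson_annual_exceedance(n_events=26, n_years=90))              # ~0.25
print(binomial_annual_exceedance(n_years_with_events=15, n_years=90))  # ~0.17
```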
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Competition between harvester ants and rodents in the cold desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.
1979-09-30
Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August; whereas, the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.
Spatial controls of occurrence and spread of wildfires in the Missouri Ozark Highlands.
Yang, Jian; He, Hong S; Shifley, Stephen R
2008-07-01
Understanding spatial controls on wildfires is important when designing adaptive fire management plans and optimizing fuel treatment locations on a forest landscape. Previous research about this topic focused primarily on spatial controls for fire origin locations alone. Fire spread and behavior were largely overlooked. This paper contrasts the relative importance of biotic, abiotic, and anthropogenic constraints on the spatial pattern of fire occurrence with that on burn probability (i.e., the probability that fire will spread to a particular location). Spatial point pattern analysis and landscape succession fire model (LANDIS) were used to create maps to show the contrast. We quantified spatial controls on both fire occurrence and fire spread in the Midwest Ozark Highlands region, USA. This area exhibits a typical anthropogenic surface fire regime. We found that (1) human accessibility and land ownership were primary limiting factors in shaping clustered fire origin locations; (2) vegetation and topography had a negligible influence on fire occurrence in this anthropogenic regime; (3) burn probability was higher in grassland and open woodland than in closed-canopy forest, even though fire occurrence density was less in these vegetation types; and (4) biotic and abiotic factors were secondary descriptive ingredients for determining the spatial patterns of burn probability. This study demonstrates how fire occurrence and spread interact with landscape patterns to affect the spatial distribution of wildfire risk. The application of spatial point pattern data analysis would also be valuable to researchers working on landscape forest fire models to integrate historical ignition location patterns in fire simulation.
Continental-scale, seasonal movements of a heterothermic migratory tree bat
Cryan, Paul M.; Stricker, Craig A.; Wunder, Michael B.
2014-01-01
Long-distance migration evolved independently in bats and unique migration behaviors are likely, but because of their cryptic lifestyles, many details remain unknown. North American hoary bats (Lasiurus cinereus cinereus) roost in trees year-round and probably migrate farther than any other bats, yet we still lack basic information about their migration patterns and wintering locations or strategies. This information is needed to better understand unprecedented fatality of hoary bats at wind turbines during autumn migration and to determine whether the species could be susceptible to an emerging disease affecting hibernating bats. Our aim was to infer probable seasonal movements of individual hoary bats to better understand their migration and seasonal distribution in North America. We analyzed the stable isotope values of non-exchangeable hydrogen in the keratin of bat hair and combined isotopic results with prior distributional information to derive relative probability density surfaces for the geographic origins of individuals. We then mapped probable directions and distances of seasonal movement. Results indicate that hoary bats summer across broad areas. In addition to assumed latitudinal migration, we uncovered evidence of longitudinal movement by hoary bats from inland summering grounds to coastal regions during autumn and winter. Coastal regions with nonfreezing temperatures may be important wintering areas for hoary bats. Hoary bats migrating through any particular area, such as a wind turbine facility in autumn, are likely to have originated from a broad expanse of summering grounds from which they have traveled in no recognizable order. Better characterizing migration patterns and wintering behaviors of hoary bats sheds light on the evolution of migration and provides context for conserving these migrants.
Vehicles instability criteria for flood risk assessment of a street network
NASA Astrophysics Data System (ADS)
Arrighi, Chiara; Huybrechts, Nicolas; Ouahsine, Abdellatif; Chassé, Patrick; Oumeraci, Hocine; Castelli, Fabio
2016-05-01
The mutual interaction between floods and human activity is a process that has evolved over history and has shaped flood risk pathways. In developed countries, many events have shown that the majority of flood fatalities occur in vehicles, which are considered safe shelters but can turn into traps for certain combinations of water depth and velocity. Thus, driving a car in floodwaters is recognized as the most critical aggravating factor for people's safety. On the other hand, the entrainment of vehicles may locally obstruct the flow and induce the collapse of infrastructure. Flood risk to vehicles can be defined as the combination of the probability of a vehicle being swept away (i.e. the hazard) and the actual traffic/parking density (i.e. the vulnerability). Hazard for vehicles can be assessed through the spatial identification and mapping of the critical conditions for vehicle incipient motion. This analysis requires a flood map with information on water depth and velocity, together with consistent instability criteria accounting for flood and vehicle characteristics. Vulnerability is evaluated using road network and traffic data. Flood risk mapping for vehicles can therefore support public education and management practices aimed at reducing casualties. In this work, a flood hazard classification for vehicles is introduced, and an application to a real case study is presented and discussed.
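A minimal sketch of how such a hazard classification might be rasterised over a flood map follows; the depth and depth-velocity thresholds are placeholder assumptions, whereas the paper derives vehicle-specific instability criteria:

import numpy as np

def vehicle_hazard_class(depth, velocity, d_crit=0.3, dv_crit=0.35):
    """Return 0 = low, 1 = medium, 2 = high hazard per cell of a flood map (thresholds assumed)."""
    depth, velocity = np.asarray(depth, float), np.asarray(velocity, float)
    dv = depth * velocity
    hazard = np.zeros(depth.shape, dtype=int)
    hazard[(depth > d_crit) | (dv > dv_crit)] = 1          # incipient instability possible
    hazard[(depth > 2 * d_crit) | (dv > 2 * dv_crit)] = 2  # vehicle likely swept away
    return hazard

h = vehicle_hazard_class([[0.1, 0.5]], [[0.2, 1.5]])       # -> [[0, 2]]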
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.
2013-04-01
During a landslide triggering event, the tens to thousands of landslides resulting from the trigger (e.g., earthquake, heavy rainfall) may block a number of sections of the road network, posing a risk to rescue efforts, logistics and accessibility to a region. Here, we present initial results from a semi-stochastic model we are developing to evaluate the probability of landslides intersecting a road network and the network-accessibility implications of this across a region. This was performed in the open source GRASS GIS software, where we took 'model' landslides and dropped them on a 79 km2 test area in Collazzone, Umbria, Central Italy, with a given road network (major and minor roads, 404 km in length) and already determined landslide susceptibilities. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. The number of landslide areas selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. 79 landslide areas chosen randomly for each iteration. Landslides were then 'dropped' over the region semi-stochastically: (i) random points were generated across the study region; (ii) based on the landslide susceptibility map, points were accepted/rejected based on the probability of a landslide occurring at that location. After a point was accepted, it was assigned a landslide area (AL) and length to width ratio. Landslide intersections with roads were then assessed and indices such as the location, number and size of road blockage recorded. The GRASS-GIS model was run 1000 times in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event of 1 landslide km-2 over a 79 km2 region with 404 km of road, the number of road blockages ranges from 6 to 17, resulting in one road blockage every 24-67 km of roads. The average length of road blocked was 33 m. As we progress with model development and more sophisticated network analysis, we believe this semi-stochastic modelling approach will aid civil protection agencies in getting a rough idea of the probability of road network damage (road block number and extent) resulting from landslide triggering event scenarios of different magnitude.
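A compact sketch of the semi-stochastic 'dropping' procedure described above (not the GRASS GIS implementation; the inverse-gamma parameters and the susceptibility grid are illustrative assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
shape, loc, scale = 1.4, -1.3e-4, 1.3e-3   # illustrative inverse-gamma parameters in km^2 (assumed)
n_slides = 79                              # 1 landslide / km^2 over a 79 km^2 region

susceptibility = rng.random((100, 100))    # placeholder susceptibility map in [0, 1]

areas = stats.invgamma.rvs(shape, loc=loc, scale=scale, size=n_slides, random_state=rng)

points = []
while len(points) < n_slides:              # rejection sampling on the susceptibility map
    i, j = rng.integers(0, 100, size=2)
    if rng.random() < susceptibility[i, j]:
        points.append((i, j, areas[len(points)]))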
Multi-Agent Cooperative Target Search
Hu, Jinwen; Xie, Lihua; Xu, Jun; Xu, Zhao
2014-01-01
This paper addresses a vision-based cooperative search for multiple mobile ground targets by a group of unmanned aerial vehicles (UAVs) with limited sensing and communication capabilities. The airborne camera on each UAV has a limited field of view and its target discriminability varies as a function of altitude. First, by dividing the whole surveillance region into cells, a probability map can be formed for each UAV indicating the probability of target existence within each cell. Then, we propose a distributed probability map updating model which includes the fusion of measurement information, information sharing among neighboring agents, information decay and transmission due to environmental changes such as the target movement. Furthermore, we formulate the target search problem as a multi-agent cooperative coverage control problem by optimizing the collective coverage area and the detection performance. The proposed map updating model and the cooperative control scheme are distributed, i.e., assuming that each agent only communicates with its neighbors within its communication range. Finally, the effectiveness of the proposed algorithms is illustrated by simulation. PMID:24865884
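One plausible reading of the per-cell map update is a Bayesian filter on target existence; the detection and false-alarm rates below are assumptions, not values from the paper:

import numpy as np

def update_cell(p_prior, observed, p_d=0.8, p_f=0.1):
    """Posterior probability that a target is in the cell after one sensor look."""
    if observed:   # sensor reports a target
        num = p_d * p_prior
        den = p_d * p_prior + p_f * (1 - p_prior)
    else:          # sensor reports an empty cell
        num = (1 - p_d) * p_prior
        den = (1 - p_d) * p_prior + (1 - p_f) * (1 - p_prior)
    return num / den

p_map = np.full((20, 20), 0.05)                          # uniform prior over surveillance cells
p_map[3, 7] = update_cell(p_map[3, 7], observed=True)    # one positive look raises that cell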
Probing the statistics of transport in the Hénon Map
NASA Astrophysics Data System (ADS)
Alus, O.; Fishman, S.; Meiss, J. D.
2016-09-01
The phase space of an area-preserving map typically contains infinitely many elliptic islands embedded in a chaotic sea. Orbits near the boundary of a chaotic region have been observed to stick for long times, strongly influencing their transport properties. The boundary is composed of invariant "boundary circles." We briefly report recent results on the distribution of rotation numbers of boundary circles for the Hénon quadratic map and show that the probability of occurrence of small integer entries of their continued fraction expansions is larger than would be expected for a number chosen at random. However, large integer entries occur with probabilities distributed proportionally to the random case. The probability distributions of ratios of fluxes through island chains are reported as well. These island chains are neighbours in the sense of the Meiss-Ott Markov-tree model. Two distinct universality families are found. The distributions of the ratio between the flux and orbital period are also presented. All of these results have implications for models of transport in mixed phase space.
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
Distribution of submerged aquatic vegetation in the St. Louis River estuary: Maps and models
In late summer of 2011 and 2012 we used echo-sounding gear to map the distribution of submerged aquatic vegetation (SAV) in the St. Louis River Estuary (SLRE). From these data we produced maps of SAV distribution and we created logistic models to predict the probability of occurr...
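A hedged sketch of a logistic occurrence model of the kind described, using synthetic depth data rather than the SLRE survey:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
depth = rng.uniform(0, 6, 500)                        # m, echo-sounding transect points (synthetic)
p_true = 1 / (1 + np.exp(-(2.0 - 1.2 * depth)))       # assumed "true" SAV response to depth
present = rng.random(500) < p_true

model = LogisticRegression().fit(depth.reshape(-1, 1), present)
p_sav = model.predict_proba(np.array([[1.0], [3.0]]))[:, 1]   # predicted probability of occurrence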
Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy; He, Jing
2017-01-01
Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4-7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map-model pairs.
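One way to realise an arc-length comparison of two helix axes is to resample both polylines at matched arc-length fractions and average the point-wise separation; this is an interpretation for illustration, not the published implementation:

import numpy as np

def resample_by_arclength(axis, n=50):
    axis = np.asarray(axis, dtype=float)
    seg = np.linalg.norm(np.diff(axis, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg))) / seg.sum()   # normalised arc length
    t = np.linspace(0, 1, n)
    return np.column_stack([np.interp(t, s, axis[:, k]) for k in range(3)])

def axis_discrepancy(axis_a, axis_b, n=50):
    a, b = resample_by_arclength(axis_a, n), resample_by_arclength(axis_b, n)
    return np.linalg.norm(a - b, axis=1).mean()   # mean separation along matched arc lengths

d = axis_discrepancy([[0, 0, 0], [0, 0, 5], [0, 1, 10]],
                     [[0.5, 0, 0], [0.4, 0, 5], [0.2, 1, 10]])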
Models for loosely linked gene duplicates suggest lengthy persistence of both copies.
O'Hely, Martin; Wockner, Leesa
2007-06-21
Consider the appearance of a duplicate copy of a gene at a locus linked loosely, if at all, to the locus at which the gene is usually found. If all copies of the gene are subject to non-functionalizing mutations, then two fates are possible: loss of functional copies at the duplicate locus (loss of duplicate expression), or loss of functional copies at the original locus (map change). This paper proposes a simple model to address the probability of map change, the time taken for a map change and/or loss of duplicate expression, and considers where in the spectrum between loss of duplicate expression and map change such a duplicate complex is likely to be found. The findings are: the probability of map change is always half the reciprocal of the population size N, the time for a map change to occur is order NlogN generations, and that there is a marked tendency for duplicates to remain near equi-frequency with the gene at the original locus for a large portion of that time. This is in excellent agreement with simulations.
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
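Maxwell constraint counting itself is a one-line bound; the sketch below states it for a body-bar network with 6 degrees of freedom per body (subtracting the 6 trivial rigid-body motions is an assumption about the convention used):

def maxwell_dof(n_bodies, n_bars):
    """Mean-field lower bound on internal degrees of freedom of a body-bar network."""
    return max(6 * n_bodies - n_bars - 6, 0)

print(maxwell_dof(n_bodies=100, n_bars=550))  # > 0: globally under-constrained by this bound
print(maxwell_dof(n_bodies=100, n_bars=700))  # 0: globally over-constrained by this bound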
Redundancy and reduction: Speakers manage syntactic information density
Florian Jaeger, T.
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
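The ensemble described above is easy to sample numerically; the following sketch (with assumed, small dimensions) builds two random mixed states by partial tracing and computes the spectrum of their difference and the trace distance:

import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 32   # system and environment dimensions (assumed)

def random_mixed_state(n, m, rng):
    psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    psi /= np.linalg.norm(psi)         # normalised bipartite pure state
    return psi @ psi.conj().T          # partial trace over the m-dimensional factor

rho, sigma = random_mixed_state(n, m, rng), random_mixed_state(n, m, rng)
eigvals = np.linalg.eigvalsh(rho - sigma)      # spectrum of the difference matrix
trace_distance = 0.5 * np.abs(eigvals).sum()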
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
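A minimal sketch of the fit-and-select step, assuming a candidate set of gamma, lognormal, and normal densities and synthetic depth-use data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
depths = rng.gamma(shape=3.0, scale=0.25, size=200)   # synthetic depth-use observations (m)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "norm": stats.norm}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(depths)                         # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(depths, *params))
    aic[name] = 2 * len(params) - 2 * loglik

best = min(aic, key=aic.get)   # the selected PDF can then serve as the HSC curve (rescaled if desired)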
Surface Snow Density of East Antarctica Derived from In-Situ Observations
NASA Astrophysics Data System (ADS)
Tian, Y.; Zhang, S.; Du, W.; Chen, J.; Xie, H.; Tong, X.; Li, R.
2018-04-01
Models based on physical principles or semi-empirical parameterizations have been used to compute firn density, which is essential for the study of surface processes on the Antarctic ice sheet. However, parameterization of surface snow density often struggles to capture detailed local characteristics. In this study we propose to generate a surface density map for East Antarctica from all available field observations. Because the observations are non-uniformly distributed around East Antarctica, obtained by different methods, and temporally inhomogeneous, they are first used to establish an initial density map with a grid size of 30 × 30 km2 in which the observations are averaged over a temporal scale of five years. We then construct an observation matrix with the map grids as columns and the temporal scale as rows. If a site has no density value for a period, the corresponding entry is set to 0 in the matrix. To extract the main spatial and temporal information of the surface snow density matrix, we adopt the Empirical Orthogonal Function (EOF) method to decompose the observation matrix and keep only the first several lower-order modes, because these modes already contain most of the information in the observation matrix. The many missing (zero) entries are handled with a matrix completion algorithm, from which we derive the time series of surface snow density at each observation site. Finally, we obtain the surface snow density by multiplying the modes, interpolated by kriging, with the corresponding mode amplitudes. A comparative analysis has been carried out between our surface snow density map and model results. Further details will be presented in the paper.
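A small sketch of a low-rank completion step of the kind described (hard-impute style, with a synthetic matrix standing in for the snow-density observation matrix):

import numpy as np

rng = np.random.default_rng(4)
true = rng.normal(size=(40, 10)) @ rng.normal(size=(10, 12))   # synthetic low-rank "density" matrix
mask = rng.random(true.shape) < 0.6                            # ~60% of entries observed
M = np.where(mask, true, 0.0)

def complete_lowrank(M, mask, rank=3, n_iter=200):
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # truncated-SVD (EOF-mode) reconstruction
        X = np.where(mask, M, low)                   # keep observed values, refill missing ones
    return X

filled = complete_lowrank(M, mask, rank=10)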
Vehicle Detection for RCTA/ANS (Autonomous Navigation System)
NASA Technical Reports Server (NTRS)
Brennan, Shane; Bajracharya, Max; Matthies, Larry H.; Howard, Andrew B.
2012-01-01
Using a stereo camera pair, imagery is acquired and processed through the JPLV stereo processing pipeline. From this stereo data, large 3D blobs are found. These blobs are then described and classified by their shape to determine which are vehicles and which are not. Prior vehicle detection algorithms are either targeted to specific domains, such as following lead cars, or are intensity-based methods that involve learning typical vehicle appearances from a large corpus of training data. In order to detect vehicles, the JPL Vehicle Detection (JVD) algorithm goes through the following steps: 1. Take as input a left disparity image and left rectified image from JPLV stereo. 2. Project the disparity data onto a two-dimensional Cartesian map. 3. Perform some post-processing of the map built in the previous step in order to clean it up. 4. Take the processed map and find peaks. For each peak, grow it out into a map blob. These map blobs represent large, roughly vehicle-sized objects in the scene. 5. Take these map blobs and reject those that do not meet certain criteria. Build descriptors for the ones that remain. Pass these descriptors onto a classifier, which determines if the blob is a vehicle or not. The probability of detection is the probability that if a vehicle is present in the image, is visible, and un-occluded, then it will be detected by the JVD algorithm. In order to estimate this probability, eight sequences were ground-truthed from the RCTA (Robotics Collaborative Technology Alliances) program, totaling over 4,000 frames with 15 unique vehicles. Since these vehicles were observed at varying ranges, one is able to find the probability of detection as a function of range. At the time of this reporting, the JVD algorithm was tuned to perform best at cars seen from the front, rear, or either side, and perform poorly on vehicles seen from oblique angles.
The Mass Surface Density Distribution of a High-Mass Protocluster forming from an IRDC and GMC
NASA Astrophysics Data System (ADS)
Lim, Wanggi; Tan, Jonathan C.; Kainulainen, Jouni; Ma, Bo; Butler, Michael
2016-01-01
We study the probability distribution function (PDF) of mass surface densities of infrared dark cloud (IRDC) G028.36+00.07 and its surrounding giant molecular cloud (GMC). Such PDF analysis has the potential to probe the physical processes that are controlling cloud structure and star formation activity. The chosen IRDC is of particular interest since it has almost 100,000 solar masses within a radius of 8 parsecs, making it one of the most massive, dense molecular structures known and thus a potential site for the formation of a high-mass, "super star cluster". We study mass surface densities in two ways. First, we use a combination of NIR, MIR and FIR extinction maps that are able to probe the bulk of the cloud structure that is not yet forming stars. This analysis also shows evidence for flattening of the IR extinction law as mass surface density increases, consistent with increasing grain size and/or growth of ice mantles. Second, we study the FIR and sub-mm dust continuum emission from the cloud, especially utilizing Herschel PACS and SPIRE images. We first subtract off the contribution of the foreground diffuse emission that contaminates these images. Next we examine the effects of background subtraction and choice of dust opacities on the derived mass surface density PDF. The final derived PDFs from both methods are compared, including also with other published studies of this cloud. The implications for theoretical models and simulations of cloud structure, including the role of turbulence and magnetic fields, are discussed.
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
Ma, Hao; Moore, Paul H; Liu, Zhiyong; Kim, Minna S; Yu, Qingyi; Fitch, Maureen M M; Sekioka, Terry; Paterson, Andrew H; Ming, Ray
2004-01-01
A high-density genetic map of papaya (Carica papaya L.) was constructed using 54 F(2) plants derived from cultivars Kapoho and SunUp with 1501 markers, including 1498 amplified fragment length polymorphism (AFLP) markers, the papaya ringspot virus coat protein marker, morphological sex type, and fruit flesh color. These markers were mapped into 12 linkage groups at a LOD score of 5.0 and recombination frequency of 0.25. The 12 major linkage groups covered a total length of 3294.2 cM, with an average distance of 2.2 cM between adjacent markers. This map revealed severe suppression of recombination around the sex determination locus with a total of 225 markers cosegregating with sex types. The cytosine bases were highly methylated in this region on the basis of the distribution of methylation-sensitive and -insensitive markers. This high-density genetic map is essential for cloning of specific genes of interest such as the sex determination gene and for the integration of genetic and physical maps of papaya. PMID:15020433
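For context, recombination frequencies are converted to centimorgan distances with a mapping function when assembling such a map; Haldane and Kosambi are the standard choices, although the abstract does not state which was used:

import math

def haldane_cm(r):
    """Haldane map distance (cM) from recombination frequency r."""
    return -50.0 * math.log(1.0 - 2.0 * r)

def kosambi_cm(r):
    """Kosambi map distance (cM), allowing for crossover interference."""
    return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

print(haldane_cm(0.022))   # ~2.25 cM, near the 2.2 cM average marker spacing reported above
print(kosambi_cm(0.022))   # ~2.20 cM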
NASA Astrophysics Data System (ADS)
Marrero, J. M.; García, A.; Llinares, A.; Rodriguez-Losada, J. A.; Ortiz, R.
2012-03-01
One of the critical issues in managing volcanic crises is making the decision to evacuate a densely-populated region. In order to take a decision of such importance it is essential to estimate the cost in lives for each of the expected eruptive scenarios. One of the tools that assist in estimating the number of potential fatalities for such decision-making is the calculation of the FN-curves. In this case the FN-curve is a graphical representation that relates the frequency of the different hazards to be expected for a particular volcano or volcanic area, and the number of potential fatalities expected for each event if the zone of impact is not evacuated. In this study we propose a method for assessing the impact that a possible eruption from the Tenerife Central Volcanic Complex (CVC) would have on the population at risk. Factors taken into account include the spatial probability of the eruptive scenarios (susceptibility) and the temporal probability of the magnitudes of the eruptive scenarios. For each point or cell of the susceptibility map with greater probability, a series of probability-scaled hazard maps is constructed for the whole range of magnitudes expected. The number of potential fatalities is obtained from the intersection of the hazard maps with the spatial map of population distribution. The results show that the Emergency Plan for Tenerife must provide for the evacuation of more than 100,000 persons.
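An FN curve of the kind described can be assembled directly from a scenario list; the frequencies and fatality counts below are illustrative placeholders, not values for Tenerife:

import numpy as np

scenarios = [            # (annual frequency, potential fatalities if not evacuated) -- assumed
    (1e-2, 50),
    (5e-3, 1_000),
    (1e-3, 20_000),
    (2e-4, 100_000),
]

n_grid = np.logspace(1, 5, 50)
fn = [sum(f for f, n in scenarios if n >= N) for N in n_grid]   # F(N): frequency of >= N fatalities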
NASA Astrophysics Data System (ADS)
Freire, Sérgio; Aubrecht, Christoph
2010-05-01
The recent 7.0 M earthquake that caused severe damage and destruction in parts of Haiti struck close to 5 PM (local time), at a moment when many people were not in their residences, instead being in their workplaces, schools, or churches. Community vulnerability assessment to seismic hazard relying solely on the location and density of resident-based census population, as is commonly the case, would grossly misrepresent the real situation. In particular in the context of global (climate) change, risk analysis is a research field increasingly gaining in importance whereas risk is usually defined as a function of hazard probability and vulnerability. Assessment and mapping of human vulnerability has however generally been lagging behind hazard analysis efforts. Central to the concept of vulnerability is the issue of human exposure. Analysis of exposure is often spatially tied to administrative units or reference objects such as buildings, spanning scales from the regional level to local studies for small areas. Due to human activities and mobility, the spatial distribution of population is time-dependent, especially in metropolitan areas. Accurately estimating population exposure is a key component of catastrophe loss modeling, one element of effective risk analysis and emergency management. Therefore, accounting for the spatio-temporal dynamics of human vulnerability correlates with recent recommendations to improve vulnerability analyses. Earthquakes are the prototype for a major disaster, being low-probability, rapid-onset, high-consequence events. Lisbon, Portugal, is subject to a high risk of earthquake, which can strike at any day and time, as confirmed by modern history (e.g. December 2009). The recently-approved Special Emergency and Civil Protection Plan (PEERS) is based on a Seismic Intensity map, and only contemplates resident population from the census as proxy for human exposure. In the present work we map and analyze the spatio-temporal distribution of population in the daily cycle to re-assess exposure to earthquake hazard in the Lisbon Metropolitan Area, home to almost three million people. New high-resolution (50 m grids) daytime and nighttime population distribution maps are developed using dasymetric mapping. The modeling approach uses areal interpolation to combine best-available census data and statistics with land use and land cover data. Mobility statistics are considered for mapping daytime distribution, and empirical parameters used for interpolation are obtained from a previous effort in high resolution population mapping of part of the study area. Finally, the population distribution maps are combined with the Seismic Hazard Intensity map to: (1) quantify and compare human exposure to seismic intensity levels in the daytime and nighttime periods, and (2) derive nighttime and daytime overall Earthquake Risk maps. This novel approach yields previously unavailable spatio-temporal population distribution information for the study area, enabling refined and more accurate earthquake risk mapping and assessment. Additionally, such population exposure datasets can be combined with different hazard maps to improve spatio-temporal assessment and risk mapping for any type of hazard, natural or man-made. We believe this improved characterization of vulnerability and risk can benefit all phases of the disaster management process where human exposure has to be considered, namely in emergency planning, risk mitigation, preparedness, and response to an event.
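A hedged sketch of weighted dasymetric redistribution in the spirit of the approach above; the land-use weights and the toy census zone are assumptions:

import numpy as np

landuse = np.array([[1, 1, 2],
                    [2, 3, 3],
                    [3, 3, 0]])             # 0 water, 1 industrial, 2 mixed, 3 residential (toy grid)
weights = {0: 0.0, 1: 0.1, 2: 0.5, 3: 1.0}  # relative population weight per class (assumed)

zone_population = 9_000                     # census count for the zone covering this grid
w = np.vectorize(weights.get)(landuse).astype(float)
cell_pop = zone_population * w / w.sum()    # people per grid cell, preserving the zone total

A daytime map would repeat the same redistribution with workplace- and mobility-adjusted counts and weights.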
Harrison, Jolie; Ferguson, Megan; Gedamke, Jason; Hatch, Leila; Southall, Brandon; Van Parijs, Sofie
2016-01-01
To help manage chronic and cumulative impacts of human activities on marine mammals, the National Oceanic and Atmospheric Administration (NOAA) convened two working groups, the Underwater Sound Field Mapping Working Group (SoundMap) and the Cetacean Density and Distribution Mapping Working Group (CetMap), with the overarching effort of both groups referred to as CetSound, which (1) mapped the predicted contribution of human sound sources to ocean noise and (2) provided region/time/species-specific cetacean density and distribution maps. Mapping products were presented at a symposium where future priorities were identified, including institutionalization/integration of the CetSound effort within NOAA-wide goals and programs, creation of forums and mechanisms for external input and funding, and expanded outreach/education. NOAA is subsequently developing an ocean noise strategy to articulate noise conservation goals and further identify science and management actions needed to support them.
Graph edit distance from spectral seriation.
Robles-Kelly, Antonio; Hancock, Edwin R
2005-03-01
This paper is concerned with computing graph edit distance. One of the criticisms that can be leveled at existing methods for computing graph edit distance is that they lack some of the formality and rigor of the computation of string edit distance. Hence, our aim is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph-matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations which minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrix and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.
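The seriation step itself is compact; the sketch below (illustrative only) orders nodes by the leading eigenvector of the adjacency matrix, producing the string used for alignment:

import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy undirected graph

vals, vecs = np.linalg.eigh(A)
leading = np.abs(vecs[:, np.argmax(vals)])  # component magnitudes of the leading eigenvector
serial_order = np.argsort(-leading)         # node sequence used as the "string" for matching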
Risk assessment for tephra dispersal and sedimentation: the example of four Icelandic volcanoes
NASA Astrophysics Data System (ADS)
Biass, Sebastien; Scaini, Chiara; Bonadonna, Costanza; Smith, Kate; Folch, Arnau; Höskuldsson, Armann; Galderisi, Adriana
2014-05-01
In order to assist the elaboration of proactive measures for the management of future Icelandic volcanic eruptions, we developed a new approach to assess the impact associated with tephra dispersal and sedimentation at various scales and for multiple sources. Target volcanoes are Hekla, Katla, Eyjafjallajökull and Askja, selected for their high probabilities of eruption and/or their high potential impact. We combined stratigraphic studies, probabilistic strategies and numerical modelling to develop comprehensive eruption scenarios and compile hazard maps for local ground deposition and regional atmospheric concentration using both TEPHRA2 and FALL3D models. New algorithms for the identification of comprehensive probability density functions of eruptive source parameters were developed for both short and long-lasting activity scenarios. A vulnerability assessment of socioeconomic and territorial aspects was also performed at both national and continental scales. The identification of relevant vulnerability indicators allowed for the identification of the most critical areas and territorial nodes. At a national scale, the vulnerability of economic activities and the accessibility to critical infrastructures was assessed. At a continental scale, we assessed the vulnerability of the main airline routes and airports. Resulting impact and risk were finally assessed by combining hazard and vulnerability analysis.
Xu, Yueqing; McNamara, Paul; Wu, Yanfang; Dong, Yue
2013-10-15
Arable land in China has been decreasing as a result of rapid population growth and economic development as well as urban expansion, especially in developed regions around cities where quality farmland quickly disappears. This paper analyzed changes in arable land utilization during 1993-2008 in the Pinggu district, Beijing, China, developed a multinomial logit (MNL) model to determine spatial driving factors influencing arable land-use change, and simulated arable land transition probabilities. Land-use maps, as well as social-economic and geographical data were used in the study. The results indicated that arable land decreased significantly between 1993 and 2008. Lost arable land shifted into orchard, forestland, settlement, and transportation land. Significant differences existed for arable land transitions among different landform areas. Slope, elevation, population density, urbanization rate, distance to settlements, and distance to roadways were strong drivers influencing arable land transition to other uses. The MNL model was proved effective for predicting transition probabilities in land use from arable land to other land-use types, thus can be used for scenario analysis to develop land-use policies and land-management measures in this metropolitan area. Copyright © 2013 Elsevier Ltd. All rights reserved.
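For illustration, an MNL model turns linear predictors into transition probabilities via a softmax over destination classes; the coefficients below are placeholders, not the fitted values from the Pinggu study:

import numpy as np

classes = ["arable", "orchard", "forest", "settlement", "transport"]
beta = np.array([[0.0, 0.0, 0.0],        # reference class: remains arable
                 [0.5, -0.8, 0.3],       # orchard     ~ slope, distance to road, population density
                 [1.0, 0.2, -0.5],       # forest
                 [-0.6, -1.5, 1.2],      # settlement
                 [-0.4, -2.0, 0.8]])     # transport   (all coefficients assumed)

x = np.array([0.3, 0.1, 0.9])            # standardised covariates for one parcel (assumed)
u = beta @ x
p = np.exp(u) / np.exp(u).sum()          # transition probabilities, one per destination class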
Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.
Lin, Lanny; Goodrich, Michael A
2014-12-01
During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environment elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-Hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic that uses a Gaussian mixture model to prioritize search subregions. The algorithms search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms outperform existing algorithms significantly and can yield efficient paths that yield payoffs near the optimal.
On the physical nature of globular cluster candidates in the Milky Way bulge
NASA Astrophysics Data System (ADS)
Piatti, Andrés E.
2018-06-01
We present results from 2MASS JKs photometry on the physical reality of recently reported globular cluster (GC) candidates in the Milky Way (MW) bulge. We based our analysis on photometric membership probabilities that allowed us to distinguish real stellar aggregates from the composite field star population. When colour-magnitude diagrams and stellar density maps are built for stars at different membership probability levels, the genuine GC candidate populations are clearly highlighted. We then used the tip of the red giant branch (RGB) as a distance estimator, resulting in heliocentric distances that place many of the objects in regions near the MW bulge where no GC had previously been recognized. A few GC candidates turned out to be MW halo/disc objects. Metallicities estimated from the standard RGB method are in agreement with the values expected according to the position of the GC candidates in the Galaxy. Finally, we derived their structural parameters for the first time. We found that the studied objects have core, half-light, and tidal radii in the ranges spanned by the population of known MW GCs. Their internal dynamical evolutionary stages will be described properly once their masses are estimated.
Predicting coastal cliff erosion using a Bayesian probabilistic model
Hapke, Cheryl J.; Plant, Nathaniel G.
2010-01-01
Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented by lowering and raising operators. The lowering and raising operators can be obtained from the relationship between the original Hamiltonian and the (super)potential. In this paper, SUSY quantum mechanics is used as a method to obtain the wave functions and energy levels of the modified Poschl-Teller potential. The wave functions and probability densities are plotted using the Delphi 7.0 programming language. Finally, the expectation value of a quantum-mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.
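The final step can also be done numerically: given a normalised probability density, expectation values follow by quadrature. The sketch below uses a sech-power form as an assumed stand-in for the modified Poschl-Teller ground state; it is not the paper's Delphi implementation:

import numpy as np

a, lam = 1.0, 2.0
x = np.linspace(-10, 10, 4001)
psi = np.cosh(a * x) ** (-lam)           # assumed bound-state form, unnormalised
dx = x[1] - x[0]
rho = psi**2 / (np.sum(psi**2) * dx)     # normalised probability density

x_mean = np.sum(x * rho) * dx            # <x>, zero by symmetry
x2_mean = np.sum(x**2 * rho) * dx        # <x^2>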
NASA Astrophysics Data System (ADS)
Tremblin, P.; Schneider, N.; Minier, V.; Didelon, P.; Hill, T.; Anderson, L. D.; Motte, F.; Zavagno, A.; André, Ph.; Arzoumanian, D.; Audit, E.; Benedettini, M.; Bontemps, S.; Csengeri, T.; Di Francesco, J.; Giannini, T.; Hennemann, M.; Nguyen Luong, Q.; Marston, A. P.; Peretto, N.; Rivera-Ingraham, A.; Russeil, D.; Rygl, K. L. J.; Spinoglio, L.; White, G. J.
2014-04-01
Aims: Ionization feedback should impact the probability distribution function (PDF) of the column density of cold dust around the ionized gas. We aim to quantify this effect and discuss its potential link to the core and initial mass function (CMF/IMF). Methods: We used Herschel column density maps of several regions observed within the HOBYS key program in a systematic way: M 16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We computed the PDFs in concentric disks around the main ionizing sources, determined their properties, and discuss the effect of ionization pressure on the distribution of the column density. Results: We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a "double-peak" or an enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. Such a double peak is not visible for all clouds associated with ionization fronts, but it depends on the relative importance of ionization pressure and turbulent ram pressure. A power-law tail is present for higher column densities, which are generally ascribed to the effect of gravity. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion that is able to disentangle triggered star formation from pre-existing star formation. Conclusions: In the context of the gravo-turbulent scenario for the origin of the CMF/IMF, the double-peaked or enlarged shape of the PDF may affect the formation of objects at both the low-mass and the high-mass ends of the CMF/IMF. In particular, a broader PDF is required by the gravo-turbulent scenario to fit the IMF properly with a reasonable initial Mach number for the molecular cloud. Since other physical processes (e.g., the equation of state and the variations among the core properties) have already been said to broaden the PDF, the relative importance of the different effects remains an open question. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
Temperature, Oxygen, and Soot-Volume-Fraction Measurements in a Turbulent C2H4-Fueled Jet Flame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kearney, Sean P.; Guildenbecher, Daniel Robert; Winters, Caroline
2015-09-01
We present a detailed set of measurements from a piloted, sooting, turbulent C2H4-fueled diffusion flame. Hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering (CARS) is used to monitor temperature and oxygen, while laser-induced incandescence (LII) is applied for imaging of the soot volume fraction in the challenging jet-flame environment at Reynolds number Re = 20,000. Single-laser-shot results are used to map the mean and rms statistics, as well as probability densities. LII data from the soot-growth region of the flame are used to benchmark the soot source term for one-dimensional turbulence (ODT) modeling of this turbulent flame. The ODT code is then used to predict temperature and oxygen fluctuations in the soot oxidation region higher in the flame.
An experimental investigation of the force network ensemble
NASA Astrophysics Data System (ADS)
Kollmer, Jonathan E.; Daniels, Karen E.
2017-06-01
We present an experiment in which a horizontal quasi-2D granular system with a fixed neighbor network is cyclically compressed and decompressed over 1000 cycles. We remove basal friction by floating the particles on a thin air cushion, so that particles only interact in-plane. As expected for a granular system, the applied load is not distributed uniformly, but is instead concentrated in force chains which form a network throughout the system. To visualize the structure of these networks, we use particles made from photoelastic material. The experimental setup and a new data-processing pipeline allow us to map out the evolution subject to the cyclic compressions. We characterize several statistical properties of the packing, including the probability density function of the contact force, and compare them with theoretical and numerical predictions from the force network ensemble theory.
Faraday rotation in the M87 radio/X-ray halo
NASA Technical Reports Server (NTRS)
Dennison, B.
1980-01-01
Comparison of polarization maps at various wavelengths demonstrates the existence of a large Faraday rotation uniform over the radio core of M87. Much of this rotation must be external to the core, lest it appear completely depolarized when the rotation is about 90 degrees. The Faraday rotation is shown to occur primarily in the surrounding radio/X-ray halo. Using the electron density inferred from X-ray observations, the magnetic field in the halo is found to be 2.5 microgauss. The deduced magnetic field strength permits an evaluation of the importance of Compton scattering of 3 K background photons by relativistic electrons in the radio halo. The emergent Compton-scattered spectrum is calculated, and its contribution to the observed X-ray flux is small, probably about a percent or so, while the rest is due to thermal bremsstrahlung.
Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua
2016-02-01
Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.
A Self-Contained Mapping Closure Approximation for Scalar Mixing
NASA Technical Reports Server (NTRS)
He, Guo-Wei; Zhang, Zi-Fan
2003-01-01
Scalar turbulence exhibits interplays of coherent structures and random fluctuations over a broad range of spatial and temporal scales. This feature necessitates a probabilistic description of the scalar dynamics, which can be achieved comprehensively by using probability density functions (PDFs). Therefore, the challenge is to obtain the scalar PDFs (Lundgren 1967; Dopazo 1979). Generally, the evolution of a scalar is governed by three dynamical processes: advection, diffusion and reaction. In a PDF approach (Pope 1985), the advection and reaction can be treated exactly but the effect of molecular diffusion has to be modeled. It has been shown (Pope 1985) that the effect of molecular diffusion can be expressed as conditional dissipation rates or conditional diffusions. The currently used models for the conditional dissipation rates and conditional diffusions (Pope 1991) have resisted deduction from the fundamental equations and are unable to yield satisfactory results for the basic test cases of decaying scalars in isotropic turbulence, although they have achieved some success in a variety of individual cases. The recently developed mapping closure approach (Pope 1991; Chen, Chen & Kraichnan 1989; Kraichnan 1990; Klimenko & Pope 2003) provides a deductive method for conditional dissipation rates and conditional diffusions, and the models obtained can successfully describe the shape relaxation of the scalar PDF from an initial double delta distribution to a Gaussian one. However, the mapping closure approach is not able to provide the rate at which the scalar evolves. The evolution rate has to be modeled. Therefore, the mapping closure approach is not closed. In this letter, we will address this problem.
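For reference, a standard form of the single-point scalar PDF evolution for a statistically homogeneous, decaying scalar (no mean gradient, no reaction) can be written either in terms of the conditional diffusion or the conditional scalar dissipation rate. This is the generic textbook form of the closure problem, quoted here for orientation rather than taken from the letter itself.

```latex
% Evolution of the scalar PDF P(psi,t) for a homogeneous, decaying scalar,
% with chi = 2*Gamma*|grad(phi)|^2 the scalar dissipation rate:
\frac{\partial P(\psi,t)}{\partial t}
  = -\frac{\partial}{\partial \psi}
    \Big[ \big\langle \Gamma \nabla^{2}\phi \,\big|\, \phi=\psi \big\rangle\, P(\psi,t) \Big]
  = -\frac{1}{2}\frac{\partial^{2}}{\partial \psi^{2}}
    \Big[ \big\langle \chi \,\big|\, \phi=\psi \big\rangle\, P(\psi,t) \Big]
```

The conditional expectations on the right-hand side are exactly the unclosed terms that mapping closures aim to supply.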
Yabe, Shiori; Hara, Takashi; Ueno, Mariko; Enoki, Hiroyuki; Kimura, Tatsuro; Nishimura, Satoru; Yasui, Yasuo; Ohsawa, Ryo; Iwata, Hiroyoshi
2014-01-01
For genetic studies and genomics-assisted breeding, particularly of minor crops, a genotyping system that does not require a priori genomic information is preferable. Here, we demonstrated the potential of a novel array-based genotyping system for the rapid construction of high-density linkage map and quantitative trait loci (QTL) mapping. By using the system, we successfully constructed an accurate, high-density linkage map for common buckwheat (Fagopyrum esculentum Moench); the map was composed of 756 loci and included 8,884 markers. The number of linkage groups converged to eight, which is the basic number of chromosomes in common buckwheat. The sizes of the linkage groups of the P1 and P2 maps were 773.8 and 800.4 cM, respectively. The average interval between adjacent loci was 2.13 cM. The linkage map constructed here will be useful for the analysis of other common buckwheat populations. We also performed QTL mapping for main stem length and detected four QTL. It took 37 days to process 178 samples from DNA extraction to genotyping, indicating the system enables genotyping of genome-wide markers for a few hundred buckwheat plants before the plants mature. The novel system will be useful for genomics-assisted breeding in minor crops without a priori genomic information. PMID:25914583
NASA Astrophysics Data System (ADS)
Telles, Eduardo; Thuan, Trinh X.; Izotov, Yuri I.; Carrasco, Eleazar R.
2014-01-01
Aims: We present an integral field spectroscopic study with the Gemini Multi-Object Spectrograph (GMOS) of the unusual blue compact dwarf (BCD) galaxy Mrk 996. Methods: We show through velocity and dispersion maps, emission-line intensity and ratio maps, and by a new technique of electron density limit imaging that the ionization properties of different regions in Mrk 996 are correlated with their kinematic properties. Results: From the maps, we can spatially distinguish a very dense high-ionization zone with broad lines in the nuclear region, and a less dense low-ionization zone with narrow lines in the circumnuclear region. Four kinematically distinct systems of lines are identified in the integrated spectrum of Mrk 996, suggesting stellar wind outflows from a population of Wolf-Rayet (WR) stars in the nuclear region, superposed on an underlying rotation pattern. From the intensities of the blue and red bumps, we derive a population of ~473 late nitrogen (WNL) stars and ~98 early carbon (WCE) stars in the nucleus of Mrk 996, resulting in a high N(WR)/N(O+WR) of 0.19. We derive, for the outer narrow-line region, an oxygen abundance 12 + log (O/H) = 7.94 ± 0.30 (~0.2 Z⊙) by using the direct Te method derived from the detected narrow [O iii]λ4363 line. The nucleus of Mrk 996 is, however, nitrogen-enhanced by a factor of ~20, in agreement with previous CLOUDY modeling. This nitrogen enhancement is probably due to nitrogen-enriched WR ejecta, but also to enhanced nitrogen line emission in a high-density environment. Although we have made use here of two new methods - principal component analysis (PCA) tomography and a method for mapping low- and high-density clouds - to analyze our data, new methodology is needed to further exploit the wealth of information provided by integral field spectroscopy. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil), and SECYT (Argentina). Reduced and calibrated data cubes are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/561/A64
Probabilistic self-organizing maps for continuous data.
Lopez-Rubio, Ezequiel
2010-10-01
The original self-organizing feature map did not define any probability distribution on the input space. However, the advantages of introducing probabilistic methodologies into self-organizing map models were soon evident. This has led to a wide range of proposals which reflect the current emergence of probabilistic approaches to computational intelligence. The underlying estimation theories behind them derive from two main lines of thought: the expectation maximization methodology and stochastic approximation methods. Here, we present a comprehensive view of the state of the art, with a unifying perspective of the involved theoretical frameworks. In particular, we examine the most commonly used continuous probability distributions, self-organization mechanisms, and learning schemes. Special emphasis is given to the connections among them and their relative advantages depending on the characteristics of the problem at hand. Furthermore, we evaluate their performance in two typical applications of self-organizing maps: classification and visualization.
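As a concrete illustration of the EM-style formulation reviewed above, the sketch below performs one batch update of a toy probabilistic self-organizing map with isotropic Gaussian units. The specific responsibility and neighborhood-smoothing rule is an assumption for illustration, not any particular model from the literature surveyed.

```python
import numpy as np

def batch_prob_som_step(X, W, sigma_units, grid, neighborhood_width):
    """One EM-flavoured batch update for a toy probabilistic SOM.
    X: (n, d) data; W: (k, d) unit means; sigma_units: (k,) isotropic stddevs;
    grid: (k, 2) coordinates of the units on the map lattice."""
    d = X.shape[1]
    # E-step: Gaussian responsibility of each unit for each sample
    sq_dist = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)          # (n, k)
    log_p = -0.5 * sq_dist / sigma_units[None, :] ** 2 - d * np.log(sigma_units)[None, :]
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # Self-organization: smooth responsibilities over the lattice neighborhood
    grid_d2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(-1)    # (k, k)
    H = np.exp(-grid_d2 / (2.0 * neighborhood_width ** 2))
    weights = resp @ H                                                # (n, k)
    # M-step: neighborhood-weighted means and isotropic variances
    norm = weights.sum(axis=0) + 1e-12
    W_new = (weights.T @ X) / norm[:, None]
    sq_err = ((X[:, None, :] - W_new[None, :, :]) ** 2).sum(-1)       # (n, k)
    var_new = (weights * sq_err).sum(axis=0) / (d * norm)
    return W_new, np.sqrt(var_new + 1e-12)

# Toy usage on random data with a 5 x 5 map lattice
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
gx, gy = np.meshgrid(np.arange(5), np.arange(5))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
W, sigmas = batch_prob_som_step(X, rng.normal(size=(25, 3)), np.ones(25), grid, 1.5)
```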
Westermeier, Christian; Fiebig, Matthias; Nickel, Bert
2013-10-25
Frequency-resolved scanning photoresponse microscopy of pentacene thin-film transistors is reported. The photoresponse pattern maps the in-plane distribution of trap states, superimposed with the level of trap filling adjusted by the gate voltage of the transistor. Local hotspots in the photoresponse map thus indicate areas of high trap densities within the pentacene thin film. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Identifying western yellow-billed cuckoo breeding habitat with a dual modelling approach
Johnson, Matthew J.; Hatten, James R.; Holmes, Jennifer A.; Shafroth, Patrick B.
2017-01-01
The western population of the yellow-billed cuckoo (Coccyzus americanus) was recently listed as threatened under the federal Endangered Species Act. Yellow-billed cuckoo conservation efforts require the identification of features and area requirements associated with high quality, riparian forest habitat at spatial scales that range from nest microhabitat to landscape, as well as lower-suitability areas that can be enhanced or restored. Spatially explicit models inform conservation efforts by increasing ecological understanding of a target species, especially at landscape scales. Previous yellow-billed cuckoo modelling efforts derived plant-community maps from aerial photography, an expensive and oftentimes inconsistent approach. Satellite models can remotely map vegetation features (e.g., vegetation density, heterogeneity in vegetation density or structure) across large areas with near perfect repeatability, but they usually cannot identify plant communities. We used aerial photos and satellite imagery, and a hierarchical spatial scale approach, to identify yellow-billed cuckoo breeding habitat along the Lower Colorado River and its tributaries. Aerial-photo and satellite models identified several key features associated with yellow-billed cuckoo breeding locations: (1) a 4.5 ha core area of dense cottonwood-willow vegetation, (2) a large native, heterogeneously dense forest (72 ha) around the core area, and (3) moderately rough topography. The odds of yellow-billed cuckoo occurrence decreased rapidly as the amount of tamarisk cover increased or when cottonwood-willow vegetation was limited. After updating the imagery and location data, we achieved model accuracies of 75–80% in the project area the following year. The two model types had very similar probability maps, largely predicting the same areas as high quality habitat. While each model provided unique information, a dual-modelling approach provided a more complete picture of yellow-billed cuckoo habitat requirements and will be useful for management and conservation activities.
Chao, Hongbo; Wang, Hao; Wang, Xiaodong; Guo, Liangxing; Gu, Jianwei; Zhao, Weiguo; Li, Baojun; Chen, Dengyan; Raboanatahiry, Nadia; Li, Maoteng
2017-04-10
High-density linkage maps can improve the precision of QTL localization. A high-density SNP-based linkage map containing 3207 markers covering 3072.7 cM of the Brassica napus genome was constructed in the KenC-8 × N53-2 (KNDH) population. A total of 67 and 38 QTLs for seed oil content (SOC) and seed protein content (SPC) were identified with an average confidence interval of 5.26 and 4.38 cM, which could explain up to 22.24% and 27.48% of the phenotypic variation, respectively. Thirty-eight associated genomic regions from BSA overlapped with and/or narrowed the SOC-QTLs, further confirming the QTL mapping results based on the high-density linkage map. Potential candidates related to acyl-lipid and seed storage underlying SOC and SPC, respectively, were identified and analyzed, among which six were checked and showed expression differences between the two parents during different embryonic developmental periods. A large primary carbohydrate pathway based on potential candidates underlying SOC- and SPC-QTLs, together with interaction networks based on potential candidates underlying SOC-QTLs, was constructed to dissect the complex mechanism in terms of metabolic and gene-regulatory features, respectively. Accurate QTL mapping and potential candidates identified based on the high-density linkage map and BSA analyses provide new insights into the complex genetic mechanism of oil and protein accumulation in the seeds of rapeseed.
A climatology of total ozone mapping spectrometer data using rotated principal component analysis
NASA Astrophysics Data System (ADS)
Eder, Brian K.; Leduc, Sharon K.; Sickles, Joseph E.
1999-02-01
The spatial and temporal variability of total column ozone (Ω) obtained from the total ozone mapping spectrometer (TOMS version 7.0) during the period 1980-1992 was examined through the use of a multivariate statistical technique called rotated principal component analysis. Utilization of Kaiser's varimax orthogonal rotation led to the identification of 14 mostly contiguous subregions that together accounted for more than 70% of the total Ω variance. Each subregion displayed statistically unique Ω characteristics that were further examined through time series and spectral density analyses, revealing significant periodicities on semiannual, annual, quasi-biennial, and longer term time frames. This analysis facilitated identification of the probable mechanisms responsible for the variability of Ω within the 14 homogeneous subregions. The mechanisms were either dynamical in nature (i.e., advection associated with baroclinic waves, the quasi-biennial oscillation, or El Niño-Southern Oscillation) or photochemical in nature (i.e., production of odd oxygen (O or O3) associated with the annual progression of the Sun). The analysis has also revealed that the influence of a data retrieval artifact, found in equatorial latitudes of version 6.0 of the TOMS data, has been reduced in version 7.0.
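A minimal sketch of the rotated-PCA procedure (principal components followed by Kaiser's varimax rotation) is given below. The data layout (time-by-grid-cell anomalies) and the retention of 14 components are assumptions mirroring the description above, not the study's actual processing chain.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a (p, k) loading matrix (Kaiser's criterion)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))))
        R = u @ vt
        if s.sum() - var < tol:
            break
        var = s.sum()
    return loadings @ R

def rotated_pca(data, n_components=14):
    """data: (n_times, n_cells) anomaly matrix; returns rotated spatial loadings.
    Intended for modest grid sizes, since the full covariance matrix is formed."""
    anomalies = data - data.mean(axis=0)
    cov = np.cov(anomalies, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:n_components]
    loadings = eigvec[:, order] * np.sqrt(eigval[order])   # component loadings
    return varimax(loadings)
```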
Zhang, Lu; Yang, Wanling; Ying, Dingge; Cherny, Stacey S; Hildebrandt, Friedhelm; Sham, Pak Chung; Lau, Yu Lung
2011-03-01
Homozygosity mapping has played an important role in detecting recessive mutations using families of consanguineous marriages. However, detection of regions that are identical and homozygous by descent (HBD) when family data are not available, or when relationships are unknown, is still a challenge. Making use of population data from high-density SNP genotyping may allow detection of HBD regions inherited from recent common founders in singleton patients without genealogy information. We report a novel algorithm that detects such regions by estimating the population haplotype frequencies (HF) for an entire homozygous region. We also developed a simulation method to evaluate the probability of HBD and linkage to disease for a homozygous region by examining the best regions in unaffected controls from the host population. The method can be applied to diseases of Mendelian inheritance but can also be extended to complex diseases to detect rare founder mutations that affect a very small number of patients using either multiplex families or sporadic cases. Testing of the method on both real cases (singleton affected) and simulated data demonstrated its superb sensitivity and robustness under genetic heterogeneity. © 2011 Wiley-Liss, Inc.
Community Detection in Signed Networks: the Role of Negative ties in Different Scales
Esmailian, Pouya; Jalili, Mahdi
2015-01-01
Extracting the community structure of complex network systems has many applications, from engineering to biology and the social sciences. Many algorithms exist to discover the community structure of networks. However, community detection has been significantly under-explored for networks with positive and negative links compared to unsigned ones. To fill this gap, we measured the quality of partitions by introducing a Map Equation for signed networks. It is based on the assumption that negative relations weaken positive flow from a node towards a community; thus, external (internal) negative ties increase the probability of staying inside (escaping from) a community. We further extended the Constant Potts Model, providing a map spectrum for signed networks. Accordingly, a partition is selected by balancing between abridgment and expatiation of a signed network. Most importantly, the multi-scale spectrum of signed networks revealed how informative negative ties are at different scales, and quantified the topological placement of negative ties between dense positive ones. Moreover, an inconsistency was found in the signed Modularity: as the number of negative ties increases, the density of positive ties is neglected more. These results shed light on the community structure of signed networks. PMID:26395815
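For comparison with the signed Map Equation discussed above, the sketch below computes one widely used signed-modularity score by splitting a signed adjacency matrix into its positive and negative parts. It is illustrative only and is not the authors' map spectrum.

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity for a symmetric, non-negative weighted adjacency matrix."""
    communities = np.asarray(communities)
    w = adj.sum()
    if w == 0:
        return 0.0
    k = adj.sum(axis=1)
    same = communities[:, None] == communities[None, :]
    return ((adj - np.outer(k, k) / w) * same).sum() / w

def signed_modularity(adj_signed, communities):
    """Combine positive- and negative-part modularities, weighted by their strengths."""
    pos = np.where(adj_signed > 0, adj_signed, 0.0)
    neg = np.where(adj_signed < 0, -adj_signed, 0.0)
    w_pos, w_neg = pos.sum(), neg.sum()
    if w_pos + w_neg == 0:
        return 0.0
    return (w_pos * modularity(pos, communities)
            - w_neg * modularity(neg, communities)) / (w_pos + w_neg)
```

The weighting of the two parts is what the abstract's noted inconsistency concerns: as negative ties multiply, the positive-tie structure carries less weight in the score.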
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems through simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10⁻⁵) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems offers a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
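For background, the max-log approximation that gives Max-log-MAP-based detection its name replaces the log-sum-exp in the exact bit log-likelihood ratio with a maximum over candidate sequences. The generic form below is quoted for orientation and uses notation not taken from the paper.

```latex
% Exact bit LLR and its max-log approximation, with S_{b=0}, S_{b=1} the sets of
% candidate symbol sequences and m(s) the corresponding path metrics:
L(b) = \ln \frac{\sum_{s \in S_{b=1}} e^{\,m(s)}}{\sum_{s \in S_{b=0}} e^{\,m(s)}}
     \;\approx\; \max_{s \in S_{b=1}} m(s) \;-\; \max_{s \in S_{b=0}} m(s)
```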
Connecting the Cosmic Star Formation Rate with the Local Star Formation
NASA Astrophysics Data System (ADS)
Gribel, Carolina; Miranda, Oswaldo D.; Williams Vilas-Boas, José
2017-11-01
We present a model that unifies the cosmic star formation rate (CSFR), obtained through the hierarchical structure formation scenario, with the (Galactic) local star formation rate (SFR). It is possible to use the SFR to generate a CSFR mapping through the density probability distribution functions commonly used to study the role of turbulence in the star-forming regions of the Galaxy. We obtain a consistent mapping from redshift z ~ 20 up to the present (z = 0). Our results show that the turbulence exhibits a dual character, providing high values for the star formation efficiency (⟨ε⟩ ~ 0.32) in the redshift interval z ~ 3.5-20 and reducing its value to ⟨ε⟩ = 0.021 at z = 0. The value of the Mach number (M_crit), from which ⟨ε⟩ rapidly decreases, is dependent on both the polytropic index (Γ) and the minimum density contrast of the gas. We also derive Larson's first law associated with the velocity dispersion (⟨V_rms⟩) in the local star formation regions. Our model shows good agreement with Larson's law in the ~10-50 pc range, providing typical temperatures T_0 ~ 10-80 K for the gas associated with star formation. As a consequence, dark matter halos of great mass could contain a number of halos of much smaller mass, and be able to form structures similar to globular clusters. Thus, Larson's law emerges as a result of the very formation of large-scale structures, which in turn would allow the formation of galactic systems, including our Galaxy.
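A minimal sketch of the lognormal-PDF ingredient of such a mapping is shown below: the star formation efficiency is taken as the mass fraction of a lognormal density PDF above a critical logarithmic density contrast. The turbulence forcing parameter b, the critical contrast, and the core factor are generic placeholder values, not the calibrated quantities of the model above.

```python
import numpy as np
from scipy.special import erfc

def lognormal_sigma(mach, b=0.4):
    """Standard deviation of s = ln(rho/rho0) for isothermal supersonic turbulence."""
    return np.sqrt(np.log(1.0 + b ** 2 * mach ** 2))

def efficiency(mach, s_crit, core_factor=0.5, b=0.4):
    """Core efficiency times the mass fraction of gas with s above s_crit.
    The mass-weighted lognormal in s has mean +sigma_s^2/2."""
    sigma_s = lognormal_sigma(mach, b)
    return 0.5 * core_factor * erfc((s_crit - 0.5 * sigma_s ** 2)
                                    / (np.sqrt(2.0) * sigma_s))

print(efficiency(mach=10.0, s_crit=np.log(50.0)))   # illustrative numbers only
```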
LOFAR 150-MHz observations of SS 433 and W 50
NASA Astrophysics Data System (ADS)
Broderick, J. W.; Fender, R. P.; Miller-Jones, J. C. A.; Trushkin, S. A.; Stewart, A. J.; Anderson, G. E.; Staley, T. D.; Blundell, K. M.; Pietka, M.; Markoff, S.; Rowlinson, A.; Swinbank, J. D.; van der Horst, A. J.; Bell, M. E.; Breton, R. P.; Carbone, D.; Corbel, S.; Eislöffel, J.; Falcke, H.; Grießmeier, J.-M.; Hessels, J. W. T.; Kondratiev, V. I.; Law, C. J.; Molenaar, G. J.; Serylak, M.; Stappers, B. W.; van Leeuwen, J.; Wijers, R. A. M. J.; Wijnands, R.; Wise, M. W.; Zarka, P.
2018-04-01
We present Low-Frequency Array (LOFAR) high-band data over the frequency range 115-189 MHz for the X-ray binary SS 433, obtained in an observing campaign from 2013 February to 2014 May. Our results include a deep, wide-field map, allowing a detailed view of the surrounding supernova remnant W 50 at low radio frequencies, as well as a light curve for SS 433 determined from shorter monitoring runs. The complex morphology of W 50 is in excellent agreement with previously published higher frequency maps; we find additional evidence for a spectral turnover in the eastern wing, potentially due to foreground free-free absorption. Furthermore, SS 433 is tentatively variable at 150 MHz, with both a debiased modulation index of 11 per cent and a χ² probability of a flat light curve of 8.2 × 10⁻³. By comparing the LOFAR flux densities with contemporaneous observations carried out at 4800 MHz with the RATAN-600 telescope, we suggest that an observed ~0.5-1 Jy rise in the 150-MHz flux density may correspond to sustained flaring activity over a period of approximately 6 months at 4800 MHz. However, the increase is too large to be explained with a standard synchrotron bubble model. We also detect a wealth of structure along the nearby Galactic plane, including the most complete detection to date of the radio shell of the candidate supernova remnant G 38.7-1.4. This further demonstrates the potential of supernova remnant studies with the current generation of low-frequency radio telescopes.
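The two variability statistics quoted above can be computed as in the sketch below (debiased modulation index and the chi-squared probability of a flat light curve). The implementation details are standard choices assumed here, not necessarily those of the LOFAR pipeline.

```python
import numpy as np
from scipy.stats import chi2

def debiased_modulation_index(flux, flux_err):
    """sqrt(max(sample variance - mean measurement variance, 0)) / mean flux."""
    excess = max(np.var(flux, ddof=1) - np.mean(flux_err ** 2), 0.0)
    return np.sqrt(excess) / np.mean(flux)

def flat_lightcurve_probability(flux, flux_err):
    """Probability of obtaining the observed chi^2 if the source is constant."""
    weights = 1.0 / flux_err ** 2
    mean_flux = np.sum(weights * flux) / np.sum(weights)
    chi2_stat = np.sum(((flux - mean_flux) / flux_err) ** 2)
    return chi2.sf(chi2_stat, df=len(flux) - 1)
```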
Clustering the Orion B giant molecular cloud based on its molecular emission
Bron, Emeric; Daudon, Chloé; Pety, Jérôme; Levrier, François; Gerin, Maryvonne; Gratier, Pierre; Orkisz, Jan H.; Guzman, Viviana; Bardeau, Sébastien; Goicoechea, Javier R.; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Sievers, Albrecht; Tremblin, Pascal
2017-01-01
Context: Previous attempts at segmenting molecular line maps of molecular clouds have focused on using position-position-velocity data cubes of a single molecular line to separate the spatial components of the cloud. In contrast, wide-field spectral imaging over a large spectral bandwidth in the (sub)mm domain now allows one to combine multiple molecular tracers to understand the different physical and chemical phases that constitute giant molecular clouds (GMCs). Aims: We aim at using multiple tracers (sensitive to different physical processes and conditions) to segment a molecular cloud into physically/chemically similar regions (rather than spatially connected components), thus disentangling the different physical/chemical phases present in the cloud. Methods: We use a machine learning clustering method, namely the Meanshift algorithm, to cluster pixels with similar molecular emission, ignoring spatial information. Clusters are defined around each maximum of the multidimensional probability density function (PDF) of the line integrated intensities. Simple radiative transfer models were used to interpret the astrophysical information uncovered by the clustering analysis. Results: A clustering analysis based only on the J = 1-0 lines of three isotopologues of CO proves sufficient to reveal distinct density/column density regimes (nH ~ 100 cm⁻³, ~500 cm⁻³, and >1000 cm⁻³), closely related to the usual definitions of diffuse, translucent and high-column-density regions. Adding two UV-sensitive tracers, the J = 1-0 line of HCO+ and the N = 1-0 line of CN, allows us to distinguish two clearly distinct chemical regimes, characteristic of UV-illuminated and UV-shielded gas. The UV-illuminated regime shows overbright HCO+ and CN emission, which we relate to a photochemical enrichment effect. We also find a tail of high CN/HCO+ intensity ratio in UV-illuminated regions. Finer distinctions in density classes (nH ~ 7 × 10³ cm⁻³ and ~4 × 10⁴ cm⁻³) for the densest regions are also identified, likely related to the higher critical density of the CN and HCO+ (1-0) lines. These distinctions are only possible because the high-density regions are spatially resolved. Conclusions: Molecules are versatile tracers of GMCs because their line intensities bear the signature of the physics and chemistry at play in the gas. The association of simultaneous multi-line, wide-field mapping and powerful machine learning methods such as the Meanshift clustering algorithm reveals how to decode the complex information available in these molecular tracers. PMID:29456256
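A minimal sketch of the per-pixel Meanshift segmentation strategy is given below, clustering log line intensities while ignoring spatial coordinates. The scikit-learn estimator, the log transform, and the bandwidth heuristic are assumptions for illustration rather than the authors' exact setup.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def cluster_line_maps(intensity_maps):
    """intensity_maps: dict of {line_name: 2D array of integrated intensities}.
    Returns a label map with the same image shape as the inputs."""
    names = sorted(intensity_maps)
    stack = np.stack([intensity_maps[n] for n in names], axis=-1)  # (ny, nx, nlines)
    X = stack.reshape(-1, stack.shape[-1])
    X = np.log10(np.clip(X, 1e-3, None))          # cluster in log-intensity space
    bandwidth = estimate_bandwidth(X, quantile=0.1, n_samples=2000)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(X)
    return labels.reshape(stack.shape[:2])
```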
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to handle heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
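The sketch below spells out the complex Wishart log-density and the E-step that produces posterior class probabilities for each pixel's sample covariance matrix, as used in a Wishart mixture. The fixed number of looks and known class covariances are simplifying assumptions for illustration, not the full EM scheme of the paper.

```python
import numpy as np
from scipy.special import gammaln

def log_complex_wishart(C, Sigma, L):
    """Log-pdf of a d x d sample covariance C under the complex Wishart model
    with L looks and class covariance Sigma."""
    d = C.shape[0]
    _, logdet_c = np.linalg.slogdet(C)
    _, logdet_s = np.linalg.slogdet(Sigma)
    log_gamma_d = 0.5 * d * (d - 1) * np.log(np.pi) + sum(
        gammaln(L - i) for i in range(d))
    return (L * d * np.log(L) + (L - d) * logdet_c
            - L * np.real(np.trace(np.linalg.solve(Sigma, C)))
            - L * logdet_s - log_gamma_d)

def e_step(covariances, class_sigmas, priors, L):
    """Posterior class probabilities (responsibilities) for each pixel."""
    n, k = len(covariances), len(class_sigmas)
    log_resp = np.empty((n, k))
    for i, C in enumerate(covariances):
        for j, Sigma in enumerate(class_sigmas):
            log_resp[i, j] = np.log(priors[j]) + log_complex_wishart(C, Sigma, L)
    log_resp -= log_resp.max(axis=1, keepdims=True)
    resp = np.exp(log_resp)
    return resp / resp.sum(axis=1, keepdims=True)
```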
Bled, F.; Royle, J. Andrew; Cam, E.
2011-01-01
Invasive species are regularly cited as the second greatest threat to biodiversity. To apply a relevant response to the potential consequences associated with invasions (e.g., emphasize management efforts to prevent new colonization or to eradicate the species in places where it has already settled), it is essential to understand invasion mechanisms and dynamics. Quantifying and understanding what influences rates of spatial spread is a key research area for invasion theory. In this paper, we develop a model to account for occupancy dynamics of an invasive species. Our model extends existing models to accommodate several elements of invasive processes; we chose the framework of hierarchical modeling to assess site occupancy status during an invasion. First, we explicitly accounted for spatial structure and how distance among sites and position relative to one another affect the invasion spread. In particular, we accounted for the possibility of directional propagation and provided a way of estimating the direction of this possible spread. Second, we considered the influence of local density on site occupancy. Third, we decided to split the colonization process into two subprocesses, initial colonization and recolonization, which may be ground-breaking because these subprocesses may exhibit different relationships with environmental variations (such as density variation) or colonization history (e.g., initial colonization might facilitate further colonization events). Finally, our model incorporates imperfect detection, which might otherwise be a source of substantial bias in estimating population parameters. We focused on the case of the Eurasian Collared-Dove (Streptopelia decaocto) and its invasion of the United States since its introduction in the early 1980s, using data from the North American BBS (Breeding Bird Survey). The Eurasian Collared-Dove is one of the most successful invasive species, at least among terrestrial vertebrates. Our model provided estimation of the spread direction consistent with empirical observations. Site persistence probability exhibits a quadratic response to density. We also succeeded in detecting differences in the relationship between density and initial colonization vs. recolonization probabilities. We provide a map of sites that may be colonized in the future as an example of possible practical application of our work. © 2011 by the Ecological Society of America.
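To make the model structure concrete, the sketch below forward-simulates a dynamic occupancy process that separates initial colonization from recolonization and adds imperfect detection. All probabilities and dimensions are illustrative placeholders, not estimates from the Breeding Bird Survey analysis.

```python
import numpy as np

def simulate_occupancy(n_sites=200, n_years=25, p_detect=0.7,
                       p_persist=0.8, p_first_col=0.05, p_recol=0.15, seed=1):
    """Simulate true occupancy and imperfect detections for an invading species."""
    rng = np.random.default_rng(seed)
    occupied = np.zeros((n_years, n_sites), dtype=bool)
    ever_colonized = np.zeros(n_sites, dtype=bool)
    occupied[0] = rng.random(n_sites) < 0.02            # initial introduction
    for t in range(1, n_years):
        ever_colonized |= occupied[t - 1]
        # previously colonized sites recolonize more readily than naive sites
        col_prob = np.where(ever_colonized, p_recol, p_first_col)
        gains = (~occupied[t - 1]) & (rng.random(n_sites) < col_prob)
        stays = occupied[t - 1] & (rng.random(n_sites) < p_persist)
        occupied[t] = gains | stays
    detections = occupied & (rng.random(occupied.shape) < p_detect)
    return occupied, detections
```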
Remanent magnetization and three-dimensional density model of the Kentucky anomaly region
NASA Technical Reports Server (NTRS)
1982-01-01
Existing software was modified to handle 3-D density and magnetization models of the Kentucky body and is being tested. Gravity and magnetic anomaly data sets are ready for use. A preliminary block model is under construction using the 1:1,000,000 maps. An x-y grid to overlay the 1:2,500,000 Albers maps and keyed to the 1:1,000,000 scale block models was created. Software was developed to generate a smoothed MAGSAT data set over this grid; this is to be input to an inversion program for generating the regional magnetization map. The regional scale 1:2,500,000 map mosaic is being digitized using previous magnetization models, the U.S. magnetic anomaly map, and regional tectonic maps as a guide.
Kim, Yusung; Tomé, Wolfgang A.
2010-01-01
Summary: Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an equivalent uniform dose (EUD) of 84 Gy to the entire PTV and selective boosting delivering an EUD of 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when voxel-based iso-TCP maps were employed, selective boosting exhibited a more uniform tumor control probability map than could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that, as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans. PMID:21151734
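A minimal sketch of a voxel-based TCP map is given below, using a generic logistic dose-response curve. The D50 and gamma50 values are placeholders, and the model is not the specific phenomenological formulation used in the paper.

```python
import numpy as np

def voxel_tcp_map(dose, d50=70.0, gamma50=2.0):
    """Voxel-wise tumour control probability for a dose array in Gy
    using a generic logistic model parameterized by D50 and gamma50."""
    dose = np.maximum(np.asarray(dose, dtype=float), 1e-6)
    return 1.0 / (1.0 + (d50 / dose) ** (4.0 * gamma50))

# Example: compare a boosted voxel with a cold-spot voxel
print(voxel_tcp_map(np.array([84.0, 60.0])))
```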
Structure and stability in TMC-1: Analysis of NH3 molecular line and Herschel continuum data
NASA Astrophysics Data System (ADS)
Fehér, O.; Tóth, L. V.; Ward-Thompson, D.; Kirk, J.; Kraus, A.; Pelkonen, V.-M.; Pintér, S.; Zahorecz, S.
2016-05-01
Aims: We examined the velocity, density, and temperature structure of Taurus molecular cloud-1 (TMC-1), a filamentary cloud in a nearby quiescent star-forming area, to understand its morphology and evolution. Methods: We observed high signal-to-noise (S/N), high-velocity-resolution NH3 (1,1) and (2,2) emission on an extended map. By fitting multiple hyperfine-split line profiles to the NH3 (1,1) spectra, we derived the velocity distribution of the line components and calculated gas parameters at several positions. Herschel SPIRE far-infrared continuum observations were reduced and used to calculate the physical parameters of the Planck Galactic Cold Clumps (PGCCs) in the region, including the two in TMC-1. The morphology of TMC-1 was investigated with several types of clustering methods in the parameter space consisting of position, velocity, and column density. Results: Our Herschel-based column density map shows a main ridge with two local maxima and a separated peak to the south-west. The H2 column densities and dust colour temperatures are in the range of 0.5-3.3 × 10²² cm⁻² and 10.5-12 K, respectively. The NH3 column densities and H2 volume densities are in the range of 2.8-14.2 × 10¹⁴ cm⁻² and 0.4-2.8 × 10⁴ cm⁻³. Kinetic temperatures are typically very low with a minimum of 9 K at the maximum NH3 and H2 column density region. The kinetic temperature maximum was found at the protostar IRAS 04381+2540 with a value of 13.7 K. The kinetic temperatures vary similarly to the colour temperatures in spite of the fact that densities are lower than the critical density for coupling between the gas and dust phase. The k-means clustering method separated four sub-filaments in TMC-1 with masses of 32.5, 19.6, 28.9, and 45.9 M⊙ and low turbulent velocity dispersion in the range of 0.13-0.2 km s⁻¹. Conclusions: The main ridge of TMC-1 is composed of four sub-filaments that are close to gravitational equilibrium. We label these TMC-1F1 through F4. The sub-filaments TMC-1F1, TMC-1F2, and TMC-1F4 are very elongated, dense, and cold. TMC-1F3 is a little less elongated and somewhat warmer, and probably heated by the Class I protostar, IRAS 04381+2540, which is embedded in it. TMC-1F3 is approximately 0.1 pc behind TMC-1F1. Because of its structure, TMC-1 is a good target to test filament evolution scenarios.
Local thermodynamic mapping for effective liquid density-functional theory
NASA Technical Reports Server (NTRS)
Kyrlidis, Agathagelos; Brown, Robert A.
1992-01-01
The structural-mapping approximation introduced by Lutsko and Baus (1990) in the generalized effective-liquid approximation is extended to include a local thermodynamic mapping based on a spatially dependent effective density for approximating the solid phase in terms of the uniform liquid. This latter approximation, called the local generalized effective-liquid approximation (LGELA), yields excellent predictions for the free energy of hard-sphere solids and for the conditions of coexistence of a hard-sphere fcc solid with a liquid. Moreover, the predicted free energy remains single-valued for calculations with more loosely packed crystalline structures, such as the diamond lattice. The spatial dependence of the weighted density makes the LGELA useful in the study of inhomogeneous solids.
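For orientation, the general weighted-density idea behind such effective-liquid mappings can be written schematically as below, with ψ_liq the excess free energy per particle of the uniform liquid and w a normalized weight function. This is the generic schematic form only, not the specific structural mapping that defines the LGELA.

```latex
% Generic weighted-density schematic: the excess free energy of the
% inhomogeneous solid is evaluated from the uniform-liquid excess free energy
% per particle at a spatially dependent effective density rho_hat(r).
F_{\mathrm{ex}}[\rho_s] \;\approx\; \int \mathrm{d}\mathbf{r}\, \rho_s(\mathbf{r})\,
  \psi_{\mathrm{liq}}\!\big(\hat\rho(\mathbf{r})\big),
\qquad
\hat\rho(\mathbf{r}) = \int \mathrm{d}\mathbf{r}'\,
  w\!\big(|\mathbf{r}-\mathbf{r}'|;\hat\rho\big)\, \rho_s(\mathbf{r}')
```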