Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2016-04-01
There is a large amount of functional genetic data available, which can be used to inform fine-mapping association studies (in diseases with well-characterised disease pathways). Single nucleotide polymorphism (SNP) prioritization via Bayes factors is attractive because prior information can inform the effect size or the prior probability of causal association. This approach requires the specification of the effect size. If the information needed to estimate a priori the probability density of the effect sizes for causal SNPs in a genomic region is inconsistent or unavailable, then specifying a prior variance for the effect sizes is challenging. We propose both an empirical method to estimate this prior variance and a coherent approach to using SNP-level functional data to inform the prior probability of causal association. Through simulation we show that, when ranking SNPs by our empirical Bayes factor in a fine-mapping study, the causal SNP rank is generally as high as or higher than the rank obtained using Bayes factors with other plausible values of the prior variance. Importantly, we also show that assigning SNP-specific prior probabilities of association based on expert prior functional knowledge of the disease mechanism can lead to improved causal SNP ranks compared with ranking using identical prior probabilities of association. We demonstrate our methods by applying them to the fine mapping of the CASP8 region of chromosome 2 using genotype data from the Collaborative Oncological Gene-Environment Study (COGS) Consortium. The data we analysed included approximately 46,000 breast cancer case and 43,000 healthy control samples. © 2016 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
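As a rough illustration of Bayes-factor-based SNP prioritization (not the authors' implementation), the sketch below ranks SNPs using Wakefield's approximate Bayes factor, which requires exactly the prior effect-size variance W discussed above; the summary statistics, the value of W, and the SNP-specific functional priors are hypothetical placeholders.

```python
import numpy as np

def wakefield_abf(beta_hat, se, W):
    """Wakefield's approximate Bayes factor for association (H1) versus no
    association (H0), given a per-SNP effect estimate, its standard error,
    and a prior variance W for the true effect size."""
    V = se ** 2
    z2 = (beta_hat / se) ** 2
    # ABF = sqrt(V / (V + W)) * exp(z^2 / 2 * W / (V + W))
    return np.sqrt(V / (V + W)) * np.exp(0.5 * z2 * W / (V + W))

# Hypothetical summary statistics for five SNPs in a fine-mapping region.
beta_hat = np.array([0.12, 0.05, 0.20, 0.01, 0.15])
se       = np.array([0.03, 0.03, 0.05, 0.03, 0.04])
W        = 0.04                                        # prior variance on the log odds ratio (assumed)
prior_pi = np.array([0.1, 0.1, 0.4, 0.1, 0.3])         # SNP-specific functional priors (assumed)

abf = wakefield_abf(beta_hat, se, W)
posterior_odds = abf * prior_pi / (1 - prior_pi)       # Bayes factor x prior odds
print("SNP ranking (best first):", np.argsort(-posterior_odds))
```

Ranking by posterior odds rather than by the Bayes factor alone is what allows functional knowledge to re-order SNPs whose evidence from the data is similar.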
An open source multivariate framework for n-tissue segmentation with evaluation on public data.
Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C
2011-12-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
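As a loose illustration of the core formulation (EM for a finite Gaussian mixture in which voxel-wise spatial prior probability maps multiply the class likelihood in the E-step), here is a toy 1-D sketch; it is not the Atropos/ITK implementation, and it omits MRF smoothing, non-parametric mixtures, and everything volumetric.

```python
import numpy as np

def em_segment(intensities, prior_maps, n_iter=25):
    """Toy EM tissue segmentation: `intensities` is a 1-D array of voxel
    intensities, `prior_maps` is (n_voxels, K) of spatial prior probabilities
    per class. Returns posterior class probabilities."""
    n, K = prior_maps.shape
    mu = np.quantile(intensities, np.linspace(0.1, 0.9, K))   # crude spread-out initialisation
    var = np.full(K, intensities.var())
    for _ in range(n_iter):
        # E-step: posterior ∝ spatial prior × Gaussian likelihood.
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = prior_maps * lik
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class means and variances from the posteriors.
        Nk = post.sum(axis=0)
        mu = (post * intensities[:, None]).sum(axis=0) / Nk
        var = (post * (intensities[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-6
    return post

# Synthetic 1-D example with three "tissues"; flat priors stand in for template maps.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(m, 5, 500) for m in (40, 90, 150)])
priors = np.full((x.size, 3), 1.0 / 3)
labels = em_segment(x, priors).argmax(axis=1)
```

Replacing the flat `priors` with template-derived probability maps is what turns this plain K-class mixture into a prior-guided segmentation.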
CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation
Wilke, Marko; Altaye, Mekibib; Holland, Scott K.
2017-01-01
Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348
HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps
NASA Astrophysics Data System (ADS)
Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.
2017-01-01
We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is developed using a probabilistic Bayesian framework that provides a natural framework in which to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate its basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent in maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey using a 24-μm catalogue as a positional prior, and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP) and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.
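To make the prior-positional idea concrete, here is a deliberately simplified sketch of the generative model behind this kind of de-blending: a confused map is a sum of beam-shaped profiles at known source positions, and fluxes are recovered under a non-negativity constraint. XID+ itself samples the full posterior with Stan; the non-negative least-squares step, the Gaussian beam, and all numbers below are stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_psf(shape, centre, fwhm):
    """Unit-peak circular Gaussian beam evaluated on a pixel grid."""
    y, x = np.indices(shape)
    sigma = fwhm / 2.355
    return np.exp(-((x - centre[0]) ** 2 + (y - centre[1]) ** 2) / (2 * sigma ** 2))

shape = (64, 64)
positions = [(20, 22), (25, 24), (45, 40)]        # known source positions (the positional prior)
true_fluxes = np.array([30.0, 12.0, 20.0])

# Design matrix: one beam-shaped column per known source.
A = np.column_stack([gaussian_psf(shape, p, fwhm=6.0).ravel() for p in positions])
rng = np.random.default_rng(1)
sky = A @ true_fluxes + rng.normal(0, 0.5, shape[0] * shape[1])   # simulated confused map

fluxes, _ = nnls(A, sky)   # non-negativity plays the role of the flux prior's lower bound
print("recovered fluxes:", np.round(fluxes, 1))
```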
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
XID+: Next generation XID development
NASA Astrophysics Data System (ADS)
Hurley, Peter
2017-04-01
XID+ is a prior-based source extraction tool which carries out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. It uses a probabilistic Bayesian framework that provides a natural framework in which to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.
Automated segmentation of dental CBCT image with prior-guided sequential random forests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment CBCT images. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both the mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both appearance features from CBCTs and context features from the initial probability maps to train the first layer of random forest classifier that can select discriminative features for segmentation. Based on the first layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of the mandible and maxilla by the authors' method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.
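The layered classifier idea ("auto-context": each layer sees the raw appearance plus the probability map produced by the previous layer) can be sketched with scikit-learn on synthetic 1-D data; the Haar-like CBCT features, the atlas-derived initial maps, and the mandible/maxilla labels of the actual method are replaced by toy stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
labels = (rng.random(n) < 0.5).astype(int)                  # 0 = background, 1 = bone (toy)
intensity = rng.normal(labels * 2.0, 1.5)                   # noisy appearance feature
prior_map = np.clip(labels + rng.normal(0, 0.4, n), 0, 1)   # stands in for the majority-voted atlas prior

layers, prob = [], prior_map.copy()
for layer in range(3):
    # Each layer sees the raw appearance plus the current probability map (context feature).
    X = np.column_stack([intensity, prob])
    clf = RandomForestClassifier(n_estimators=100, random_state=layer).fit(X, labels)
    prob = clf.predict_proba(X)[:, 1]                       # updated map feeds the next layer
    layers.append(clf)

print("final training accuracy:", ((prob > 0.5) == labels).mean())
```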
Improving experimental phases for strong reflections prior to density modification
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...
2013-09-20
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
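A toy 1-D analogue of the idea, adjusting only the phases of the strongest Fourier coefficients with a simple genetic-style search so that the skewness of the resulting density is maximised while starting from noisy "centroid" phases, is sketched below. It ignores crystallographic symmetry and 3-D structure factors, and it is not SISA; it only illustrates map skewness as an optimisation target.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# Toy 1-D "crystal": a density made of a few sharp positive peaks.
N = 256
true_density = np.zeros(N)
true_density[[40, 81, 123, 205]] = [3.0, 2.0, 4.0, 2.5]

F = np.fft.fft(true_density)
amps, true_phases = np.abs(F), np.angle(F)
exp_phases = true_phases + rng.normal(0, 1.0, N)                   # noisy "experimental" centroid phases
strong = np.array([i for i in np.argsort(-amps) if i != 0][:8])    # strongest reflections, F(0) excluded

def fitness(trial):
    """Skewness of the map obtained when the strong reflections take the trial phases."""
    phases = exp_phases.copy()
    phases[strong] = trial
    density = np.fft.ifft(amps * np.exp(1j * phases)).real         # take the real part, as in practice
    return skew(density)

# Genetic-style search: elitist truncation selection plus Gaussian mutation.
pop = rng.uniform(-np.pi, np.pi, (40, strong.size))
pop[0] = exp_phases[strong]                                        # seed with the centroid starting point
for _ in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(-scores)[:20]]                        # keep the fitter half
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.3, (20, strong.size))
    pop = np.vstack([parents, children])

print("skewness with centroid phases :", round(fitness(exp_phases[strong]), 3))
print("skewness after GA optimisation:", round(max(fitness(ind) for ind in pop), 3))
```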
Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen
2010-01-01
The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., segmentation of a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both the accuracy and robustness of serial image computing and, at the same time, produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. Experimental studies using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained when both priors are used. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399
Positive contraction mappings for classical and quantum Schrödinger systems
NASA Astrophysics Data System (ADS)
Georgiou, Tryphon T.; Pavon, Michele
2015-03-01
The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior. Jamison proved that the new law is obtained through a multiplicative functional transformation of the prior. This transformation is characterised by an automorphism on the space of endpoints probability measures, which has been studied by Fortet, Beurling, and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins by establishing solutions to Schrödinger systems for Markov chains. Our approach is based on the Hilbert metric and shows that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach, in a similar manner, the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map for the cases where the marginals are either uniform or pure states. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
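For the classical (Markov-chain) half of this problem, the fixed-point construction can be illustrated in a few lines: given a prior transition matrix and the two prescribed end-point marginals, iterate a Fortet/Sinkhorn-style scaling map whose fixed point (a contraction in the Hilbert metric) yields the bridge. The quantum/Kraus-map case treated in the paper is not attempted here, and the matrices below are arbitrary examples.

```python
import numpy as np

def schroedinger_bridge(Q, p0, p1, n_iter=200):
    """Discrete Schrödinger bridge: find diagonal scalings so that the joint
    law pi_ij ∝ phi0_i Q_ij phi1_j matches the marginals p0 and p1. The
    iteration converges to the fixed point of a contractive scaling map."""
    phi1 = np.ones_like(p1)
    for _ in range(n_iter):
        phi0 = p0 / (Q @ phi1)        # enforce the initial marginal
        phi1 = p1 / (Q.T @ phi0)      # enforce the final marginal
    return (phi0[:, None] * Q) * phi1[None, :]   # joint law over (start, end) states

Q = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])       # prior Markov transition matrix
p0 = np.array([0.5, 0.3, 0.2])
p1 = np.array([0.2, 0.2, 0.6])        # desired end-point marginals

joint = schroedinger_bridge(Q, p0, p1)
print(joint.sum(axis=1))              # ≈ p0
print(joint.sum(axis=0))              # ≈ p1
```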
NASA Astrophysics Data System (ADS)
Gloger, Oliver; Tönnies, Klaus; Bülow, Robin; Völzke, Henry
2017-07-01
To develop the first fully automated 3D spleen segmentation framework derived from T1-weighted magnetic resonance (MR) imaging data and to verify its performance for spleen delineation and volumetry. This approach considers the issue of low contrast between the spleen and adjacent tissue in non-contrast-enhanced MR images. Native T1-weighted MR volume data were acquired on a 1.5 T MR system in an epidemiological study. We analyzed random subsamples of MR examinations without pathologies to develop and verify the spleen segmentation framework. The framework is modularized to include different kinds of prior knowledge in the segmentation pipeline. Classification by support vector machines differentiates between five different shape types in computed foreground probability maps and recognizes characteristic spleen regions in axial slices of MR volume data. A spleen-shape space generated by training produces subject-specific prior shape knowledge that is then incorporated into a final 3D level set segmentation method. Individually adapted shape-driven forces, as well as image-driven forces resulting from refined foreground probability maps, steer the level set successfully to segment the spleen. The framework achieves promising segmentation results with mean Dice coefficients of nearly 0.91 and low volumetric mean errors of 6.3%. The presented spleen segmentation approach can delineate spleen tissue in native MR volume data. Several kinds of prior shape knowledge, including subject-specific 3D prior shape knowledge, can be used to guide segmentation processes and achieve promising results.
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.
NASA Astrophysics Data System (ADS)
Khodabakhshi, M.; Jafarpour, B.
2013-12-01
Characterization of complex geologic patterns that create preferential flow paths in certain reservoir systems requires higher-order geostatistical modeling techniques. Multipoint statistics (MPS) provides a flexible grid-based approach for simulating such complex geologic patterns from a conceptual prior model known as a training image (TI). In this approach, a stationary TI that encodes the higher-order spatial statistics of the expected geologic patterns is used to represent the shape and connectivity of the underlying lithofacies. While MPS is quite powerful for describing complex geologic facies connectivity, the nonlinear and complex relation between the flow data and facies distribution makes flow data conditioning quite challenging. We propose an adaptive technique for conditioning facies simulation from a prior TI to nonlinear flow data. Non-adaptive strategies for conditioning facies simulation to flow data can involve many forward flow model solutions and can be computationally very demanding. To improve the conditioning efficiency, we develop an adaptive sampling approach through a data feedback mechanism based on the sampling history. In this approach, after a short period of sampling burn-in time where unconditional samples are generated and passed through an acceptance/rejection test, an ensemble of accepted samples is identified and used to generate a facies probability map. This facies probability map contains the common features of the accepted samples and provides conditioning information about facies occurrence in each grid block, which is used to guide the conditional facies simulation process. As the sampling progresses, the initial probability map is updated according to the collective information about the facies distribution in the chain of accepted samples to increase the acceptance rate and efficiency of the conditioning. This conditioning process can be viewed as an optimization approach where each new sample is proposed based on the sampling history to improve the data mismatch objective function. We extend the application of this adaptive conditioning approach to the case where multiple training images are proposed to describe the geologic scenario in a given formation. We discuss the advantages and limitations of the proposed adaptive conditioning scheme and use numerical experiments from fluvial channel formations to demonstrate its applicability and performance compared to non-adaptive conditioning techniques.
Using Transformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions
NASA Astrophysics Data System (ADS)
Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.
2014-12-01
One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.
Linking sounds to meanings: infant statistical learning in a natural language.
Hay, Jessica F; Pelucchi, Bruna; Graf Estes, Katharine; Saffran, Jenny R
2011-09-01
The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants' subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants' prior experience with the distribution of sounds that make up words in natural languages. Copyright © 2011 Elsevier Inc. All rights reserved.
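The statistic at the heart of this work is straightforward to compute: forward and backward transitional probabilities between adjacent syllables in the familiarization stream. The toy stream below is made up (the study used Italian materials), but it shows how word-internal pairs receive high transitional probabilities while boundary-spanning pairs do not.

```python
from collections import Counter

# Toy familiarization stream of syllables (hypothetical, not the Italian stimuli).
stream = "bi du pe | go la tu | bi du pe | da ro pi | go la tu | bi du pe".replace("|", "").split()

pairs = list(zip(stream[:-1], stream[1:]))
pair_counts = Counter(pairs)
first_counts = Counter(p[0] for p in pairs)
second_counts = Counter(p[1] for p in pairs)

def forward_tp(x, y):
    """P(y follows | x) -- high inside words, lower across word boundaries."""
    return pair_counts[(x, y)] / first_counts[x]

def backward_tp(x, y):
    """P(x precedes | y)."""
    return pair_counts[(x, y)] / second_counts[y]

print(forward_tp("bi", "du"), backward_tp("bi", "du"))   # word-internal pair: TP = 1.0
print(forward_tp("pe", "go"), backward_tp("pe", "go"))   # spans a word boundary: TP = 0.5
```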
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo
2014-07-01
A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme that incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammography experts extracted regions of dense and fatty tissue on digital mammograms from an independent subset, which was used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution, which followed the energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.
Vehicle Detection for RCTA/ANS (Autonomous Navigation System)
NASA Technical Reports Server (NTRS)
Brennan, Shane; Bajracharya, Max; Matthies, Larry H.; Howard, Andrew B.
2012-01-01
Using a stereo camera pair, imagery is acquired and processed through the JPLV stereo processing pipeline. From this stereo data, large 3D blobs are found. These blobs are then described and classified by their shape to determine which are vehicles and which are not. Prior vehicle detection algorithms are either targeted to specific domains, such as following lead cars, or are intensity-based methods that involve learning typical vehicle appearances from a large corpus of training data. In order to detect vehicles, the JPL Vehicle Detection (JVD) algorithm goes through the following steps: 1. Take as input a left disparity image and left rectified image from JPLV stereo. 2. Project the disparity data onto a two-dimensional Cartesian map. 3. Perform some post-processing of the map built in the previous step in order to clean it up. 4. Take the processed map and find peaks. For each peak, grow it out into a map blob. These map blobs represent large, roughly vehicle-sized objects in the scene. 5. Take these map blobs and reject those that do not meet certain criteria. Build descriptors for the ones that remain. Pass these descriptors onto a classifier, which determines if the blob is a vehicle or not. The probability of detection is the probability that, if a vehicle is present in the image, visible, and un-occluded, it will be detected by the JVD algorithm. In order to estimate this probability, eight sequences were ground-truthed from the RCTA (Robotics Collaborative Technology Alliances) program, totaling over 4,000 frames with 15 unique vehicles. Since these vehicles were observed at varying ranges, one is able to find the probability of detection as a function of range. At the time of this reporting, the JVD algorithm was tuned to perform best at cars seen from the front, rear, or either side, and perform poorly on vehicles seen from oblique angles.
Zollanvari, Amin; Dougherty, Edward R
2016-12-01
In classification, prior knowledge is incorporated in a Bayesian framework by assuming that the feature-label distribution belongs to an uncertainty class of feature-label distributions governed by a prior distribution. A posterior distribution is then derived from the prior and the sample data. An optimal Bayesian classifier (OBC) minimizes the expected misclassification error relative to the posterior distribution. From an application perspective, prior construction is critical. The prior distribution is formed by mapping a set of mathematical relations among the features and labels, the prior knowledge, into a distribution governing the probability mass across the uncertainty class. In this paper, we consider prior knowledge in the form of stochastic differential equations (SDEs). We consider a vector SDE in integral form involving a drift vector and dispersion matrix. Having constructed the prior, we develop the optimal Bayesian classifier between two models and examine, via synthetic experiments, the effects of uncertainty in the drift vector and dispersion matrix. We apply the theory to a set of SDEs for the purpose of differentiating the evolutionary history between two species.
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel as the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.
Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga
2015-10-01
The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
Esfahani, Mohammad Shahrokh; Dougherty, Edward R
2015-01-01
Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilization of prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are neither complete nor regulatory, and have no timing associated with them, they are capable of constraining the set of possible models representing the underlying interactions between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways into prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability on a Multinomial distribution. Because the Dirichlet distribution is the conjugate prior for the Multinomial distribution, we propose optimization paradigms to estimate its parameters in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: the mammalian cell cycle and a p53 pathway model.
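The conjugacy being exploited can be shown in a couple of lines: whatever constraints the pathway motifs impose end up as Dirichlet pseudo-counts, which combine with observed state counts in closed form. The hyperparameters and counts below are invented for illustration, not derived from a real pathway.

```python
import numpy as np

# Multinomial over four joint gene-activity states; pathway knowledge is encoded as
# Dirichlet pseudo-counts (hypothetical values standing in for an optimized prior).
alpha_prior = np.array([8.0, 2.0, 2.0, 8.0])     # prior favours states 0 and 3
counts = np.array([5, 1, 0, 4])                  # small training sample (the hard case)

alpha_post = alpha_prior + counts                # conjugate Dirichlet posterior
posterior_mean = alpha_post / alpha_post.sum()
mle = counts / counts.sum()                      # what we would use without the prior

print("MLE from 10 samples:        ", np.round(mle, 2))
print("posterior mean (with prior):", np.round(posterior_mean, 2))
```

With only ten samples the maximum-likelihood estimate assigns zero probability to an unobserved state, whereas the prior-informed posterior keeps it plausible, which is exactly the small-sample benefit the abstract describes.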
NASA Astrophysics Data System (ADS)
Dietterich, H. R.; Stelten, M. E.; Downs, D. T.; Champion, D. E.
2017-12-01
Harrat Rahat is a predominantly mafic, 20,000 km2 volcanic field in western Saudi Arabia with an elongate volcanic axis extending 310 km north-south. Prior mapping suggests that the youngest eruptions were concentrated in northernmost Harrat Rahat, where our new geologic mapping and geochronology reveal >300 eruptive vents with ages ranging from 1.2 Ma to a historic eruption in 1256 CE. Eruption compositions and styles vary spatially and temporally within the volcanic field, where extensive alkali basaltic lavas dominate, but more evolved compositions erupted episodically as clusters of trachytic domes and small-volume pyroclastic flows. Analysis of vent locations, compositions, and eruption styles shows the evolution of the volcanic field and allows assessment of the spatio-temporal probabilities of vent opening and eruption styles. We link individual vents and fissures to eruptions and their deposits using field relations, petrography, geochemistry, paleomagnetism, and 40Ar/39Ar and 36Cl geochronology. Eruption volumes and deposit extents are derived from geologic mapping and topographic analysis. Spatial density analysis with kernel density estimation captures vent densities of up to 0.2 %/km2 along the north-south running volcanic axis, decaying quickly away to the east but reaching a second, lower high along a secondary axis to the west. Temporal trends show slight younging of mafic eruption ages to the north in the past 300 ka, as well as clustered eruptions of trachytes over the past 150 ka. Vent locations, timing, and composition are integrated through spatial probability weighted by eruption age for each compositional range to produce spatio-temporal models of vent opening probability. These show that the next mafic eruption is most probable within the north end of the main (eastern) volcanic axis, whereas more evolved compositions are most likely to erupt within the trachytic centers further to the south. These vent opening probabilities, combined with corresponding eruption properties, can be used as the basis for lava flow and tephra fall hazard maps.
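The spatial-density step described here can be sketched with a weighted kernel density estimate over vent coordinates, assuming a SciPy version whose gaussian_kde accepts weights; the coordinates, eruption ages, recency weighting, and implicit bandwidth below are hypothetical stand-ins, whereas the actual analysis also conditions on composition and uses field-derived data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical vent locations (km) and eruption ages (ka) along a N-S volcanic axis.
n = 300
x = rng.normal(0, 3, n)                  # east-west scatter about the axis
y = rng.uniform(0, 60, n)                # position along the axis
age = rng.uniform(1, 1200, n)            # eruption ages in ka

weights = 1.0 / age                      # crude recency weighting (assumed, not the paper's scheme)
kde = gaussian_kde(np.vstack([x, y]), weights=weights)

# Evaluate the vent-opening probability surface on a grid; the KDE integrates to 1
# over the plane, so multiplying by 100 gives a density in %/km^2.
gx, gy = np.meshgrid(np.linspace(-10, 10, 100), np.linspace(0, 60, 150))
density_pct_per_km2 = 100 * kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print("peak vent-opening density: %.3f %%/km^2" % density_pct_per_km2.max())
```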
2009-01-01
Background Structural Magnetic Resonance Imaging (sMRI) of the brain is employed in the assessment of a wide range of neuropsychiatric disorders. In order to improve statistical power in such studies it is desirable to pool scanning resources from multiple centres. The CaliBrain project was designed to provide an assessment of scanner differences at three centres in Scotland, and to assess the practicality of pooling scans from multiple centres. Methods We scanned healthy subjects twice on each of the 3 scanners in the CaliBrain project with T1-weighted sequences. The tissue classifier supplied within the Statistical Parametric Mapping (SPM5) application was used to map the grey and white tissue for each scan. We were thus able to assess within-scanner variability and between-scanner differences. We have sought to correct for between-scanner differences by adjusting the probability mappings of tissue occupancy (tissue priors) used in SPM5 for tissue classification. The adjustment procedure resulted in separate sets of tissue priors being developed for each scanner, and we refer to these as scanner-specific priors. Results Voxel Based Morphometry (VBM) analyses and metric tests indicated that the use of scanner-specific priors reduced tissue classification differences between scanners. However, the metric results also demonstrated that the between-scanner differences were not reduced to the level of within-scanner variability, the ideal for scanner harmonisation. Conclusion Our results indicate that the development of scanner-specific priors for SPM can assist in the pooling of scan resources from different research centres. This can facilitate improvements in the statistical power of quantitative brain imaging studies. PMID:19445668
A Probabilistic Strategy for Understanding Action Selection
Kim, Byounghoon; Basso, Michele A.
2010-01-01
Brain regions involved in transforming sensory signals into movement commands are the likely sites where decisions are formed. Once formed, a decision must be read out from the activity of populations of neurons to produce a choice of action. How this occurs remains unresolved. We recorded from four superior colliculus (SC) neurons simultaneously while monkeys performed a target selection task. We implemented three models to gain insight into the computational principles underlying population coding of action selection. We compared the population vector average (PVA), winner-takes-all (WTA) and a Bayesian model, the maximum a posteriori estimate (MAP), to determine which predicted choices most often. The probabilistic model predicted more trials correctly than both the WTA and the PVA. The MAP model predicted 81.88% of trials correctly, whereas WTA predicted 71.11% and PVA/OLE predicted the fewest trials, at 55.71 and 69.47%. Recovering MAP estimates using simulated, non-uniform priors that correlated with monkeys' choice performance improved the accuracy of the model by 2.88%. A dynamic analysis revealed that the MAP estimate evolved over time and the posterior probability of the saccade choice reached a maximum at the time of the saccade. MAP estimates also scaled with choice performance accuracy. Although there was overlap in the prediction abilities of all the models, we conclude that movement choice from populations of neurons may be best understood by considering frameworks based on probability. PMID:20147560
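A self-contained toy version of the comparison, simulated Poisson spike counts from a handful of direction-tuned cells decoded by winner-takes-all, population vector average, and a MAP rule with a flat prior over a discrete target set, is sketched below. The tuning curves, counting window, and cell count are invented and are not fitted to SC data.

```python
import numpy as np

rng = np.random.default_rng(0)
prefs = np.deg2rad([0, 90, 180, 270])        # preferred directions of four model neurons
targets = prefs.copy()                       # the four possible saccade choices
dt = 0.1                                     # short counting window (s)

def rates(theta, gain=20.0, base=2.0):
    """Rectified-cosine tuning curves (an idealisation, not fitted SC data)."""
    return base + gain * np.clip(np.cos(theta - prefs), 0, None)

def decode(spikes):
    wta = prefs[np.argmax(spikes)]                                   # winner-takes-all
    vec = np.sum(spikes * np.exp(1j * prefs))                        # population vector
    pva = targets[np.argmin(np.abs(np.angle(vec * np.exp(-1j * targets))))]
    loglik = [np.sum(spikes * np.log(rates(t) * dt) - rates(t) * dt) for t in targets]
    map_ = targets[np.argmax(loglik)]                                # MAP with a flat prior
    return wta, pva, map_

correct, n_trials = np.zeros(3), 2000
for _ in range(n_trials):
    true = rng.choice(targets)
    spikes = rng.poisson(rates(true) * dt)
    correct += [est == true for est in decode(spikes)]

print("accuracy WTA / PVA / MAP:", np.round(correct / n_trials, 3))
```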
Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.
Fennell, John; Baddeley, Roland
2012-10-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to two of the observed effects: overweighting of low probabilities and underweighting of high probabilities. We then investigate two plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Multivariate statistical model for 3D image segmentation with application to medical images.
John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O
2003-12-01
In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a priori probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under the conditions of dynamic production. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, Bayesian networks of the factors affecting quality are built based on prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the increase in computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and supporting intelligent decisions for solving precision problems. The integration of big data analytics and the Bayesian network method offers a whole new perspective on manufacturing quality control.
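To make the map/reduce split concrete, here is a minimal in-process sketch: the "map" step computes local sufficient statistics (counts) on data chunks in parallel, the "reduce" step merges them, and a conditional probability table is then formed with a Dirichlet prior. Python's multiprocessing stands in for Hadoop, and the "machine setting"/"quality" variables are invented.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool
import random

def map_counts(chunk):
    """Map step: local sufficient statistics, i.e. (parent_state, quality) counts."""
    return Counter((parent, quality) for parent, quality in chunk)

def reduce_counts(a, b):
    """Reduce step: merge partial counts."""
    a.update(b)
    return a

if __name__ == "__main__":
    random.seed(0)
    # Synthetic records: (machine_setting, quality_ok) pairs standing in for shop-floor data.
    data = [(s, random.random() < (0.9 if s == "nominal" else 0.6))
            for s in random.choices(["nominal", "drifted"], k=100_000)]
    chunks = [data[i::8] for i in range(8)]

    with Pool(4) as pool:
        partial = pool.map(map_counts, chunks)            # parallel "map"
    counts = reduce(reduce_counts, partial, Counter())    # "reduce"

    # Conditional probability table P(quality_ok | setting) with a Dirichlet(1, 1) prior.
    for setting in ("nominal", "drifted"):
        ok, bad = counts[(setting, True)] + 1, counts[(setting, False)] + 1
        print(setting, "-> P(ok) =", round(ok / (ok + bad), 3))
```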
Automatic detection of anomalies in screening mammograms
2013-01-01
Background: Diagnostic performance in breast screening programs may be influenced by the prior probability of disease. Since breast cancer incidence is roughly half a percent in the general population there is a large probability that the screening exam will be normal. That factor may contribute to false negatives. Screening programs typically exhibit about 83% sensitivity and 91% specificity. This investigation was undertaken to determine if a system could be developed to pre-sort screening images into normal and suspicious bins based on their likelihood of containing disease. Wavelets were investigated as a method to parse the image data, potentially removing confounding information. The development of a classification system based on features extracted from wavelet-transformed mammograms is reported. Methods: In the multi-step procedure images were processed using 2D discrete wavelet transforms to create a set of maps at different size scales. Next, statistical features were computed from each map, and a subset of these features was the input for a concerted-effort set of naïve Bayesian classifiers. The classifier network was constructed to calculate the probability that the parent mammography image contained an abnormality. The abnormalities were not identified, nor were they regionalized. The algorithm was tested on two publicly available databases: the Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society's database (MIAS). These databases contain radiologist-verified images and feature common abnormalities including spiculations, masses, geometric deformations and fibroid tissues. Results: The classifier-network designs tested achieved sensitivities and specificities sufficient to be potentially useful in a clinical setting. This first series of tests identified networks with 100% sensitivity and up to 79% specificity for abnormalities. This performance significantly exceeds the mean sensitivity reported in the literature for the unaided human expert. Conclusions: Classifiers based on wavelet-derived features proved to be highly sensitive to a range of pathologies; as a result, Type II errors were nearly eliminated. Pre-sorting the images changed the prior probability in the sorted database from 37% to 74%. PMID:24330643
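As a rough sketch of the kind of pipeline described (2D discrete wavelet decomposition, per-subband statistical features, naive Bayesian classification), the snippet below uses PyWavelets and scikit-learn on stand-in data; the wavelet, decomposition level and feature set are placeholders rather than the study's actual choices.

```python
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def wavelet_features(image, wavelet="db2", level=3):
    """Simple statistics of the detail subbands of a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:                    # (horizontal, vertical, diagonal) per scale
        for arr in detail:
            feats += [arr.mean(), arr.std(), np.abs(arr).max()]
    return np.array(feats)

# stand-in data: 20 random 128x128 "mammograms", half labeled suspicious
rng = np.random.default_rng(0)
images = [rng.normal(size=(128, 128)) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)           # 0 = normal, 1 = suspicious

X = np.vstack([wavelet_features(img) for img in images])
clf = GaussianNB().fit(X, labels)
p_abnormal = clf.predict_proba(X)[:, 1]          # probability an image contains an abnormality
```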
Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing
2017-03-01
Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
The planetary spatial data infrastructure for the OSIRIS-REx mission
NASA Astrophysics Data System (ADS)
DellaGiustina, D. N.; Selznick, S.; Nolan, M. C.; Enos, H. L.; Lauretta, D. S.
2017-12-01
The primary objective of the Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer (OSIRIS-REx) mission is to return a pristine sample of carbonaceous material from primitive asteroid (101955) Bennu. Understanding the geospatial context of Bennu is critical to choosing a sample site and also to linking the nature of the sample to the global properties of Bennu and the broader asteroid population. We established a planetary spatial data infrastructure (PSDI) to support the primary objective of OSIRIS-REx. OSIRIS-REx is unique among planetary missions in that all remote sensing is performed to support the sample return objective. Prior to sampling, OSIRIS-REx will survey Bennu for nearly two years to select and document the most valuable primary and backup sample sites. During this period, the mission will combine coordinated observations from five science instruments into four thematic maps: deliverability, safety, sampleability, and scientific value. The deliverability map assesses the probability that the flight dynamics team can deliver the spacecraft to the desired location. The safety map indicates the probability that physical hazards are present at the sample site. The sampleability map quantifies the probability that a sample can be successfully collected from the surface. Finally, the scientific value map shows the probability that the collected sample contains organics and volatiles and also places the sample site in a definitive geological context relative to Bennu's history. The OSIRIS-REx Science Processing and Operations Center (SPOC) serves as the operational PSDI for the mission. The SPOC is tasked with intake of all data from the spacecraft and other ground sources and with assimilating these data into a single comprehensive system for processing and presentation. The SPOC centralizes all geographic data of Bennu in a relational database and ensures that standardization and provenance are maintained throughout proximity operations. The SPOC is a live system that handles inputs from spacecraft and science instrument telemetry and from science data producers. It includes multiple levels of validation, both automated and manual, to process all data in a robust and reliable manner and eventually deliver it to the NASA Planetary Data System for archive.
Random walks with shape prior for cochlea segmentation in ex vivo μCT.
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel
2016-09-01
Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. The random walks algorithm is performed iteratively, and the prior mask is aligned in every iteration. We tested the proposed approach in ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy due to the probability density model constituted by the region term and the shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term into random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.
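scikit-image ships a generic random walker segmenter that can serve as a baseline sketch of the approach; the distance-based shape prior and confidence-map weighting described above are not included here and would have to be folded into the seeds or the resulting probability maps. All sizes and seed positions below are stand-ins.

```python
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))       # stand-in for an ex vivo uCT volume
seeds = np.zeros_like(volume, dtype=int)     # 0 = unlabeled
seeds[16, 16, 16] = 1                        # cochlea seed (label 1)
seeds[0, 0, 0] = 2                           # background seed (label 2)

# return_full_prob yields per-class probability maps; this is where a shape prior
# or confidence map could be blended in before thresholding.
prob = random_walker(volume, seeds, beta=130, return_full_prob=True)
cochlea_mask = prob[0] > 0.5                 # probabilities for label 1
```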
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Cost efficient environmental survey paths for detecting continuous tracer discharges
NASA Astrophysics Data System (ADS)
Alendal, G.
2017-07-01
Designing monitoring programs for detecting potential tracer discharges from unknown locations is challenging. The high variability of the environment may camouflage the anticipated anisotropic signal from a discharge, and there are a number of discharge scenarios. Monitoring operations may also be costly, constraining the number of measurements taken. By assuming that a discharge is active, and given a prior belief about the most likely seep location, a method that uses Bayes' theorem combined with discharge footprint predictions is used to update the probability map. Measurement locations with the highest reduction in the overall probability that a discharge is active can be identified. The relative cost between relocating and taking measurements can be taken into account. Three different strategies are suggested to enable cost-efficient paths for autonomous vessels.
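A minimal sketch of the Bayes' theorem update described: each cell of a prior probability map over candidate seep locations is updated after a measurement that fails to detect the tracer, using an assumed per-cell detection probability (the discharge footprint) for the current measurement location. Grid size, footprint and probabilities are illustrative.

```python
import numpy as np

def update_seep_map(prior, detect_prob, detected):
    """prior: grid of P(seep at cell); detect_prob: grid of P(detection | seep at cell)
    for the current measurement location; detected: bool measurement outcome."""
    like = detect_prob if detected else (1.0 - detect_prob)
    post = like * prior
    return post / post.sum()                 # renormalize, assuming one active discharge

prior = np.full((10, 10), 1 / 100)                         # uniform prior over a 10x10 grid
footprint = np.zeros((10, 10)); footprint[4:7, 4:7] = 0.8  # sensor "sees" a 3x3 patch
posterior = update_seep_map(prior, footprint, detected=False)
# cells inside the footprint lose probability; cells outside gain it
```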
Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev
2016-01-01
The proposed framework casts the noise removal problem in a variational setting. The framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density function: Gaussian, Rayleigh or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation based priors, which include a total variation based prior, an anisotropic diffusion based prior, and a complex diffusion (CD) based prior. A regularization parameter is used to balance the trade-off between the data fidelity term and the prior. A finite difference scheme is used for discretization of the proposed method. The performance analysis and comparative study of the proposed method with other standard methods is presented for the BrainWeb dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structural similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with the CD based prior performs better than the other priors in consideration.
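For the total variation prior, a standard variational denoiser is available in scikit-image; a hedged one-liner is shown below, where the weight argument plays the role of the regularization parameter balancing the data fidelity term against the prior. Note that this routine assumes Gaussian noise rather than the Rayleigh or Rician models discussed above.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

noisy_mri = np.random.default_rng(0).normal(size=(64, 64))   # stand-in for a noisy MR slice
# larger weight favors the TV prior (smoother result); smaller weight favors data fidelity
denoised = denoise_tv_chambolle(noisy_mri, weight=0.1)
```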
Lee, Won June; Kim, Young Kook; Jeoung, Jin Wook; Park, Ki Ho
2017-12-01
To determine the usefulness of swept-source optical coherence tomography (SS-OCT) probability maps in detecting locations with significant reduction in visual field (VF) sensitivity, or in predicting future VF changes, in patients with classically defined preperimetric glaucoma (PPG). In this longitudinal study, 43 eyes of 43 PPG patients followed up every 6 months for at least 2 years were analyzed. The patients underwent wide-field SS-OCT scanning and standard automated perimetry (SAP) at the time of enrollment. With this wide-scan protocol, probability maps originating from the corresponding thickness map and overlapping with SAP VF test points could be generated. We evaluated the vulnerable VF points with SS-OCT probability maps as well as the prevalence of locations with significant VF reduction or subsequent VF changes observed in the corresponding damaged areas of the probability maps. The vulnerable VF points were shown in superior and inferior arcuate patterns near the central fixation. In 19 of 43 PPG eyes (44.2%), significant reduction in baseline VF was detected within the areas of structural change on the SS-OCT probability maps. In 16 of 43 PPG eyes (37.2%), subsequent VF changes within the areas of SS-OCT probability map change were observed over the course of the follow-up. Structural changes on SS-OCT probability maps could detect or predict VF changes on SAP in a considerable number of PPG eyes. Careful comparison of probability maps with SAP results could be useful in diagnosing and monitoring PPG patients in the clinical setting.
Bibliography of terrestrial impact structures
NASA Technical Reports Server (NTRS)
Grolier, M. J.
1985-01-01
This bibliography lists 105 terrestrial impact structures, of which 12 are proven structures, that is, structures associated with meteorites, and 93 are probable. Of the 93 probable structures, 18 are known to contain rocks with meteoritic components or to be enriched in meteoritic signature elements, both of which enhance their probability of having originated by impact. Many of the structures investigated in the USSR to date are subsurface features that are completely or partly buried by sedimentary rocks. At least 16 buried impact structures have already been identified in North America and Europe. No proven or probable submarine impact structure rising above the ocean floor is presently known; none has been found in Antarctica or Greenland. An attempt has been made to cite for each impact structure all literature published prior to mid-1983. The structures are presented in alphabetical order by continent, and their geographic distribution is indicated on a sketch map of each continent in which they occur. They are also listed in tables in (1) alphabetical order, (2) order of increasing latitude, (3) order of decreasing diameter, and (4) order of increasing geologic age.
NASA Astrophysics Data System (ADS)
Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md
2015-12-01
Students' understanding of probability concepts has been investigated from various perspectives. Competency, on the other hand, is often measured separately in the form of a test structure. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW, volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle, where students provided their responses through an online quiz. The study shows that the majority of the students perceived little understanding of conditional and independent events prior to learning about them but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is indication of some increase in learning and knowledge about some probability concepts at the end of the two weeks of lessons on probability.
NASA Astrophysics Data System (ADS)
van Schie, Marcel A.; Steenbergen, Peter; Viet Dinh, Cuong; Ghobadi, Ghazaleh; van Houdt, Petra J.; Pos, Floris J.; Heijmink, Stijn W. T. J. P.; van der Poel, Henk G.; Renisch, Steffen; Vik, Torbjørn; van der Heide, Uulke A.
2017-07-01
Dose painting by numbers (DPBN) refers to a voxel-wise prescription of radiation dose modelled from functional image characteristics, in contrast to dose painting by contours, which requires delineations to define the target for dose escalation. The direct relation between functional imaging characteristics and DPBN implies that random variations in the images may propagate into the dose distribution. The stability of MR-only prostate cancer treatment planning based on DPBN with respect to these variations is as yet unknown. We conducted a test-retest study to investigate the stability of DPBN for prostate cancer in a semi-automated MR-only treatment planning workflow. Twelve patients received a multiparametric MRI on two separate days prior to prostatectomy. The tumor probability (TP) within the prostate was derived from image features with a logistic regression model. Dose mapping functions were applied to acquire a DPBN prescription map that served to generate an intensity modulated radiation therapy (IMRT) treatment plan. Dose calculations were done on a pseudo-CT derived from the MRI. The TP and DPBN maps and the IMRT dose distribution were compared between both MRI sessions, using the intraclass correlation coefficient (ICC) to quantify the repeatability of the planning pipeline. The quality of each treatment plan was measured with a quality factor (QF). Median ICC values for the TP map, the DPBN map and the IMRT dose distribution were 0.82, 0.82 and 0.88, respectively, for linear dose mapping, and 0.82, 0.84 and 0.94 for square root dose mapping. A median QF of 3.4% was found among all treatment plans. We demonstrated the stability of DPBN radiotherapy treatment planning in prostate cancer, with excellent overall repeatability and acceptable treatment plan quality. Using validated tumor probability modelling and simple dose mapping techniques, it was shown that consistent treatment plans were obtained despite day-to-day variations in the imaging data.
Wilhelms, D.E.; Davis, D.E.
1971-01-01
Systematic geologic mapping of the lunar near side has resulted in the assignment of relative ages to most visible features. As a derivative of this work, geologic and artistic interpretations have been combined to produce reconstructions of the Moon's appearance at two significant points in its history. The reconstructions, although generalized, show the Moon (1) as it probably appeared about 3.3 billion years ago after most of the mare materials had accumulated, and (2) about 4.0 billion years ago after formation of the youngest of the large multiringed basins, but prior to appreciable flooding by mare material.
NASA Astrophysics Data System (ADS)
Cimermanová, K.
2009-01-01
In this paper we illustrate the influence of prior probabilities of diseases on diagnostic reasoning. For various prior probabilities of classified groups characterized by volatile organic compounds of breath profile, smokers and non-smokers, we constructed the ROC curve and the Youden index with related asymptotic pointwise confidence intervals.
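For reference, the Youden index is the maximum over thresholds of sensitivity + specificity - 1; a small sketch with scikit-learn and made-up scores follows (all data are illustrative).

```python
import numpy as np
from sklearn.metrics import roc_curve

# illustrative data: y_true = 1 for smokers, 0 for non-smokers; score = classifier output
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
score = np.array([0.10, 0.30, 0.35, 0.60, 0.40, 0.55, 0.80, 0.90])

fpr, tpr, thresholds = roc_curve(y_true, score)
j = tpr - fpr                                   # Youden's J at each threshold
best = int(np.argmax(j))
print(f"Youden index = {j[best]:.2f} at threshold {thresholds[best]:.2f}")
```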
Pig Data and Bayesian Inference on Multinomial Probabilities
ERIC Educational Resources Information Center
Kern, John C.
2006-01-01
Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
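The conjugate update involved is simple enough to state concretely: with a Dirichlet(alpha) prior on the scoring probabilities and observed outcome counts n, the posterior is Dirichlet(alpha + n). The outcome names and all numbers below are illustrative placeholders, not values from the instruction manual or the article.

```python
import numpy as np

outcomes = ["sider", "trotter", "razorback", "snouter", "leaning jowler"]
alpha_prior = np.array([35.0, 30.0, 22.0, 9.0, 4.0])   # illustrative prior pseudo-counts
counts = np.array([40, 25, 20, 10, 5])                  # hypothetical observed rolls

alpha_post = alpha_prior + counts                       # Dirichlet posterior parameters
post_mean = alpha_post / alpha_post.sum()
for o, m in zip(outcomes, post_mean):
    print(f"P({o}) = {m:.3f}")
```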
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration to exceed regulatory thresholds. However, these maps are obtained with only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages because the variograms and cross-variograms are those of real variables, which do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
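The key fact exploited, that a discrete probability density function is a composition (non-negative parts summing to one), is typically handled by log-ratio transforming the parts, kriging or cokriging in the transformed space, and back-transforming, which keeps estimates inside (0,1) and preserves order relations automatically. The sketch below shows only that wrap-around step with an additive log-ratio transform; the paper may well use a different log-ratio, and the cokriging itself is omitted.

```python
import numpy as np

def alr(p, eps=1e-9):
    """Additive log-ratio transform of a probability vector (last part as reference)."""
    p = np.clip(p, eps, None)
    return np.log(p[:-1] / p[-1])

def alr_inv(y):
    """Back-transform to a valid probability vector on the simplex."""
    e = np.exp(np.append(y, 0.0))
    return e / e.sum()

p = np.array([0.7, 0.2, 0.1])    # e.g. P(below), P(near), P(above a threshold)
y = alr(p)                       # these coordinates can be interpolated / cokriged spatially
print(alr_inv(y))                # back in the simplex: [0.7, 0.2, 0.1]
```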
Elapsed decision time affects the weighting of prior probability in a perceptual decision task
Hanks, Timothy D.; Mazurek, Mark E.; Kiani, Roozbeh; Hopp, Elizabeth; Shadlen, Michael N.
2012-01-01
Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (i) decisions that linger tend to arise from less reliable evidence, and (ii) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal cortex (LIP) of rhesus monkeys performing this task. PMID:21525274
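A toy simulation of the proposed mechanism, momentary evidence accumulated to a bound plus a bias toward the more probable option that grows with elapsed decision time, is sketched below; the linear form of the bias and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(coherence, prior_bias_rate=0.02, bound=30.0, drift_gain=1.0, dt=1.0):
    """Return (choice, decision_time); positive coherence favors the prior-congruent choice."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        t += dt
        x += drift_gain * coherence * dt + rng.normal(0.0, 1.0)  # momentary evidence
        x += prior_bias_rate * dt     # cumulative bias grows linearly with elapsed time
    return (1 if x > 0 else 0), t

choices, times = zip(*[trial(coherence=0.1) for _ in range(2000)])
print("P(prior-congruent choice):", np.mean(choices), " mean decision time:", np.mean(times))
```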
Probabilistic models for neural populations that naturally capture global coupling and criticality
2017-01-01
Advances in multi-unit recordings pave the way for statistical modeling of activity patterns in large neural populations. Recent studies have shown that the summed activity of all neurons strongly shapes the population response. A separate recent finding has been that neural populations also exhibit criticality, an anomalously large dynamic range for the probabilities of different population activity patterns. Motivated by these two observations, we introduce a class of probabilistic models which takes into account the prior knowledge that the neural population could be globally coupled and close to critical. These models consist of an energy function which parametrizes interactions between small groups of neurons, and an arbitrary positive, strictly increasing, and twice differentiable function which maps the energy of a population pattern to its probability. We show that: 1) augmenting a pairwise Ising model with a nonlinearity yields an accurate description of the activity of retinal ganglion cells which outperforms previous models based on the summed activity of neurons; 2) prior knowledge that the population is critical translates to prior expectations about the shape of the nonlinearity; 3) the nonlinearity admits an interpretation in terms of a continuous latent variable globally coupling the system whose distribution we can infer from data. Our method is independent of the underlying system’s state space; hence, it can be applied to other systems such as natural scenes or amino acid sequences of proteins which are also known to exhibit criticality. PMID:28926564
Let your fingers do the walking: A simple spectral signature model for "remote" fossil prospecting.
Conroy, Glenn C; Emerson, Charles W; Anemone, Robert L; Townsend, K E Beth
2012-07-01
Even with the most meticulous planning, and utilizing the most experienced fossil-hunters, fossil prospecting in remote and/or extensive areas can be time-consuming, expensive, logistically challenging, and often hit or miss. While nothing can predict or guarantee with 100% assurance that fossils will be found in any particular location, any procedures or techniques that might increase the odds of success would be a major benefit to the field. Here we describe, and test, one such technique that we feel has great potential for increasing the probability of finding fossiliferous sediments - a relatively simple spectral signature model using the spatial analysis and image classification functions of ArcGIS(®)10 that creates interactive thematic land cover maps that can be used for "remote" fossil prospecting. Our test case is the extensive Eocene sediments of the Uinta Basin, Utah - a fossil prospecting area encompassing ∼1200 square kilometers. Using Landsat 7 ETM+ satellite imagery, we "trained" the spatial analysis and image classification algorithms using the spectral signatures of known fossil localities discovered in the Uinta Basin prior to 2005 and then created interactive probability models highlighting other regions in the Basin having a high probability of containing fossiliferous sediments based on their spectral signatures. A fortuitous "post-hoc" validation of our model presented itself. Our model identified several paleontological "hotspots", regions that, while not producing any fossil localities prior to 2005, had high probabilities of being fossiliferous based on the similarities of their spectral signatures to those of previously known fossil localities. Subsequent fieldwork found fossils in all the regions predicted by the model. Copyright © 2012 Elsevier Ltd. All rights reserved.
Probability mapping of scarred myocardium using texture and intensity features in CMR images
2013-01-01
Background: The myocardium exhibits a heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, Late Gadolinium (LG) contrast agent enhances the intensity of the scarred area in the myocardium. Methods: In this paper, we propose a probability mapping technique using texture and intensity features to describe the heterogeneous nature of the scarred myocardium in Cardiac Magnetic Resonance (CMR) images after Myocardial Infarction (MI). Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated by using a probability function based on Bayes rule. Any set of features can be used in the probability function. Results: In the present study, we demonstrate the use of two different types of features. One is based on the mean intensity of the pixel and the other on the underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean intensity of the pixel and the underlying texture information are presented. We hypothesize that probability mapping of the myocardium offers an alternate visualization, possibly showing details with physiological significance that are difficult to detect visually in the original CMR image. Conclusion: The probability mapping obtained from the two features provides a way to define different cardiac segments, which offer a way to identify areas in the myocardium of diagnostic importance (like core and border areas in scarred myocardium). PMID:24053280
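For the single-feature (mean intensity) case, the Bayes-rule probability function can be written out directly: with Gaussian intensity models for scarred and non-scarred tissue and a prior scar fraction, every pixel receives a posterior probability of scar. The intensity parameters below are placeholders, not values from the study.

```python
import numpy as np
from scipy.stats import norm

def scar_probability_map(lge_image, mu_scar=520.0, sd_scar=90.0,
                         mu_normal=250.0, sd_normal=70.0, prior_scar=0.2):
    """Per-pixel P(scar | intensity) from Bayes' rule with Gaussian class models."""
    num = norm.pdf(lge_image, mu_scar, sd_scar) * prior_scar
    den = num + norm.pdf(lge_image, mu_normal, sd_normal) * (1.0 - prior_scar)
    return num / den

lge = np.random.default_rng(0).normal(300.0, 120.0, size=(64, 64))  # stand-in LG CMR slice
prob_map = scar_probability_map(lge)    # values near 1 = likely scar, near 0 = likely healthy
```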
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the `best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
Studying the effects of fuel treatment based on burn probability on a boreal forest landscape.
Liu, Zhihua; Yang, Jian; He, Hong S
2013-01-30
Fuel treatment is assumed to be a primary tactic to mitigate intense and damaging wildfires. However, how to place treatment units across a landscape and assess its effectiveness is difficult for landscape-scale fuel management planning. In this study, we used a spatially explicit simulation model (LANDIS) to conduct wildfire risk assessments and optimize the placement of fuel treatments at the landscape scale. We first calculated a baseline burn probability map from empirical data (fuel, topography, weather, and fire ignition and size data) to assess fire risk. We then prioritized landscape-scale fuel treatment based on maps of burn probability and fuel loads (calculated from the interactions among tree composition, stand age, and disturbance history), and compared their effects on reducing fire risk. The burn probability map described the likelihood of burning on a given location; the fuel load map described the probability that a high fuel load will accumulate on a given location. Fuel treatment based on the burn probability map specified that stands with high burn probability be treated first, while fuel treatment based on the fuel load map specified that stands with high fuel loads be treated first. Our results indicated that fuel treatment based on burn probability greatly reduced the burned area and number of fires of different intensities. Fuel treatment based on burn probability also produced more dispersed and smaller high-risk fire patches and therefore can improve efficiency of subsequent fire suppression. The strength of our approach is that more model components (e.g., succession, fuel, and harvest) can be linked into LANDIS to map the spatially explicit wildfire risk and its dynamics to fuel management, vegetation dynamics, and harvesting. Copyright © 2012 Elsevier Ltd. All rights reserved.
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model by integrating the knowledge extracted from the sensors' raw data and the available statutory records. The statutory records were combined with the hypotheses from the sensors to form an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypotheses extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction
Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.
2008-01-01
A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
Biedermann, Alex; Taroni, Franco; Margot, Pierre
2012-01-31
Prior probabilities represent a core element of the Bayesian probabilistic approach to relatedness testing. This letter comments on the commentary 'Use of prior odds for missing persons identifications' by Budowle et al., published recently in this journal. Contrary to Budowle et al., we argue that the concept of prior probabilities (i) is not endowed with the notion of objectivity, (ii) is not a case for computation, and (iii) does not require new guidelines edited by the forensic DNA community, as long as probability is properly considered as an expression of personal belief.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case for interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
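The MAP computations described map onto standard constrained least-squares solvers. The sketch below assumes a Green's function matrix G, displacement data d with covariance Cd, and an isotropic Gaussian prior of standard deviation sigma_prior; whitening the data term and stacking the prior as extra rows gives the usual damped system, solved with non-negative or bounded-variable least squares. All names and values are illustrative.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

def slip_map(G, d, Cd, sigma_prior, s_max=None):
    """MAP slip for a prior N(0, sigma_prior**2 I) truncated at zero
    (and optionally at s_max), via a stacked whitened least-squares system."""
    W = np.linalg.cholesky(np.linalg.inv(Cd)).T          # whitening: W.T @ W = inv(Cd)
    A = np.vstack([W @ G, np.eye(G.shape[1]) / sigma_prior])
    b = np.concatenate([W @ d, np.zeros(G.shape[1])])
    if s_max is None:
        return nnls(A, b)[0]                             # single truncation (slip >= 0)
    return lsq_linear(A, b, bounds=(0.0, s_max), method="bvls").x  # double truncation

# tiny synthetic example
rng = np.random.default_rng(0)
G = rng.normal(size=(12, 5))                             # 12 observations, 5 fault patches
true_slip = np.array([0.0, 0.2, 0.5, 0.0, 0.1])
d = G @ true_slip + rng.normal(scale=0.01, size=12)
print(slip_map(G, d, Cd=0.01**2 * np.eye(12), sigma_prior=1.0, s_max=1.0))
```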
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
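The behavioral signature tested, more weight on the likelihood as sample size grows, falls directly out of conjugate updating; the sketch below frames the reward probability with a Beta prior and a binomially sampled cue. The prior pseudo-counts and sample proportions are illustrative, not the task's actual parameters.

```python
def posterior_mean_reward_prob(a_prior, b_prior, successes, n_samples):
    """Posterior mean of the reward probability; the weight on the sample proportion
    grows with n_samples relative to the prior pseudo-counts."""
    a = a_prior + successes
    b = b_prior + (n_samples - successes)
    return a / (a + b)

prior = (6, 4)                       # prior belief: reward probability around 0.6
for n in [2, 10, 50]:
    obs = int(round(0.3 * n))        # sample proportion held at 0.3 for comparison
    m = posterior_mean_reward_prob(*prior, obs, n)
    print(f"n = {n:2d}: posterior mean = {m:.3f}")
# the estimate moves from the prior (0.6) toward the observed proportion (0.3) as n grows
```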
Butenschön, Vicki M; Ille, Sebastian; Sollmann, Nico; Meyer, Bernhard; Krieg, Sandro M
2018-06-01
OBJECTIVE: Navigated transcranial magnetic stimulation (nTMS) is used to identify the motor cortex prior to surgery. Yet, there has, until now, been no published evidence on the economic impact of nTMS. This study aims to analyze the cost-effectiveness of nTMS, evaluating the incremental costs of nTMS motor mapping per additional quality-adjusted life year (QALY). By doing so, this study also provides a model allowing for future analysis of the general cost-effectiveness of new neuro-oncological treatment options. METHODS: The authors used a microsimulation model based on their cohort population sampled for 1000 patients over a time horizon of 2 years. A health care provider perspective was used to assemble the direct costs of total treatment. Transition probabilities and health utilities were based on published literature. Effects were stated in QALYs and established for health state subgroups. RESULTS: In all scenarios, preoperative mapping was considered cost-effective with a willingness-to-pay threshold < 3× per capita GDP (gross domestic product). The incremental cost-effectiveness ratio (ICER) of nTMS versus no nTMS was 45,086 Euros/QALY. Sensitivity analyses showed robust results with a high impact of total treatment costs and the utility of progression-free survival. Comparing the incremental costs caused by nTMS implementation only, the ICER decreased to 1967 Euros/QALY. CONCLUSIONS: Motor mapping prior to surgery provides a cost-effective tool to improve the clinical outcome and overall survival of high-grade glioma patients in a resource-limited setting. Moreover, the model used in this study can be used in the future to analyze new treatment options in neuro-oncology in terms of their general cost-effectiveness.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
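One hedged way to realize a hazard model of this kind is a logistic function whose predictors include static terrain attributes and the rainfall accumulated up to the hour of interest; the functional form and coefficients below are uncalibrated placeholders, in keeping with the abstract's note that its own probability levels are uncalibrated.

```python
import numpy as np

def soil_slip_probability(slope_deg, soil_index, rain_rate_mm_hr, duration_hr,
                          b0=-8.0, b_slope=0.12, b_soil=0.8, b_rain=0.05):
    """Conditional probability of a soil slip by hour `duration_hr` (illustrative logistic form)."""
    cumulative_rain = rain_rate_mm_hr * duration_hr
    z = b0 + b_slope * slope_deg + b_soil * soil_index + b_rain * cumulative_rain
    return 1.0 / (1.0 + np.exp(-z))

# evaluate on grids of slope and soil attributes for each hour of a storm, then map the result
print(soil_slip_probability(slope_deg=30.0, soil_index=2.0, rain_rate_mm_hr=10.0, duration_hr=6.0))
```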
Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.
2012-01-01
We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE and to previously published fusion techniques and demonstrated the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727
Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps
Adam Brandt
2015-11-15
This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.
A pitfall of piecewise-polytropic equation of state inference
NASA Astrophysics Data System (ADS)
Raaijmakers, Geert; Riley, Thomas E.; Watts, Anna L.
2018-05-01
The only messenger radiation in the Universe which one can use to statistically probe the Equation of State (EOS) of cold dense matter is that originating from the near-field vicinities of compact stars. Constraining the gravitational masses and equatorial radii of rotating compact stars is a major goal for current and future telescope missions, with a primary purpose of constraining the EOS. From a Bayesian perspective it is necessary to carefully discuss prior definition; in this context a complicating issue is that in practice there exist pathologies in the general relativistic mapping between the spaces of local (interior source matter) and global (exterior spacetime) parameters. In a companion paper, these issues were raised on a theoretical basis. In this study we reproduce a probability transformation procedure from the literature in order to map a joint posterior distribution of Schwarzschild gravitational masses and radii into a joint posterior distribution of EOS parameters. We demonstrate computationally that EOS parameter inferences are sensitive to the choice to define a prior on a joint space of these masses and radii, instead of on a joint space of interior source matter parameters. We focus on the piecewise-polytropic EOS model, which is currently standard in the field of astrophysical dense matter study. We discuss the implications of this issue for the field.
Law, Jane
2016-01-01
Intrinsic conditional autoregressive modeling in a Bayesian hierarchical framework has been increasingly applied in small-area ecological studies. This study explores the specification of spatial structure in this Bayesian framework in two aspects: adjacency, i.e., the set of neighbor(s) for each area; and the (spatial) weight for each pair of neighbors. Our analysis was based on a small-area study of falling injuries among people age 65 and older in Ontario, Canada, that aimed to estimate risks and identify risk factors for such falls. In the case study, we observed incorrect adjacency information caused by deficiencies in the digital map itself. Further, when equal weights were replaced by weights based on a variable of expected count, the range of estimated risks increased, the number of areas with probability of estimated risk greater than one at different probability thresholds increased, and model fit improved. More importantly, the significance of a risk factor diminished. Further research to thoroughly investigate different methods of variable weights; quantify the influence of specifications of spatial weights; and develop strategies for better defining the spatial structure of a map in small-area analysis in Bayesian hierarchical spatial modeling is recommended. PMID:29546147
MacRoy-Higgins, Michelle; Dalton, Kevin Patrick
2015-12-01
The purpose of this study was to examine the influence of phonotactic probability on sublexical (phonological) and lexical representations in 3-year-olds who had a history of being late talkers in comparison with their peers with typical language development. Ten 3-year-olds who were late talkers and 10 age-matched typically developing controls completed nonword repetition and fast mapping tasks; stimuli for both experimental procedures differed in phonotactic probability. Both participant groups repeated nonwords containing high phonotactic probability sequences more accurately than nonwords containing low phonotactic probability sequences. Participants with typical language showed an early advantage for fast mapping high phonotactic probability words; children who were late talkers required more exposures to the novel words to show the same advantage for fast mapping high phonotactic probability words. Children who were late talkers showed similar sensitivities to phonotactic probability in nonword repetition and word learning when compared with their peers with no history of language delay. However, word learning in children who were late talkers appeared to be slower when compared with their peers.
Revision of Time-Independent Probabilistic Seismic Hazard Maps for Alaska
Wesson, Robert L.; Boyd, Oliver S.; Mueller, Charles S.; Bufe, Charles G.; Frankel, Arthur D.; Petersen, Mark D.
2007-01-01
We present here time-independent probabilistic seismic hazard maps of Alaska and the Aleutians for peak ground acceleration (PGA) and 0.1, 0.2, 0.3, 0.5, 1.0 and 2.0 second spectral acceleration at probability levels of 2 percent in 50 years (annual probability of 0.000404), 5 percent in 50 years (annual probability of 0.001026) and 10 percent in 50 years (annual probability of 0.0021). These maps represent a revision of existing maps based on newly obtained data and assumptions reflecting best current judgments about methodology and approach. These maps have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States. A significant improvement relative to the 2002 methodology is the ability to include variable slip rate along a fault where appropriate. These maps incorporate new data, the responses to comments received at workshops held in Fairbanks and Anchorage, Alaska, in May, 2005, and comments received after draft maps were posted on the National Seismic Hazard Mapping Web Site. These maps will be proposed for adoption in future revisions to the International Building Code. In this documentation we describe the maps and in particular explain and justify changes that have been made relative to the 1999 maps. We are also preparing a series of experimental maps of time-dependent hazard that will be described in future documents.
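The annual probabilities quoted follow from the exceedance-rate relation P = 1 - exp(-50*lambda) for a 50-year window; a two-line check reproduces the numbers in the text.

```python
import math

for P50 in (0.02, 0.05, 0.10):
    rate = -math.log(1.0 - P50) / 50.0      # annual exceedance rate (Poisson assumption)
    print(f"{P50:.0%} in 50 years -> annual probability {rate:.6f}")
# prints 0.000404, 0.001026, 0.002107 (the last is quoted in the text as 0.0021)
```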
The Dunhuang Chinese sky: A comprehensive study of the oldest known star atlas
NASA Astrophysics Data System (ADS)
Bonnet-Bidaud, Jean-Marc; Praderie, Françoise; Whitfield, Susan
2009-03-01
This paper presents an analysis of the star atlas included in the medieval Chinese manuscript Or.8210/S.3326 discovered in 1907 by the archaeologist Aurel Stein at the Silk Road town of Dunhuang and now housed in the British Library. Although partially studied by a few Chinese scholars, it has never been fully displayed and discussed in the Western world. This set of sky maps (12 hour-angle maps in quasi-cylindrical projection and a circumpolar map in azimuthal projection), displaying the full sky visible from the Northern Hemisphere, is up to now the oldest complete preserved star atlas known from any civilisation. It is also the earliest known pictorial representation of the quasi-totality of Chinese constellations. This paper describes the history of the physical object - a roll of thin paper drawn with ink. We analyse the stellar content of each map (1,339 stars, 257 asterisms) and the texts associated with the maps. We establish the precision with which the maps were drawn (1.5-4° for the brightest stars) and examine the type of projections used. We conclude that precise mathematical methods were used to produce the Atlas. We also discuss the dating of the manuscript and its possible author, and we confirm the date +649-684 (early Tang Dynasty) as most probable based on the available evidence. This is at variance with a prior estimate of around +940. Finally, we present a brief comparison with later sky maps, both from China and Europe.
NASA Astrophysics Data System (ADS)
McKague, Darren Shawn
2001-12-01
The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel-by-pixel that can then be summarized over large data sets to obtain the desired statistics. These methods can be computationally expensive and typically do not provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions. The method can retrieve joint distributions of parameters that allow for the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two-dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance to retrieval parameter PDF transformation given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation parameter to radiance space transformation. Vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields. The beamfilling factors vary considerably depending on the horizontal structure of the rain. The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)
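As a rough illustration of the idea described above, here is a minimal Python sketch of transferring probability density from a radiance histogram to a parameter PDF via a forward-model lookup table and Monte Carlo resampling. The forward model, uncertainties and parameter are entirely made up, and the real method treats the non-unique mapping far more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative forward-model lookup table: prior parameter states (e.g. a cloud
# property) and the radiances a forward radiative transfer model would predict
# for them. Both the parameter and the linear mapping are invented.
params = rng.uniform(0.0, 1.0, 5000)
radiance = 280.0 - 60.0 * params + rng.normal(0.0, 1.0, 5000)
order = np.argsort(radiance)
rad_sorted, par_sorted = radiance[order], params[order]

# "Observed" radiances (synthetic stand-in for the measured radiance histogram).
obs = 280.0 - 60.0 * rng.beta(2, 5, 20000) + rng.normal(0.0, 2.0, 20000)

def radiance_to_parameter_pdf(obs, obs_sigma=2.0, n_draws=20000, bins=40):
    """One Monte Carlo realization of the radiance-to-parameter PDF transform:
    draw radiances consistent with the observations and their uncertainty, map
    each draw to a parameter via the (sorted) forward-model table, histogram."""
    draws = rng.choice(obs, n_draws) + rng.normal(0.0, obs_sigma, n_draws)
    idx = np.clip(np.searchsorted(rad_sorted, draws), 0, len(rad_sorted) - 1)
    pdf, _ = np.histogram(par_sorted[idx], bins=bins, range=(0, 1), density=True)
    return pdf

# Repeating the stochastic transformation gives an error estimate on the PDF.
realizations = np.array([radiance_to_parameter_pdf(obs) for _ in range(20)])
pdf_mean, pdf_err = realizations.mean(axis=0), realizations.std(axis=0)
```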
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2015-05-01
Bayes factors (BFs) are becoming increasingly important tools in genetic association studies, partly because they provide a natural framework for including prior information. The Wakefield BF (WBF) approximation is easy to calculate and assumes a normal prior on the log odds ratio (logOR) with a mean of zero. However, the prior variance (W) must be specified. Because of the potentially high sensitivity of the WBF to the choice of W, we propose several new BF approximations with logOR ∼N(0,W), but allow W to take a probability distribution rather than a fixed value. We provide several prior distributions for W which lead to BFs that can be calculated easily in freely available software packages. These priors allow a wide range of densities for W and provide considerable flexibility. We examine some properties of the priors and BFs and show how to determine the most appropriate prior based on elicited quantiles of the prior odds ratio (OR). We show by simulation that our novel BFs have superior true-positive rates at low false-positive rates compared to those from both P-value and WBF analyses across a range of sample sizes and ORs. We give an example of utilizing our BFs to fine-map the CASP8 region using genotype data on approximately 46,000 breast cancer case and 43,000 healthy control samples from the Collaborative Oncological Gene-environment Study (COGS) Consortium, and compare the single-nucleotide polymorphism ranks to those obtained using WBFs and P-values from univariate logistic regression. © 2015 The Authors. *Genetic Epidemiology published by Wiley Periodicals, Inc.
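For readers unfamiliar with the fixed-W approximation that this abstract generalizes, below is a small sketch of the standard Wakefield-style approximate Bayes factor, together with a crude Monte Carlo average over an assumed prior on W. The numbers and the inverse-gamma-style prior are purely illustrative; they are not the priors derived in the paper, which admit closed-form BFs.

```python
import numpy as np

def wakefield_abf(beta_hat, se, W):
    """Approximate Bayes factor for association (H1: logOR ~ N(0, W)) versus
    the null, based on a normal approximation to the logOR estimate beta_hat
    with standard error se: BF = N(beta_hat; 0, V+W) / N(beta_hat; 0, V)."""
    V = se ** 2
    z2 = (beta_hat / se) ** 2
    return np.sqrt(V / (V + W)) * np.exp(0.5 * z2 * W / (V + W))

# Toy example: logOR estimate 0.10 with SE 0.03, fixed prior SD 0.2 on the logOR.
print(wakefield_abf(0.10, 0.03, W=0.2 ** 2))

# Averaging the BF over a prior on W (the idea explored in the abstract); the
# inverse-gamma-style prior here is an arbitrary illustration only.
rng = np.random.default_rng(1)
W_draws = 1.0 / rng.gamma(shape=3.0, scale=1.0 / 0.05, size=10000)
print(np.mean(wakefield_abf(0.10, 0.03, W_draws)))
```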
Optimal Mass Transport for Shape Matching and Comparison
Su, Zhengyu; Wang, Yalin; Shi, Rui; Zeng, Wei; Sun, Jian; Luo, Feng; Gu, Xianfeng
2015-01-01
Surface based 3D shape analysis plays a fundamental role in computer vision and medical imaging. This work proposes to use the optimal mass transport map for shape matching and comparison, focusing on two important applications: surface registration and shape space. The computation of the optimal mass transport map is based on Monge-Brenier theory; compared with the conventional method based on Monge-Kantorovich theory, this significantly improves efficiency by reducing the computational complexity from O(n²) to O(n). For the surface registration problem, one commonly used approach is to use a conformal map to convert the shapes into some canonical space. Although conformal mappings have small angle distortions, they may introduce large area distortions, which are likely to cause numerical instability and thus failures of shape analysis. This work proposes to compose the conformal map with the optimal mass transport map to obtain an area-preserving map, which is intrinsic to the Riemannian metric, unique, and diffeomorphic. For the shape space study, this work introduces a novel Riemannian framework, Conformal Wasserstein Shape Space, by combining conformal geometry and optimal mass transport theory. In our work, all metric surfaces with the disk topology are mapped to the unit planar disk by a conformal mapping, which pushes the area element on the surface to a probability measure on the disk. The optimal mass transport provides a map from the shape space of all topological disks with metrics to the Wasserstein space of the disk, and the pullback Wasserstein metric equips the shape space with a Riemannian metric. We validate our work by numerous experiments and comparisons with prior approaches, and the experimental results demonstrate the efficiency and efficacy of our proposed approach. PMID:26440265
Prior probability modulates anticipatory activity in category-specific areas.
Trapp, Sabrina; Lepsien, Jöran; Kotz, Sonja A; Bar, Moshe
2016-02-01
Bayesian models are currently a dominant framework for describing human information processing. However, it is not clear yet how major tenets of this framework can be translated to brain processes. In this study, we addressed the neural underpinning of prior probability and its effect on anticipatory activity in category-specific areas. Before fMRI scanning, participants were trained in two behavioral sessions to learn the prior probability and correct order of visual events within a sequence. The events of each sequence included two different presentations of a geometric shape and one picture of either a house or a face, which appeared with either a high or a low likelihood. Each sequence was preceded by a cue that gave participants probabilistic information about which items to expect next. This allowed examining cue-related anticipatory modulation of activity as a function of prior probability in category-specific areas (fusiform face area and parahippocampal place area). Our findings show that activity in the fusiform face area was higher when faces had a higher prior probability. The finding of a difference between levels of expectations is consistent with graded, probabilistically modulated activity, but the data do not rule out the alternative explanation of a categorical neural response. Importantly, these differences were only visible during anticipation, and vanished at the time of stimulus presentation, calling for a functional distinction when considering the effects of prior probability. Finally, there were no anticipatory effects for houses in the parahippocampal place area, suggesting sensitivity to stimulus material when looking at effects of prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoegele, W.; Loeschel, R.; Dobler, B.
2011-02-15
Purpose: In this work, a novel stochastic framework for patient positioning based on linac-mounted CB projections is introduced. Based on this formulation, the most probable shifts and rotations of the patient are estimated, incorporating interfractional deformations of patient anatomy and other uncertainties associated with patient setup. Methods: The target position is assumed to be defined by and is stochastically determined from positions of various features such as anatomical landmarks or markers in CB projections, i.e., radiographs acquired with a CB-CT system. The patient positioning problem of finding the target location from CB projections is posed as an inverse problem with prior knowledge and is solved using a Bayesian maximum a posteriori (MAP) approach. The prior knowledge is three-fold and includes the accuracy of an initial patient setup (such as in-room laser and skin marks), the plasticity of the body (relative shifts between target and features), and the feature detection error in CB projections (which may vary depending on specific detection algorithm and feature type). For this purpose, MAP estimators are derived and a procedure of using them in clinical practice is outlined. Furthermore, a rule of thumb is theoretically derived, relating basic parameters of the prior knowledge (initial setup accuracy, plasticity of the body, and number of features) and the parameters of CB data acquisition (number of projections and accuracy of feature detection) to the expected estimation accuracy. Results: MAP estimation can be applied to arbitrary features and detection algorithms. However, to experimentally demonstrate its applicability and to perform the validation of the algorithm, a water-equivalent, deformable phantom with features represented by six 1 mm chrome balls was utilized. These features were detected in the cone beam projections (XVI, Elekta Synergy) by a local threshold method for demonstration purposes only. The accuracy of estimation (strongly varying for different plasticity parameters of the body) agreed with the rule of thumb formula. Moreover, based on this rule of thumb formula, about 20 projections for 6 detectable features seem to be sufficient for a target estimation accuracy of 0.2 cm, even for relatively large feature detection errors with standard deviation of 0.5 cm and spatial displacements of the features with standard deviation of 0.5 cm. Conclusions: The authors have introduced a general MAP-based patient setup algorithm accounting for different sources of uncertainties, which are utilized as the prior knowledge in a transparent way. This new framework can be further utilized for different clinical sites, as well as for theoretical developments in the field of patient positioning for radiotherapy.
Neutrino mass priors for cosmology from random matrices
NASA Astrophysics Data System (ADS)
Long, Andrew J.; Raveri, Marco; Hu, Wayne; Dodelson, Scott
2018-02-01
Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σ mν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π (Σ mν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix Mν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over Mν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σ mν that we interpret as a Bayesian prior probability π (Σ mν). Assuming a basis-invariant probability distribution on Mν, also known as the anarchy hypothesis, we find that π (Σ mν) peaks close to the smallest Σ mν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π (Σ mν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. We present fitting functions for π (Σ mν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
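A toy numerical illustration of the eigenvalue-repulsion effect mentioned above: sample complex symmetric mass matrices under a simple Gaussian, basis-invariant measure and histogram the resulting sum of masses. The matrix scale is arbitrary and the measured mass splittings are not imposed, so this is only a cartoon of the construction, not the prior derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sum_mnu(n_samples=20000, scale=0.03):
    """Sample complex symmetric (Majorana-like) 3x3 mass matrices with i.i.d.
    Gaussian entries (a simple basis-invariant measure) and return the sum of
    their singular values, i.e. the sum of the neutrino masses, in eV."""
    sums = np.empty(n_samples)
    for i in range(n_samples):
        a = (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))) * scale
        m = (a + a.T) / 2.0                       # complex symmetric matrix
        masses = np.linalg.svd(m, compute_uv=False)
        sums[i] = masses.sum()
    return sums

sums = sample_sum_mnu()
hist, edges = np.histogram(sums, bins=60, density=True)
# Eigenvalue repulsion pushes the prior away from degenerate spectra.
print("toy prior peaks near Sum m_nu ~", edges[hist.argmax()], "eV")
```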
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
Deriving photometric redshifts using fuzzy archetypes and self-organizing maps - I. Methodology
NASA Astrophysics Data System (ADS)
Speagle, Joshua S.; Eisenstein, Daniel J.
2017-07-01
We propose a method to substantially increase the flexibility and power of template fitting-based photometric redshifts by transforming a large number of galaxy spectral templates into a corresponding collection of 'fuzzy archetypes' using a suitable set of perturbative priors designed to account for empirical variation in dust attenuation and emission-line strengths. To bypass widely separated degeneracies in parameter space (e.g. the redshift-reddening degeneracy), we train self-organizing maps (SOMs) on large 'model catalogues' generated from Monte Carlo sampling of our fuzzy archetypes to cluster the predicted observables in a topologically smooth fashion. Subsequent sampling over the SOM then allows full reconstruction of the relevant probability distribution functions (PDFs). This combined approach enables the multimodal exploration of known variation among galaxy spectral energy distributions with minimal modelling assumptions. We demonstrate the power of this approach to recover full redshift PDFs using discrete Markov chain Monte Carlo sampling methods combined with SOMs constructed from Large Synoptic Survey Telescope ugrizY and Euclid YJH mock photometry.
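To make the SOM step concrete, here is a from-scratch sketch of training a small self-organizing map on mock model photometry and reading off a redshift PDF from the model objects that fall in the best-matching cell. The mock colours, grid size and training schedules are invented, and none of this reproduces the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(data, grid=(20, 20), n_iter=5000, lr0=0.5, sigma0=5.0):
    """Train a small self-organizing map on feature vectors (e.g. model
    photometry) so that similar observables land in nearby map cells."""
    n_feat = data.shape[1]
    weights = rng.normal(size=grid + (n_feat,))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        dist = ((weights - x) ** 2).sum(axis=-1)          # best-matching unit
        by, bx = np.unravel_index(dist.argmin(), grid)
        frac = t / n_iter                                  # decaying schedules
        sigma = sigma0 * (1.0 - frac) + 1.0
        lr = lr0 * (1.0 - frac) + 0.01
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def best_cell(weights, x):
    dist = ((weights - x) ** 2).sum(axis=-1)
    return np.unravel_index(dist.argmin(), dist.shape)

# Mock "model catalogue": colours generated from a hypothetical redshift grid.
z_model = rng.uniform(0, 3, 20000)
colours = np.column_stack([np.sin(z_model) + 0.05 * rng.normal(size=z_model.size),
                           0.5 * z_model + 0.05 * rng.normal(size=z_model.size)])
som = train_som(colours)

# Redshift PDF for one observed object: histogram the model redshifts that map
# to the same SOM cell as the observation.
obs = np.array([np.sin(1.2), 0.6])
cells = np.array([best_cell(som, c) for c in colours])
in_cell = (cells == np.array(best_cell(som, obs))).all(axis=1)
pdf, edges = np.histogram(z_model[in_cell], bins=30, range=(0, 3), density=True)
```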
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
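A generic sketch of the MAP formulation described in the MBIR abstract above: a linear forward model plus a Gaussian (quadratic MRF) prior, minimized by gradient descent. The forward operator below stands in for the actual TEM image-formation model, and the prior, noise level and sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = A x + noise standing in for the image-formation
# model; the operator, signal and noise level are made up.
n, m = 80, 120
A = rng.normal(size=(n, m)) / np.sqrt(m)
x_true = np.convolve(rng.normal(size=m), np.ones(7) / 7, mode="same")
y = A @ x_true + 0.05 * rng.normal(size=n)

def map_reconstruct(A, y, sigma=0.05, lam=5.0, n_iter=3000):
    """Minimize the MAP cost ||y - A x||^2 / (2 sigma^2) + lam * ||D x||^2,
    where D is a first-difference operator acting as a simple Gaussian MRF
    prior, by plain gradient descent with a step from a Lipschitz bound."""
    m = A.shape[1]
    x = np.zeros(m)
    D = np.eye(m) - np.eye(m, k=1)                       # finite differences
    L = np.linalg.norm(A, 2) ** 2 / sigma ** 2 + 2 * lam * 4.0
    step = 1.0 / L
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) / sigma ** 2 + 2 * lam * (D.T @ (D @ x))
        x -= step * grad
    return x

x_map = map_reconstruct(A, y)
print("relative error:", np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```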
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...
2017-07-03
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
ERIC Educational Resources Information Center
Gurlitt, Johannes; Renkl, Alexander
2010-01-01
Two experiments investigated the effects of characteristic features of concept mapping used for prior knowledge activation. Characteristic demands of concept mapping include connecting lines representing the relationships between concepts and labeling these lines, specifying the type of the semantic relationships. In the first experiment,…
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data. This method is useful for generating a probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data. The proposed method allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
Effect of Map-vaccination in ewes on body condition score, weight and Map-shedding.
Hüttner, Klim; Krämer, Ulla; Kleist, Petra
2012-01-01
Vaccination against Mycobacterium avium subspecies paratuberculosis (Map) in sheep receives growing attention worldwide, particularly in countries with national Map control strategies. A field study was conducted, investigating the effect of GUDAIR on body condition, weight and Map-shedding in a professionally managed but largely Map-affected Suffolk flock prior to and after vaccination. For this, 80 ewes out of 1000 animals were randomly sampled. In the univariate analysis, body condition scores of ewes twelve months after vaccination improved significantly compared to those sampled prior to vaccination. At the same time, the rate of ewes shedding Map was reduced by 37%.
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the required computing power is greatly reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
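A minimal sketch of computing the MAP point under the positivity and bound constraints described above, using scipy's nnls and lsq_linear as stand-ins for the cited Lawson-Hanson and Stark-Parker algorithms. The Green's function matrix, covariances and problem sizes are toy values, and the prior mean is taken as zero.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

rng = np.random.default_rng(3)

# Toy linear problem d = G m + noise, with m the (non-negative) slip on 30
# fault patches and d 20 geodetic displacement components.
n_data, n_patch = 20, 30
G = rng.normal(size=(n_data, n_patch))
m_true = np.clip(rng.normal(0.5, 0.5, n_patch), 0, None)
Cd = 0.05 ** 2 * np.eye(n_data)                  # data covariance
Cm = 1.0 ** 2 * np.eye(n_patch)                  # prior (slip) covariance, zero mean
d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), Cd)

# MAP with a truncated Gaussian prior: minimize
#   (d - G m)^T Cd^-1 (d - G m) + m^T Cm^-1 m   subject to m >= 0,
# i.e. non-negative least squares on the whitened, stacked system.
Wd = np.linalg.cholesky(np.linalg.inv(Cd))
Wm = np.linalg.cholesky(np.linalg.inv(Cm))
A = np.vstack([Wd.T @ G, Wm.T])
b = np.concatenate([Wd.T @ d, np.zeros(n_patch)])

m_map_pos, _ = nnls(A, b)                          # single truncation: m >= 0
m_map_box = lsq_linear(A, b, bounds=(0.0, 2.0)).x  # double truncation: 0 <= m <= 2
```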
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert significant control on where hazards (particularly pyroclastic density currents) materialize. The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
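Steps (i) and (ii) above can be illustrated with scipy's Gaussian kernel density estimator: build a spatial density map per data set, then take a weighted linear combination. The point sets, bandwidths and weights below are invented; in the study the weights come from the structured expert judgment.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Hypothetical feature locations (km in a local grid) for two data sets,
# e.g. past vent locations and mapped fracture/fissure points.
vents = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.8], size=(40, 2))
faults = rng.normal(loc=[-1.0, 0.5], scale=[2.0, 1.5], size=(120, 2))

# Gaussian-kernel spatial probability density map for each data set.
xx, yy = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
grid = np.vstack([xx.ravel(), yy.ravel()])
dens_vents = gaussian_kde(vents.T)(grid).reshape(xx.shape)
dens_faults = gaussian_kde(faults.T)(grid).reshape(xx.shape)

# Weighted linear combination (weights are arbitrary stand-ins for the
# elicited expert weights), renormalized to a probability density.
w = np.array([0.7, 0.3])
combined = w[0] * dens_vents + w[1] * dens_faults
cell_area = (10 / 200) ** 2
combined /= combined.sum() * cell_area

# Example query: probability that the next vent opens west of x = 0.
print("P(vent west of x=0) =", combined[:, xx[0] < 0].sum() * cell_area)
```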
United States Geological Survey fire science: fire danger monitoring and forecasting
Eidenshink, Jeff C.; Howard, Stephen M.
2012-01-01
Each day, the U.S. Geological Survey produces 7-day forecasts, for all Federal lands, of the distributions of the number of ignitions, the number of fires above a given size, and the conditional probability of fires growing larger than a specified size. The large fire probability map is an estimate of the likelihood that ignitions will become large fires. The large fire forecast map is a probability estimate of the number of fires on Federal lands exceeding 100 acres in the forthcoming week. The ignition forecast map is a probability estimate of the number of fires on Federal land greater than 1 acre in the forthcoming week. The extreme event forecast is a probability estimate of the number of fires on Federal land that may exceed 5,000 acres in the forthcoming week.
Tsunami probability in the Caribbean Region
Parsons, T.; Geist, E.L.
2008-01-01
We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically-calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerical-modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0-30% regionally. © Birkhaeuser 2008.
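The Poissonian runup probability used above reduces, per coastal cell, to P = 1 − exp(−λT). Below is a minimal sketch with toy rates and an arbitrary weighting of the empirical and modeled rate models; in the study the weighting is derived from likelihood functions and their uncertainties.

```python
import numpy as np

# Toy per-cell mean runup rates (events/yr exceeding 0.5 m): one set from the
# empirical catalogue and one from the synthetic earthquake sequences.
rate_empirical = np.array([1 / 500, 1 / 150, 1 / 80, 0.0])
rate_modeled   = np.array([1 / 400, 1 / 200, 1 / 60, 1 / 900])

# Illustrative weighting of the two rate models.
w_emp, w_mod = 0.5, 0.5
rate = w_emp * rate_empirical + w_mod * rate_modeled

# Poissonian probability of at least one runup > 0.5 m in a 30-year exposure.
T = 30.0
p_30yr = 1.0 - np.exp(-rate * T)
print(np.round(p_30yr, 3))
```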
Donato, Mary M.
2000-01-01
As ground water continues to provide an ever-growing proportion of Idaho's drinking water, concerns about the quality of that resource are increasing. Pesticides (most commonly, atrazine/desethyl-atrazine, hereafter referred to as atrazine) and nitrite plus nitrate as nitrogen (hereafter referred to as nitrate) have been detected in many aquifers in the State. To provide a sound hydrogeologic basis for atrazine and nitrate management in southern Idaho, the largest region of land and water use in the State, the U.S. Geological Survey produced maps showing the probability of detecting these contaminants in ground water in the upper Snake River Basin (published in a 1998 report) and the western Snake River Plain (published in this report). The atrazine probability map for the western Snake River Plain was constructed by overlaying ground-water quality data with hydrogeologic and anthropogenic data in a geographic information system (GIS). A data set was produced in which each well had corresponding information on land use, geology, precipitation, soil characteristics, regional depth to ground water, well depth, water level, and atrazine use. These data were analyzed by logistic regression using a statistical software package. Several preliminary multivariate models were developed and those that best predicted the detection of atrazine were selected. The multivariate models then were entered into a GIS and the probability maps were produced. Land use, precipitation, soil hydrologic group, and well depth were significantly correlated with atrazine detections in the western Snake River Plain. These variables also were important in the 1998 probability study of the upper Snake River Basin. The effectiveness of the probability models for atrazine might be improved if more detailed data were available for atrazine application. A preliminary atrazine probability map for the entire Snake River Plain in Idaho, based on a data set representing that region, also was produced. In areas where this map overlaps the 1998 map of the upper Snake River Basin, the two maps show broadly similar probabilities of detecting atrazine. Logistic regression also was used to develop a preliminary statistical model that predicts the probability of detecting elevated nitrate in the western Snake River Plain. A nitrate probability map was produced from this model. Results showed that elevated nitrate concentrations were correlated with land use, soil organic content, well depth, and water level. Detailed information on nitrate input, specifically fertilizer application, might have improved the effectiveness of this model.
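A small sketch of the core statistical step described above: fit a logistic regression to detections and evaluate the fitted model on grid cells to form a probability map. The predictors and the synthetic data below are placeholders for the well data described in the report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)

# Hypothetical well data set: predictors are agricultural land use fraction,
# precipitation (cm/yr), soil hydrologic group (coded), and well depth (m);
# the response is whether atrazine was detected.
n = 400
X = np.column_stack([rng.uniform(0, 1, n),       # agricultural land use
                     rng.uniform(20, 60, n),     # precipitation
                     rng.integers(1, 5, n),      # soil hydrologic group
                     rng.uniform(5, 150, n)])    # well depth
logit = -2 + 3 * X[:, 0] + 0.03 * X[:, 1] - 0.2 * X[:, 2] - 0.01 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # synthetic detections

model = LogisticRegression(max_iter=1000).fit(X, y)

# "Probability map": evaluate the fitted model on GIS raster cells
# (a single illustrative cell shown here).
cell = np.array([[0.8, 45.0, 3, 20.0]])
print("P(atrazine detection) =", model.predict_proba(cell)[0, 1])
```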
Chang, Edward F; Breshears, Jonathan D; Raygor, Kunal P; Lau, Darryl; Molinaro, Annette M; Berger, Mitchel S
2017-01-01
OBJECTIVE Functional mapping using direct cortical stimulation is the gold standard for the prevention of postoperative morbidity during resective surgery in dominant-hemisphere perisylvian regions. Its role is necessitated by the significant interindividual variability that has been observed for essential language sites. The aim in this study was to determine the statistical probability distribution of eliciting aphasic errors for any given stereotactically based cortical position in a patient cohort and to quantify the variability at each cortical site. METHODS Patients undergoing awake craniotomy for dominant-hemisphere primary brain tumor resection between 1999 and 2014 at the authors' institution were included in this study, which included counting and picture-naming tasks during dense speech mapping via cortical stimulation. Positive and negative stimulation sites were collected using an intraoperative frameless stereotactic neuronavigation system and were converted to Montreal Neurological Institute coordinates. Data were iteratively resampled to create mean and standard deviation probability maps for speech arrest and anomia. Patients were divided into groups with a "classic" or an "atypical" location of speech function, based on the resultant probability maps. Patient and clinical factors were then assessed for their association with an atypical location of speech sites by univariate and multivariate analysis. RESULTS Across 102 patients undergoing speech mapping, the overall probabilities of speech arrest and anomia were 0.51 and 0.33, respectively. Speech arrest was most likely to occur with stimulation of the posterior inferior frontal gyrus (maximum probability from individual bin = 0.025), and variance was highest in the dorsal premotor cortex and the posterior superior temporal gyrus. In contrast, stimulation within the posterior perisylvian cortex resulted in the maximum mean probability of anomia (maximum probability = 0.012), with large variance in the regions surrounding the posterior superior temporal gyrus, including the posterior middle temporal, angular, and supramarginal gyri. Patients with atypical speech localization were far more likely to have tumors in canonical Broca's or Wernicke's areas (OR 7.21, 95% CI 1.67-31.09, p < 0.01) or to have multilobar tumors (OR 12.58, 95% CI 2.22-71.42, p < 0.01) than were patients with classic speech localization. CONCLUSIONS This study provides statistical probability distribution maps for aphasic errors during cortical stimulation mapping in a patient cohort. Thus, the authors provide an expected probability of inducing speech arrest and anomia from specific 10-mm² cortical bins in an individual patient. In addition, they highlight key regions of interindividual mapping variability that should be considered preoperatively. They believe these results will aid surgeons in their preoperative planning of eloquent cortex resection.
Neutrino mass priors for cosmology from random matrices
Long, Andrew J.; Raveri, Marco; Hu, Wayne; ...
2018-02-13
Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σm_ν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π(Σm_ν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix M_ν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over M_ν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σm_ν that we interpret as a Bayesian prior probability π(Σm_ν). Assuming a basis-invariant probability distribution on M_ν, also known as the anarchy hypothesis, we find that π(Σm_ν) peaks close to the smallest Σm_ν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π(Σm_ν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. In conclusion, we present fitting functions for π(Σm_ν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
Neutrino mass priors for cosmology from random matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Andrew J.; Raveri, Marco; Hu, Wayne
Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σm_ν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π(Σm_ν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix M_ν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over M_ν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σm_ν that we interpret as a Bayesian prior probability π(Σm_ν). Assuming a basis-invariant probability distribution on M_ν, also known as the anarchy hypothesis, we find that π(Σm_ν) peaks close to the smallest Σm_ν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π(Σm_ν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. In conclusion, we present fitting functions for π(Σm_ν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
Nonlinear Spatial Inversion Without Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Curtis, A.; Nawaz, A.
2017-12-01
High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods would converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as `localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable, so these do not need to be estimated from samples as is required in MC methods. On a 2-D test example the method is shown to outperform previous methods significantly, and at a fraction of the computational cost. In many foreseeable applications there are therefore no serious impediments to extending the method to 3-D spatial models.
NASA Astrophysics Data System (ADS)
Park-Martinez, Jayne Irene
The purpose of this study was to assess the effects of node-link mapping on students' meaningful learning and conceptual change in a 1-semester introductory life-science course. This study used node-link mapping to integrate and apply the National Research Council's (NRC, 2005) three principles of human learning: engaging students' prior knowledge, fostering their metacognition, and supporting their formulation of a scientific conceptual framework. The study was a quasi-experimental, pretest-posttest, control group design. The sample consisted of 68 primarily freshmen non-science majors enrolled in two intact sections of the targeted course. Both groups received the same teacher-centered instruction and student-centered activities designed to promote meaningful learning and conceptual change; however, the activity format differed. Control group activities were written; treatment group activities were node-link mapped. Prior to instruction, both groups demonstrated equivalent knowledge and misconceptions associated with genetics and evolution (GE), and ecology and environmental science (EE). Mean differences, pre-to-post instruction, on the GE and EE meaningful learning exam scores and the EE conceptual change inventory scores between the writing group (control) and the node-link mapping group (treatment) were analyzed using repeated measures MANOVAs. There were no significant mean pre-to-post differences between groups with respect to meaningful learning in the GE or EE units, or conceptual change in the EE unit. However, independent of group membership, the overall mean pre-to-post increases in meaningful learning and conceptual change were significant. These findings suggest that both node-link mapping and writing, when used in conjunction with the National Research Council's (NRC, 2005) three principles of human learning, can promote meaningful learning and conceptual change. The only significant interaction found with respect to meaningful learning, conceptual change, and learning styles (Kolb, 2005) was a positive effect of node-link mapping on converger's meaningful learning. However, that result was probably an artifact of small sample size rather than a true treatment effect. No other significant interactions were found. These results suggest that all students, regardless of their learning style, can benefit from either node-link mapping or writing to promote meaningful learning and conceptual change in general life-science courses.
Segmentation and automated measurement of chronic wound images: probability map approach
NASA Astrophysics Data System (ADS)
Ahmad Fauzi, Mohammad Faizal; Khansa, Ibrahim; Catignani, Karen; Gordillo, Gayle; Sen, Chandan K.; Gurcan, Metin N.
2014-03-01
An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than 25 billion US dollars and countless hours spent annually for all aspects of chronic wound care. There is a need to develop software tools to analyze wound images that characterize wound tissue composition, measure their size, and monitor changes over time. This process, when done manually, is time-consuming and subject to intra- and inter-reader variability. In this paper, we propose a method that can characterize chronic wounds containing granulation, slough and eschar tissues. First, we generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the region growing segmentation process. The red, yellow and black probability maps are designed to handle the granulation, slough and eschar tissues, respectively, found in wound tissues, while the white probability map is designed to detect the white label card for measurement calibration purposes. The innovative aspects of this work include: 1) definition of a wound-characteristics-specific probability map for segmentation; 2) computationally efficient region growing on a 4D map; 3) auto-calibration of measurements with the content of the image. The method was applied to 30 wound images provided by the Ohio State University Wexner Medical Center, with the ground truth independently generated by the consensus of two clinicians. While the inter-reader agreement between the readers is 85.5%, the computer achieves an accuracy of 80%.
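A hedged sketch of what a colour-based RYKW probability map could look like: per-pixel class probabilities from distance to reference colours, normalized with a softmax. The reference colours and the softmax form are illustrative assumptions rather than the paper's actual definition; the resulting maps would then seed and guide the region-growing segmentation described above.

```python
import numpy as np

# Illustrative reference colours (RGB, 0-1) for the four probability maps.
reference = {"red":    np.array([0.70, 0.15, 0.15]),   # granulation
             "yellow": np.array([0.80, 0.75, 0.30]),   # slough
             "black":  np.array([0.10, 0.10, 0.10]),   # eschar
             "white":  np.array([0.95, 0.95, 0.95])}   # calibration label card

def rykw_probability_maps(image, sharpness=20.0):
    """Per-pixel probability of belonging to each class, computed from colour
    distance to the reference colours and normalized with a softmax.
    `image` is an HxWx3 float array in [0, 1]."""
    dists = np.stack([np.linalg.norm(image - c, axis=-1)
                      for c in reference.values()], axis=-1)
    logits = -sharpness * dists
    logits -= logits.max(axis=-1, keepdims=True)        # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return dict(zip(reference.keys(), np.moveaxis(probs, -1, 0)))

toy = np.random.default_rng(5).random((64, 64, 3))
maps = rykw_probability_maps(toy)
print(maps["red"].shape, float(maps["red"].max()))
```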
Wolf, R; Orsel, K; De Buck, J; Kanevets, U; Barkema, H W
2016-04-01
Mycobacterium avium ssp. paratuberculosis (MAP) causes Johne's disease, a production-limiting disease in cattle. Detection of infected herds is often done using environmental samples (ES) of manure, which are collected in cattle pens and manure storage areas. Disadvantages of the method are that sample accuracy is affected by cattle housing and type of manure storage area. Furthermore, some sampling locations (e.g., manure lagoons) are frequently not readily accessible. However, sampling socks (SO), as used for Salmonella spp. testing in chicken flocks, might be an easy to use and accurate alternative to ES. The objective of the study was to assess accuracy of SO for detection of MAP in dairy herds. At each of 102 participating herds, 6 ES and 2 SO were collected. In total, 45 herds had only negative samples in both methods and 29 herds had ≥1 positive ES and ≥1 positive SO. Furthermore, 27 herds with ≥1 positive ES had no positive SO, and 1 herd with no positive ES had 1 positive SO. Bayesian simulation with informative priors on sensitivity of ES and MAP herd prevalence provided a posterior sensitivity for SO of 43.5% (95% probability interval=33-58), and 78.5% (95% probability interval=62-93) for ES. Although SO were easy to use, accuracy was lower than for ES. Therefore, with improvements in the sampling protocol (e.g., more SO per farm and more frequent herd visits), as well as improvements in the laboratory protocol, perhaps SO would be a useful alternative for ES. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
Influence of Betaxolol on the Methamphetamine Dependence in Mice.
Kim, Byoung-Jo; Park, Jong-Il; Eun, Hun-Jeong; Yang, Jong-Chul
2016-05-01
The noradrenaline system is involved in the reward effects of various kinds of abused drugs. Betaxolol (BTX) is a highly selective β1-antagonist. In the present study, we evaluated the effect of BTX on methamphetamine (MAP)-induced conditioned place preference (CPP) and hyperactivity in mice. The mice (n=72) were treated with MAP or saline every other day for a total of 6 days (from day 3 to day 8; 3-times MAP and 3-times saline). Each mouse was given saline (1 mL/kg) or MAP (1 mg/kg, s.c.) or BTX (5 mg/kg, i.p.) or MAP with BTX (5 mg/kg, i.p.) 30 min prior to the administration of MAP (1 mg/kg, s.c.) every other day and paired with the conditioning compartment for 1 h (three-drug and three-saline sessions). We then compared the CPP score between the two groups. After the extinction of CPP, the mice were given BTX (5 mg/kg, i.p.) or saline (1 mL/kg) 24 h prior to a priming injection of MAP, and were then immediately tested to see whether the place preference was reinstated. The repeated administration of BTX 30 min prior to the exposure to MAP significantly reduced the development of MAP-induced CPP. When BTX was administered 24 h prior to the CPP-testing session on day 9, it also significantly attenuated the CPP, but did not result in any change of locomotor activity. In the drug-priming reinstatement study, the extinguished CPP was reinstated by a MAP (0.125 mg/kg, s.c.) injection and this was significantly attenuated by BTX. These findings suggest that BTX has a therapeutic and preventive effect on the development, expression, and drug-priming reinstatement of MAP-induced CPP.
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
Mestres-Missé, Anna; Trampel, Robert; Turner, Robert; Kotz, Sonja A
2016-04-01
A key aspect of optimal behavior is the ability to predict what will come next. To achieve this, we must have a fairly good idea of the probability of occurrence of possible outcomes. This is based both on prior knowledge about a particular or similar situation and on immediately relevant new information. One question that arises is: when considering converging prior probability and external evidence, is the most probable outcome selected or does the brain represent degrees of uncertainty, even highly improbable ones? Using functional magnetic resonance imaging, the current study explored these possibilities by contrasting words that differ in their probability of occurrence, namely, unbalanced ambiguous words and unambiguous words. Unbalanced ambiguous words have a strong frequency-based bias towards one meaning, while unambiguous words have only one meaning. The current results reveal larger activation in lateral prefrontal and insular cortices in response to dominant ambiguous compared to unambiguous words even when prior and contextual information biases one interpretation only. These results suggest a probability distribution, whereby all outcomes and their associated probabilities of occurrence--even if very low--are represented and maintained.
Calibration of the DRASTIC ground water vulnerability mapping method
Rupert, M.G.
2001-01-01
Ground water vulnerability maps developed using the DRASTIC method have been produced in many parts of the world. Comparisons of those maps with actual ground water quality data have shown that the DRASTIC method is typically a poor predictor of ground water contamination. This study significantly improved the effectiveness of a modified DRASTIC ground water vulnerability map by calibrating the point rating schemes to actual ground water quality data by using nonparametric statistical techniques and a geographic information system. Calibration was performed by comparing data on nitrite plus nitrate as nitrogen (NO2 + NO3-N) concentrations in ground water to land-use, soils, and depth to first-encountered ground water data. These comparisons showed clear statistical differences between NO2 + NO3-N concentrations and the various categories. Ground water probability point ratings for NO2 + NO3-N contamination were developed from the results of these comparisons, and a probability map was produced. This ground water probability map was then correlated with an independent set of NO2 + NO3-N data to demonstrate its effectiveness in predicting elevated NO2 + NO3-N concentrations in ground water. This correlation demonstrated that the probability map was effective, but a vulnerability map produced with the uncalibrated DRASTIC method in the same area and using the same data layers was not effective. Considerable time and expense have been outlaid to develop ground water vulnerability maps with the DRASTIC method. This study demonstrates a cost-effective method to improve and verify the effectiveness of ground water vulnerability maps.
SE Great Basin Play Fairway Analysis
Adam Brandt
2015-11-15
This submission includes a map of the probability that the Na/K geothermometer temperature exceeds 200 deg C, as well as two play fairway analysis (PFA) models. The probability map acts as a composite risk segment for the PFA models. The two PFA models differ in their application of magnetotelluric conductors as composite risk segments. These PFA models map out the geothermal potential of the SE Great Basin region of Utah.
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
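A short sketch of the posterior maximization, using what is commonly cited as Knuth's log-posterior for an M-bin equal-width histogram; the exact expression should be checked against the optBINS paper and code before relying on it.

```python
import numpy as np
from scipy.special import gammaln

def log_posterior_bins(data, M):
    """Relative log posterior probability of an M-bin uniform-width histogram
    under a multinomial likelihood and a non-informative (Jeffreys-type) prior,
    as commonly quoted for Knuth's rule."""
    N = len(data)
    counts, _ = np.histogram(data, bins=M)
    return (N * np.log(M)
            + gammaln(M / 2.0)
            - M * gammaln(0.5)
            - gammaln(N + M / 2.0)
            + np.sum(gammaln(counts + 0.5)))

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 0.5, 800), rng.normal(1, 1.0, 1200)])

M_grid = np.arange(2, 150)
logp = np.array([log_posterior_bins(data, M) for M in M_grid])
print("optimal number of bins:", M_grid[logp.argmax()])
```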
Subcortical structure segmentation using probabilistic atlas priors
NASA Astrophysics Data System (ADS)
Gouttard, Sylvain; Styner, Martin; Joshi, Sarang; Smith, Rachel G.; Cody Hazlett, Heather; Gerig, Guido
2007-03-01
The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic analysis. The volumetric and shape parameters of structures such as lateral ventricles, putamen, caudate, hippocampus, pallidus and amygdala are employed to characterize a disease or its evolution. This paper presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas prior, alongside a comprehensive validation. Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a training set of MR images with corresponding manual segmentations. The atlas building computes an average image along with transformation fields mapping each training case to the average image. These transformation fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity inhomogeneity corrected, skull stripped and intensity calibrated to the atlas. Then the atlas image is registered to the image using an affine followed by a deformable registration matching the gray level intensity. Finally, the registration transformation is applied to the probabilistic maps of each structure, which are then thresholded at 0.5 probability. Using manual segmentations for comparison, measures of volumetric differences show high correlation with our results. Furthermore, the Dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all structures and is close to 80% for basal ganglia. The intraclass correlation coefficient computed on these same datasets shows a good inter-method correlation of the volumetric measurements. Using a dataset of a single patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variation of less than 2 percent over the whole dataset. Overall, these validation and reliability studies show that our method accurately and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively low correlation with the manual segmentation in at least one of the validation studies, whereas they still show appropriate Dice overlap coefficients.
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, with the aim of giving a better understanding of the theory. In this research, we used Bayesian theory to detect hepatitis disease and to display the result of the diagnosis process. The Bayesian approach, rediscovered and perfected by Laplace, rests on a basic idea: use the known prior probability and conditional probability density parameters, apply Bayes' theorem to calculate the corresponding posterior probability, and then use that posterior probability to make inferences and decisions. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever and headache, and we compute the probability of hepatitis given the presence of malaise, fever and headache. The result revealed that Bayesian theory successfully identified the existence of hepatitis disease.
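A minimal sketch of the kind of Bayes-rule computation described above, assuming conditionally independent symptoms (a naive-Bayes combination). The prior and the conditional probabilities are hypothetical placeholders, not values from the paper.

```python
# assumed prior probability of hepatitis (illustrative)
p_hepatitis = 0.10

# assumed (P(symptom | hepatitis), P(symptom | no hepatitis))
likelihoods = {
    "malaise":  (0.80, 0.20),
    "fever":    (0.70, 0.15),
    "headache": (0.60, 0.30),
}

# naive-Bayes style combination: symptoms treated as conditionally independent
num = p_hepatitis
den = 1.0 - p_hepatitis
for p_given_h, p_given_not_h in likelihoods.values():
    num *= p_given_h
    den *= p_given_not_h

posterior = num / (num + den)
print(f"P(hepatitis | malaise, fever, headache) = {posterior:.3f}")
```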
Statistics provide guidance for indigenous organic carbon detection on Mars missions.
Sephton, Mark A; Carter, Jonathan N
2014-08-01
Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. (iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
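A short sketch of the core argument: each positive detection whose true-positive to false-positive ratio r exceeds one multiplies the odds of indigenous organic carbon by r, so repeated detections progressively raise the posterior. The prior and ratio below are illustrative, not values from the paper.

```python
def posterior_after_detections(prior, tp_fp_ratio, n_detections):
    """Posterior probability of indigenous organic carbon after repeated positive detections."""
    odds = prior / (1.0 - prior)
    odds *= tp_fp_ratio ** n_detections
    return odds / (1.0 + odds)

prior = 0.05   # assumed prior probability of indigenous organic carbon
ratio = 3.0    # assumed true-positive : false-positive ratio (> 1)
for k in range(1, 6):
    print(k, round(posterior_after_detections(prior, ratio, k), 3))
```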
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds using a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D points onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as the hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
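A sketch of a BaySAC-style inlier-probability update, following the general BaySAC idea of lowering, via Bayes' rule, the inlier probability of each point in a hypothesis set that fails verification; the simplified rule used in this paper may differ in detail, and the numbers below are illustrative.

```python
import numpy as np

def update_on_failure(p, hypothesis_idx):
    """p: array of inlier probabilities; hypothesis_idx: indices of the failed hypothesis set."""
    p = p.copy()
    p_all = np.prod(p[hypothesis_idx])    # P(all points in the hypothesis set are inliers)
    for i in hypothesis_idx:
        # P(i is an inlier | the set contains at least one outlier)
        p[i] = (p[i] - p_all) / (1.0 - p_all)
    return p

p = np.array([0.9, 0.8, 0.7, 0.6, 0.5])   # prior inlier probabilities (illustrative)
p = update_on_failure(p, [0, 1, 2])       # hypothesis set {0, 1, 2} failed verification
print(np.round(p, 3))
```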
Tustison, Nicholas J; Shrinidhi, K L; Wintermark, Max; Durst, Christopher R; Kandel, Benjamin M; Gee, James C; Grossman, Murray C; Avants, Brian B
2015-04-01
Segmenting and quantifying gliomas from MRI is an important task for diagnosis, planning intervention, and for tracking tumor changes over time. However, this task is complicated by the lack of prior knowledge concerning tumor location, spatial extent, shape, possible displacement of normal tissue, and intensity signature. To accommodate such complications, we introduce a framework for supervised segmentation based on multiple modality intensity, geometry, and asymmetry feature sets. These features drive a supervised whole-brain and tumor segmentation approach based on random forest-derived probabilities. The asymmetry-related features (based on optimal symmetric multimodal templates) demonstrate excellent discriminative properties within this framework. We also gain performance by generating probability maps from random forest models and using these maps for a refining Markov random field regularized probabilistic segmentation. This strategy allows us to interface the supervised learning capabilities of the random forest model with regularized probabilistic segmentation using the recently developed ANTsR package--a comprehensive statistical and visualization interface between the popular Advanced Normalization Tools (ANTs) and the R statistical project. The reported algorithmic framework was the top-performing entry in the MICCAI 2013 Multimodal Brain Tumor Segmentation challenge. The challenge data were widely varying consisting of both high-grade and low-grade glioma tumor four-modality MRI from five different institutions. Average Dice overlap measures for the final algorithmic assessment were 0.87, 0.78, and 0.74 for "complete", "core", and "enhanced" tumor components, respectively.
A new prior for bayesian anomaly detection: application to biosurveillance.
Shen, Y; Cooper, G F
2010-01-01
Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated from historical data. The evaluation reported here used semi-synthetic data to compare the detection performance of the proposed Bayesian method and a control chart method, a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of observed events being monitored, due to a novel, closed-form derivation that is introduced in the paper. This paper introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy to apply, computationally efficient, and performs as well as or better than a standard frequentist method.
Probability hazard map for future vent opening at Etna volcano (Sicily, Italy).
NASA Astrophysics Data System (ADS)
Brancato, Alfonso; Tusa, Giuseppina; Coltelli, Mauro; Proietti, Cristina
2014-05-01
Mount Etna is a composite stratovolcano located along the Ionian coast of eastern Sicily. The frequent occurrence of flank eruptions (at intervals of years, mostly concentrated along the NE, S and W rift zones) leads to a high volcanic hazard that, combined with intense urbanization, poses a high volcanic risk. A long-term volcanic hazard assessment, mainly based on the past behaviour of the Etna volcano, is the basic tool for the evaluation of this risk. A reliable forecast of where the next eruption will occur is therefore needed. A computer-assisted analysis and probabilistic evaluations will provide the corresponding map, thus allowing identification of the areas prone to the highest hazard. On these grounds, the use of a code such as BET_EF (Bayesian Event Tree_Eruption Forecasting) has shown that a suitable analysis can be carried out (Selva et al., 2012). In the analysis we are performing, a total of 6886 point vents referring to the last 4.0 ka of Etna flank activity, spread over an area of 744 km2 (divided into N=2976 square cells with sides of 500 m), allowed us to estimate a pdf by applying a Gaussian kernel. The probability values represent a complete set of mutually exclusive outcomes, and their sum is normalized to one over the investigated area; the basic assumptions of a Dirichlet distribution (the prior distribution set in the BET_EF code (Marzocchi et al., 2004, 2008)) therefore still hold. One fundamental parameter is the equivalent number of data, which depicts our confidence in the best-guess probability. The BET_EF code also works with a likelihood function. This is modelled by a Multinomial distribution, with parameters representing the number of vents in each cell and the total number of past data (i.e. the 6886 point vents). Given the grid of N cells, the final posterior distribution is evaluated by multiplying the a priori Dirichlet probability distribution with the past data in each cell through the likelihood. The probability hazard map shows that probability tends to concentrate along the NE and S rifts, as well as in the Valle del Bove, increasing the difference in probability between these areas and the rest of the volcanic edifice. It is worth noting that elevated probability is still evident along the W rift, even if not comparable with that of the above-mentioned areas. References Marzocchi W., Sandri L., Gasparini P., Newhall C. and Boschi E.; 2004: Quantifying probabilities of volcanic events: The example of volcanic hazard at Mount Vesuvius, J. Geophys. Res., 109, B11201, doi:10.1029/2004JB00315U. Marzocchi W., Sandri, L. and Selva, J.; 2008: BET_EF: a probabilistic tool for long- and short-term eruption forecasting, Bull. Volcanol., 70, 623 - 632, doi: 10.1007/s00445-007-0157-y. Selva J., Orsi G., Di Vito M.A., Marzocchi W. and Sandri L.; 2012: Probability hazard map for future vent opening at the Campi Flegrei caldera, Italy, Bull. Volcanol., 74, 497 - 510, doi: 10.1007/s00445-011-0528-2.
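A minimal sketch of the Dirichlet-multinomial update underlying the vent-opening node: a Dirichlet prior whose best-guess probabilities come from the kernel-density map and whose total weight is the equivalent number of data is combined with past vent counts, and the posterior mean gives the per-cell probability. All numbers below are illustrative, not the Etna values.

```python
import numpy as np

prior_prob = np.array([0.40, 0.25, 0.15, 0.12, 0.08])  # kernel-density best guess per cell, sums to 1
equiv_n = 10.0                                          # "equivalent number of data" (confidence in the prior)
alpha_prior = equiv_n * prior_prob                      # Dirichlet prior parameters

vent_counts = np.array([30, 5, 2, 1, 2])                # past vents counted in each cell (illustrative)
alpha_post = alpha_prior + vent_counts                  # Dirichlet posterior after the multinomial likelihood

posterior_mean = alpha_post / alpha_post.sum()          # posterior vent-opening probability per cell
print(np.round(posterior_mean, 3))
```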
Fanshawe, T. R.
2015-01-01
There are many examples from the scientific literature of visual search tasks in which the length, scope and success rate of the search have been shown to vary according to the searcher's expectations of whether the search target is likely to be present. This phenomenon has major practical implications, for instance in cancer screening, when the prevalence of the condition is low and the consequences of a missed disease diagnosis are severe. We consider this problem from an empirical Bayesian perspective to explain how the effect of a low prior probability, subjectively assessed by the searcher, might impact on the extent of the search. We show how the searcher's posterior probability that the target is present depends on the prior probability and the proportion of possible target locations already searched, and also consider the implications of imperfect search, when the probability of false-positive and false-negative decisions is non-zero. The theoretical results are applied to two studies of radiologists' visual assessment of pulmonary lesions on chest radiographs. Further application areas in diagnostic medicine and airport security are also discussed. PMID:26587267
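A short sketch of the central relationship discussed above: the searcher's posterior probability that the target is present after a fraction f of possible locations has been searched without a detection, allowing an imperfect per-location sensitivity s (setting s = 1 recovers perfect search). This is a simplified stand-in for the paper's model, with illustrative numbers.

```python
def posterior_target_present(prior, f, sensitivity=1.0):
    """Posterior P(target present | fraction f searched, no detection)."""
    p_no_detect_given_present = 1.0 - f * sensitivity
    p_no_detect = prior * p_no_detect_given_present + (1.0 - prior)
    return prior * p_no_detect_given_present / p_no_detect

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f, round(posterior_target_present(prior=0.02, f=f, sensitivity=0.9), 4))
```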
Topics in inference and decision-making with partial knowledge
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.
Geostatistical risk estimation at waste disposal sites in the presence of hot spots.
Komnitsas, Kostas; Modis, Kostas
2009-05-30
The present paper aims to estimate risk by using geostatistics at the wider coal mining/waste disposal site of Belkovskaya, Tula region, in Russia. In this area the presence of hot spots causes a spatial trend in the mean value of the random field and a non-Gaussian data distribution. Prior to application of geostatistics, subtraction of trend and appropriate smoothing and transformation of the data into a Gaussian form were carried out; risk maps were then generated for the wider study area in order to assess the probability of exceeding risk thresholds. Finally, the present paper discusses the need for homogenization of soil risk thresholds regarding hazardous elements that will enhance reliability of risk estimation and enable application of appropriate rehabilitation actions in contaminated areas.
Improved Neuroimaging Atlas of the Dentate Nucleus.
He, Naying; Langley, Jason; Huddleston, Daniel E; Ling, Huawei; Xu, Hongmin; Liu, Chunlei; Yan, Fuhua; Hu, Xiaoping P
2017-12-01
The dentate nucleus (DN) of the cerebellum is the major output nucleus of the cerebellum and is rich in iron. Quantitative susceptibility mapping (QSM) provides better iron-sensitive MRI contrast to delineate the boundary of the DN than either T2-weighted images or susceptibility-weighted images. Prior DN atlases used T2-weighted or susceptibility-weighted images to create DN atlases. Here, we employ QSM images to develop an improved dentate nucleus atlas for use in imaging studies. The DN was segmented in QSM images from 38 healthy volunteers. The resulting DN masks were transformed to a common space and averaged to generate the DN atlas. The center of mass of the left and right sides of the QSM-based DN atlas in the Montreal Neurological Institute space was -13.8, -55.8, and -36.4 mm, and 13.8, -55.7, and -36.4 mm, respectively. The maximal probability and mean probability of the DN atlas with the individually segmented DNs in this cohort were 100 and 39.3%, respectively, in contrast to the maximum probability of approximately 75% and the mean probability of 23.4 to 33.7% with earlier DN atlases. Using QSM, which provides superior iron-sensitive MRI contrast for delineating iron-rich structures, an improved atlas for the dentate nucleus has been generated. The atlas can be applied to investigate the role of the DN in both normal cortico-cerebellar physiology and the variety of disease states in which it is implicated.
Middle-School Students' Map Construction: Understanding Complex Spatial Displays.
ERIC Educational Resources Information Center
Bausmith, Jennifer Merriman; Leinhardt, Gaea
1998-01-01
Examines the map-making process of middle-school students to determine which actions influence their accuracy, how prior knowledge helps their map construction, and what lessons can be learned from map making. Indicates that instruction that focuses on recognition of interconnections between map elements can promote map reasoning skills. (DSK)
Anomalous ionospheric TEC variations prior to the Indonesian earthquake (M 7.1) of November 15, 2014
NASA Astrophysics Data System (ADS)
Alcay, Salih
2017-05-01
This paper investigates preearthquake ionospheric variations using the total electron content (TEC) of Global Ionospheric Maps (GIMs) and regional maps based on Precise Point Positioning (PPP) during the M 7.1 Indonesian earthquake that occurred on November 15, 2014. TEC maps corresponding to 10 days before to 4 days after the event were examined, and a time series of TEC values from the PPP maps was also evaluated. In addition to the GIMs, it was possible to detect TEC variations with the PPP maps. The results showed that ionospheric TEC decreased strikingly 4 days prior to the earthquake. This TEC variation was highly likely related to seismic activity.
2013-09-01
... partner agencies and nations, detects, tracks, and interdicts illegal drug-trafficking in this region. In this thesis, we develop a probability model based on intelligence inputs to generate a spatial temporal heat map specifying the ... complement and vet such complicated simulation by developing more analytically tractable models. We develop probability models to generate a heat map ...
NASA Technical Reports Server (NTRS)
Backus, George E.
1999-01-01
The purpose of the grant was to study how prior information about the geomagnetic field can be used to interpret surface and satellite magnetic measurements, to generate quantitative descriptions of prior information that might be so used, and to use this prior information to obtain from satellite data a model of the core field with statistically justifiable error estimates. The need for prior information in geophysical inversion has long been recognized. Data sets are finite, and faithful descriptions of aspects of the earth almost always require infinite-dimensional model spaces. By themselves, the data can confine the correct earth model only to an infinite-dimensional subset of the model space. Earth properties other than direct functions of the observed data cannot be estimated from those data without prior information about the earth. Prior information is based on what the observer already knows before the data become available. Such information can be "hard" or "soft". Hard information is a belief that the real earth must lie in some known region of model space. For example, the total ohmic dissipation in the core is probably less than the total observed geothermal heat flow out of the earth's surface. (In principle, ohmic heat in the core can be recaptured to help drive the dynamo, but this effect is probably small.) "Soft" information is a probability distribution on the model space, a distribution that the observer accepts as a quantitative description of her/his beliefs about the earth. The probability distribution can be a subjective prior in the sense of Bayes or the objective result of a statistical study of previous data or relevant theories.
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes this divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in forming the prior for MAP PET image reconstruction. The dictionary used to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to that of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those obtained with the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
NASA Astrophysics Data System (ADS)
Tachibana, Tomihisa; Tanahashi, Katsuto; Mochizuki, Toshimitsu; Shirasawa, Katsuhiko; Takato, Hidetaka
2018-04-01
Bifacial interdigitated-back-contact (IBC) silicon solar cells with a high bifaciality of 0.91 were fabricated. Screen printing and firing technology were used to reduce the production cost. For the first time, the relationship between the rear side structure and carrier collection probability was evaluated using internal quantum efficiency (IQE) mapping. The measurement results showed that the screen-printed electrode and back surface field (BSF) area led to low IQE. The low carrier collection probability by BSF area can be explained by electrical shading effects. Thus, it is clear that the IQE mapping system is useful to evaluate the IBC cell.
Minimization for conditional simulation: Relationship to optimal transport
NASA Astrophysics Data System (ADS)
Oliver, Dean S.
2014-05-01
In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
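A minimal sketch of one of the minimization-based samplers discussed above (a randomized-maximum-likelihood-style scheme): each conditional sample is obtained by minimizing a perturbed data-mismatch plus model-mismatch objective. The dimensions, the forward operator g, and the covariances below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m_prior, C_M = np.array([1.0, 2.0]), np.diag([0.5, 0.5])   # prior mean and covariance (illustrative)
d_obs, C_D = np.array([3.1]), np.diag([0.1])                # observation and noise covariance (illustrative)

def g(m):
    """Assumed mildly nonlinear forward model."""
    return np.array([m[0] * m[1]])

def rml_sample():
    m_pert = rng.multivariate_normal(m_prior, C_M)   # perturbed prior sample
    d_pert = rng.multivariate_normal(d_obs, C_D)     # perturbed observation
    def cost(m):
        dm, dd = m - m_pert, g(m) - d_pert
        return 0.5 * dm @ np.linalg.solve(C_M, dm) + 0.5 * dd @ np.linalg.solve(C_D, dd)
    return minimize(cost, m_prior).x                  # each minimization yields one approximate posterior sample

samples = np.array([rml_sample() for _ in range(5)])
print(np.round(samples, 3))
```

For linear observation operators and a Gaussian prior, each such minimization is exact sampling; for the nonlinear g assumed here, the mapping is only approximate, as the abstract notes.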
Entanglement-enhanced Neyman-Pearson target detection using quantum illumination
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) provides entanglement-based target detection---in an entanglement-breaking environment---whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture---based on sum-frequency generation (SFG) and feedforward (FF) processing---for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic---detection probability versus false-alarm probability---for optimum QI target detection under the Neyman-Pearson criterion.
Wu, Zhiwei; He, Hong S; Liu, Zhihua; Liang, Yu
2013-06-01
Fuel load is often used to prioritize stands for fuel reduction treatments. However, wildfire size and intensity are not only related to fuel loads but also to a wide range of other spatially related factors such as topography, weather and human activity. In prioritizing fuel reduction treatments, we propose using burn probability to account for the effects of spatially related factors that can affect wildfire size and intensity. Our burn probability incorporated fuel load, ignition probability, and spread probability (spatial controls to wildfire) at a particular location across a landscape. Our goal was to assess differences in reducing wildfire size and intensity using fuel-load and burn-probability based treatment prioritization approaches. Our study was conducted in a boreal forest in northeastern China. We derived a fuel load map from a stand map and a burn probability map based on historical fire records and potential wildfire spread pattern. The burn probability map was validated using historical records of burned patches. We then simulated 100 ignitions and six fuel reduction treatments to compare fire size and intensity under two approaches of fuel treatment prioritization. We calibrated and validated simulated wildfires against historical wildfire data. Our results showed that fuel reduction treatments based on burn probability were more effective at reducing simulated wildfire size, mean and maximum rate of spread, and mean fire intensity, but less effective at reducing maximum fire intensity across the burned landscape than treatments based on fuel load. Thus, contributions from both fuels and spatially related factors should be considered for each fuel reduction treatment. Published by Elsevier B.V.
Mendes, Maria Paula; Ribeiro, Luís
2010-02-01
The Water Framework Directive and its daughter directives recognize the urgent need to adopt specific measures against the contamination of water by individual pollutants or groups of pollutants that present a significant risk to water quality. Probability maps showing where nitrate concentrations exceed a legal threshold value within the aquifer are used to assess the risk of groundwater quality degradation from intensive agricultural activity. In this paper we use Disjunctive Kriging to map the probability that the Nitrates Directive limit (91/676/EEC) is exceeded for the Nitrate Vulnerable Zone of the River Tagus alluvium aquifer. The Tagus alluvial aquifer system belongs to one of the most productive hydrogeological units of continental Portugal and is used to irrigate crops. Several groundwater monitoring campaigns were carried out from 2004 to 2006 according to the summer crop cycle. The study reveals more areas on the west bank with high probabilities of contamination by nitrates (nitrate concentration values above 50 mg/L) than on the east bank. The analysis of the synthetic temporal probability map shows the areas where nitrate concentrations increase during the summers. Copyright 2009 Elsevier B.V. All rights reserved.
Wang, Jinke; Cheng, Yuanzhi; Guo, Changyong; Wang, Yadong; Tamura, Shinichi
2016-05-01
We propose a fully automatic 3D segmentation framework to segment the liver in challenging cases with low contrast between adjacent organs and the presence of pathologies in abdominal CT images. First, all of the atlases in the selected training datasets are weighted by calculating the similarities between the atlases and the test image, to dynamically generate a subject-specific probabilistic atlas for the test image. The most likely liver region of the test image is further determined based on the generated atlas. A rough segmentation is obtained by a maximum a posteriori classification of the probability map, and the final liver segmentation is produced by a shape-intensity prior level set in the most likely liver region. Our method is evaluated and demonstrated on 25 test CT datasets from our partner site, and its results are compared with two state-of-the-art liver segmentation methods. Moreover, our performance results on 10 MICCAI test datasets were submitted to the organizers for comparison with the other automatic algorithms. Using the 25 test CT datasets, the average symmetric surface distance is [Formula: see text] mm (range 0.62-2.12 mm), the root mean square symmetric surface distance error is [Formula: see text] mm (range 0.97-3.01 mm), and the maximum symmetric surface distance error is [Formula: see text] mm (range 12.73-26.67 mm) for our method. Our method ranks 10th among all 47 automatic algorithms listed on the MICCAI site as of July 2015. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our method is a promising tool for improving segmentation efficiency. The applicability of the proposed method to some challenging clinical problems and to segmentation of the liver is demonstrated with good results in both quantitative and qualitative experiments. This study suggests that the proposed framework can be good enough to replace the time-consuming and tedious slice-by-slice manual segmentation approach.
Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function
ERIC Educational Resources Information Center
Fennell, John; Baddeley, Roland
2012-01-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…
Rufibach, Kaspar; Burger, Hans Ulrich; Abt, Markus
2016-09-01
Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not equal, to a β-distribution. Alternative priors are discussed, and practical recommendations to assess the sensitivity of Bayesian predictive power to its input parameters are provided. Copyright © 2016 John Wiley & Sons, Ltd.
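A minimal sketch of the definition above for a one-sided z-test: the power function is averaged over a normal prior on the true effect, by Monte Carlo and, for comparison, in closed form. The effect size, prior standard deviation, and standard error are illustrative, not trial values from the paper.

```python
import numpy as np
from scipy.stats import norm

def bayesian_predictive_power(prior_mean, prior_sd, se, alpha=0.025, n_draws=200_000):
    """Expectation of the z-test power function over a normal prior on the true effect."""
    rng = np.random.default_rng(0)
    delta = rng.normal(prior_mean, prior_sd, n_draws)        # draws from the prior
    power = norm.cdf(delta / se - norm.ppf(1 - alpha))        # power at each drawn effect size
    return power.mean()

# closed form for this setting: Phi((mu/se - z_{1-alpha}) / sqrt(1 + tau^2/se^2))
mu, tau, se = 0.3, 0.2, 0.1
closed = norm.cdf((mu / se - norm.ppf(0.975)) / np.sqrt(1 + (tau / se) ** 2))
print(bayesian_predictive_power(mu, tau, se), closed)
```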
Historical emissions critical for mapping decarbonization pathways
NASA Astrophysics Data System (ADS)
Majkut, J.; Kopp, R. E.; Sarmiento, J. L.; Oppenheimer, M.
2016-12-01
Policymakers have set a goal of limiting temperature increase from human influence on the climate. This motivates the identification of decarbonization pathways to stabilize atmospheric concentrations of CO2. In this context, the future behavior of CO2 sources and sinks define the CO2 emissions necessary to meet warming thresholds with specified probabilities. We adopt a simple model of the atmosphere-land-ocean carbon balance to reflect uncertainty in how natural CO2 sinks will respond to increasing atmospheric CO2 and temperature. Bayesian inversion is used to estimate the probability distributions of selected parameters of the carbon model. Prior probability distributions are chosen to reflect the behavior of CMIP5 models. We then update these prior distributions by running historical simulations of the global carbon cycle and inverting with observationally-based inventories and fluxes of anthropogenic carbon in the ocean and atmosphere. The result is a best-estimate of historical CO2 sources and sinks and a model of how CO2 sources and sinks will vary in the future under various emissions scenarios, with uncertainty. By linking the carbon model to a simple climate model, we calculate emissions pathways and carbon budgets consistent with meeting specific temperature thresholds and identify key factors that contribute to remaining uncertainty. In particular, we show how the assumed history of CO2 emissions from land use change (LUC) critically impacts estimates of the strength of the land CO2 sink via CO2 fertilization. Different estimates of historical LUC emissions taken from the literature lead to significantly different parameterizations of the carbon system. High historical CO2 emissions from LUC lead to a more robust CO2 fertilization effect, significantly lower future atmospheric CO2 concentrations, and an increased amount of CO2 that can be emitted to satisfy temperature stabilization targets. Thus, in our model, historical LUC emissions have a significant impact on allowable carbon budgets under temperature targets.
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
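A generic sketch of Wald's sequential probability ratio test, the decision procedure named above, for two simple Gaussian hypotheses about a mean. The hypotheses, error rates, and data are illustrative placeholders, not the paper's conjunction model.

```python
import numpy as np
from scipy.stats import norm

def sprt(observations, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequentially accumulate the log-likelihood ratio and compare to Wald's thresholds."""
    upper = np.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = np.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for k, x in enumerate(observations, start=1):
        llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "continue sampling", len(observations)

rng = np.random.default_rng(2)
print(sprt(rng.normal(1.0, 1.0, size=100), mu0=0.0, mu1=1.0, sigma=1.0))
```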
Van Horn, Richard; Fields, F.K.
1974-01-01
In the past, man has built on land that might be covered by floodwaters, with little consideration of the consequences. The result has been disastrous to those in the path of floodwaters and has cost thousands of lives and untold billions of dollars in property damage in the United States. Salt Lake County, of which the Sugar House quadrangle is a part, has had many floods in the past and can be expected to have more in the future. Construction has taken place in filled or dried-up marshes and lakes, in spring areas, and even in stream channels. Lack of prior knowledge of these and other forms of surface water (water at the surface of the ground) can increase construction and maintenance costs significantly. The map shows the area that probably will be covered by floods at least once in every 100 years on the long-term average (unit IRF, intermediate regional flood), the area that probably will be covered by floods from the worst possible combination of very wet weather and high streamflow reasonably expected of the area (unit SPF, standard project flood), the mapped extent of streamflow by channel shifting or flooding in the past 5,000 years (unit fa), and the probable maximum extent of damaging flash floods and mudflows from small valleys in the Wasatch Range. The map also shows the location of water at the surface of the ground: lakes, streams, springs, weep holes, canals, and reservoirs. Lakes and marshes that existed within the past 100 years, but now are drained, filled, or dried up, are also shown. The following examples show that the presence of water can be desirable or undesirable, depending on how the water occurs. Floods, the most spectacular form of surface water, may result in great property damage and loss of life. Lakes normally are beneficial, in that they may support plant growth and provide habitats for fish and other wildlife, provide water for livestock, and can be used for recreation. Springs may or may not be desirable: they may provide a source of water for domestic or stock use but are undesirable if they appear in a foundation excavation for a building. Thus, the location of areas that may be affected by floods and other surface water is important to people concerned with land-use planning, zoning, and legislation, and with the environment in which we must live.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Wang, Liang-Jen; Lin, Shih-Ku; Chen, Yi-Chih; Huang, Ming-Chyi; Chen, Tzu-Ting; Ree, Shao-Chun; Chen, Chih-Ken
Methamphetamine exerts neurotoxic effects and elicits psychotic symptoms. This study attempted to compare clinical differences between methamphetamine users with persistent psychosis (MAP) and patients with schizophrenia. In addition, we examined the discrimination validity by using symptom clusters to differentiate between MAP and schizophrenia. We enrolled 53 MAP patients and 53 patients with schizophrenia. The psychopathology of participants was assessed using the Chinese version of the Diagnostic Interview for Genetic Studies and the 18-item Brief Psychiatric Rating Scale. Logistic regression was used to examine the predicted probability scores of different symptom combinations on discriminating between MAP and schizophrenia. The receiver operating characteristic (ROC) analyses and area under the curve (AUC) were further applied to examine the discrimination validity of the predicted probability scores on differentiating between MAP and schizophrenia. We found that MAP and schizophrenia demonstrated similar patterns of delusions. Compared to patients with schizophrenia, MAP experienced significantly higher proportions of visual hallucinations and of somatic or tactile hallucinations. However, MAP exhibited significantly lower severity in conceptual disorganization, mannerism/posturing, blunted affect, emotional withdrawal, and motor retardation compared to patients with schizophrenia. The ROC analysis showed that a predicted probability score combining the aforementioned 7 items of symptoms could significantly differentiate between MAP and schizophrenia (AUC = 0.77). Findings in the current study suggest that nuanced differences might exist in the clinical presentation of secondary psychosis (MAP) and primary psychosis (schizophrenia). Combining the symptoms as a whole may help with differential diagnosis for MAP and schizophrenia. © 2016 S. Karger AG, Basel.
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
Effects of Prior Knowledge and Concept-Map Structure on Disorientation, Cognitive Load, and Learning
ERIC Educational Resources Information Center
Amadieu, Franck; van Gog, Tamara; Paas, Fred; Tricot, Andre; Marine, Claudette
2009-01-01
This study explored the effects of prior knowledge (high vs. low; HPK and LPK) and concept-map structure (hierarchical vs. network; HS and NS) on disorientation, cognitive load, and learning from non-linear documents on "the infection process of a retrograde virus (HIV)". Participants in the study were 24 adults. Overall subjective ratings of…
NASA Astrophysics Data System (ADS)
Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin
2010-05-01
This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models and were then compared by means of their validations. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears to be more accurate than those from the other models, and the results also showed that artificial neural networks are a useful tool for preparing collapse susceptibility maps and are highly compatible with GIS operating features. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.
Probabilistic mapping of flood-induced backscatter changes in SAR time series
NASA Astrophysics Data System (ADS)
Schlaffer, Stefan; Chini, Marco; Giustarini, Laura; Matgen, Patrick
2017-04-01
The information content of flood extent maps can be increased considerably by including information on the uncertainty of the flood area delineation. This additional information can be of benefit in flood forecasting and monitoring. Furthermore, flood probability maps can be converted to binary maps showing flooded and non-flooded areas by applying a threshold probability value pF = 0.5. In this study, a probabilistic change detection approach for flood mapping based on synthetic aperture radar (SAR) time series is proposed. For this purpose, conditional probability density functions (PDFs) for land and open water surfaces were estimated from ENVISAT ASAR Wide Swath (WS) time series containing >600 images using a reference mask of permanent water bodies. A pixel-wise harmonic model was used to account for seasonality in backscatter from land areas caused by soil moisture and vegetation dynamics. The approach was evaluated for a large-scale flood event along the River Severn, United Kingdom. The retrieved flood probability maps were compared to a reference flood mask derived from high-resolution aerial imagery by means of reliability diagrams. The obtained performance measures indicate both high reliability and confidence although there was a slight under-estimation of the flood extent, which may in part be attributed to topographically induced radar shadows along the edges of the floodplain. Furthermore, the results highlight the importance of local incidence angle for the separability between flooded and non-flooded areas as specular reflection properties of open water surfaces increase with a more oblique viewing geometry.
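A minimal sketch of the per-pixel probabilistic step described above: class-conditional PDFs for open water and (seasonally adjusted) land backscatter are combined by Bayes' rule into a flood probability, which can be thresholded at p_F = 0.5 to give a binary map. The Gaussian PDF parameters and the prior are illustrative assumptions, not the fitted ENVISAT ASAR values.

```python
import numpy as np
from scipy.stats import norm

def flood_probability(sigma0_db, water_mean=-18.0, water_sd=2.0,
                      land_mean=-8.0, land_sd=2.5, prior_flood=0.1):
    """Posterior probability that a pixel with backscatter sigma0_db (dB) is open water."""
    p_water = norm.pdf(sigma0_db, water_mean, water_sd) * prior_flood
    p_land = norm.pdf(sigma0_db, land_mean, land_sd) * (1 - prior_flood)
    return p_water / (p_water + p_land)

pixels = np.array([-20.0, -15.0, -10.0, -6.0])   # backscatter values in dB (illustrative)
p_f = flood_probability(pixels)
print(np.round(p_f, 3), p_f > 0.5)               # probability map and the thresholded binary map
```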
Using known map category marginal frequencies to improve estimates of thematic map accuracy
NASA Technical Reports Server (NTRS)
Card, D. H.
1982-01-01
By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
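A sketch of a Card-type estimator consistent with the idea above: known map-category marginal proportions are combined with a confusion matrix sampled within each map category to estimate cell probabilities, overall accuracy, and per-class accuracies. The counts and proportions are illustrative, and the exact estimators in the paper may be organized differently.

```python
import numpy as np

# rows = true class, columns = map class; sample drawn within each map category
n = np.array([[40.0,  5.0,  2.0],
              [ 8.0, 35.0,  4.0],
              [ 2.0, 10.0, 44.0]])
pi = np.array([0.5, 0.3, 0.2])            # known map-category marginal proportions

n_col = n.sum(axis=0)                     # sample size within each map category
p_hat = n * (pi / n_col)                  # estimated joint cell probabilities p_ij
overall_accuracy = np.trace(p_hat)
producers = np.diag(p_hat) / p_hat.sum(axis=1)   # per true class
users = np.diag(n) / n_col                        # per map class
print(round(overall_accuracy, 3), np.round(producers, 3), np.round(users, 3))
```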
Challenges in making a seismic hazard map for Alaska and the Aleutians
Wesson, R.L.; Boyd, O.S.; Mueller, C.S.; Frankel, A.D.; Freymueller, J.T.
2008-01-01
We present a summary of the data and analyses leading to the revision of the time-independent probabilistic seismic hazard maps of Alaska and the Aleutians. These maps represent a revision of existing maps based on newly obtained data, and reflect best current judgments about methodology and approach. They have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States, and will be proposed for adoption in future revisions to the International Building Code. We present example maps for peak ground acceleration, 0.2 s spectral amplitude (SA), and 1.0 s SA at a probability level of 2% in 50 years (annual probability of 0.000404). In this summary, we emphasize issues encountered in preparation of the maps that motivate or require future investigation and research.
ERIC Educational Resources Information Center
Satake, Eiki; Amato, Philip P.
2008-01-01
This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…
Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.
Rottman, Benjamin Margolin
2017-02-01
Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
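A short sketch of the normative calculation the physicians' judgments were compared against: posttest odds equal pretest odds times the likelihood ratio, with the likelihood ratio built from the believed sensitivity and false positive rate. The numbers are illustrative beliefs, not data from the study.

```python
def posttest_probability(prior, sensitivity, false_positive_rate):
    """Normative posttest probability after a positive test result."""
    lr_positive = sensitivity / false_positive_rate
    odds = prior / (1 - prior) * lr_positive
    return odds / (1 + odds)

# illustrative beliefs: 5% prior, 90% sensitivity, 10% false positive rate
print(round(posttest_probability(0.05, 0.90, 0.10), 3))   # about 0.32
```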
An improved non-blind image deblurring method based on FoEs
NASA Astrophysics Data System (ADS)
Zhu, Qidan; Sun, Lei
2013-03-01
Traditional non-blind image deblurring algorithms typically use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can effectively reduce ripples, in contrast to maximum likelihood (ML). However, they have been found lacking in terms of restoration performance. To address this issue, we replace the traditional MAP formulation with MAP plus a KL penalty. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty can restrain the over-smoothing caused by MAP. We use three groups of images and Harris corner detection to evaluate our method. The experimental results show that our non-blind image restoration algorithm effectively reduces the ringing effect and exhibits state-of-the-art deblurring results.
Utility-based designs for randomized comparative trials with categorical outcomes
Murray, Thomas A.; Thall, Peter F.; Yuan, Ying
2016-01-01
A general utility-based testing methodology for design and conduct of randomized comparative clinical trials with categorical outcomes is presented. Numerical utilities of all elementary events are elicited to quantify their desirabilities. These numerical values are used to map the categorical outcome probability vector of each treatment to a mean utility, which is used as a one-dimensional criterion for constructing comparative tests. Bayesian tests are presented, including fixed sample and group sequential procedures, assuming Dirichlet-multinomial models for the priors and likelihoods. Guidelines are provided for establishing priors, eliciting utilities, and specifying hypotheses. Efficient posterior computation is discussed, and algorithms are provided for jointly calibrating test cutoffs and sample size to control overall type I error and achieve specified power. Asymptotic approximations for the power curve are used to initialize the algorithms. The methodology is applied to re-design a completed trial that compared two chemotherapy regimens for chronic lymphocytic leukemia, in which an ordinal efficacy outcome was dichotomized and toxicity was ignored to construct the trial’s design. The Bayesian tests also are illustrated by several types of categorical outcomes arising in common clinical settings. Freely available computer software for implementation is provided. PMID:27189672
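A minimal sketch of the core construction described above: elicited utilities map each arm's Dirichlet posterior over categorical outcomes to a posterior distribution of mean utility, and the arms are compared on that one-dimensional scale. The utilities, prior, and counts are illustrative, and no test-cutoff calibration is shown.

```python
import numpy as np

utilities = np.array([0.0, 40.0, 70.0, 100.0])     # elicited utility for each elementary outcome
alpha_prior = np.array([0.25, 0.25, 0.25, 0.25])    # Dirichlet prior for each arm (illustrative)

counts_A = np.array([10, 15, 20, 5])                # observed outcome counts, arm A (illustrative)
counts_B = np.array([6, 12, 22, 10])                # observed outcome counts, arm B (illustrative)

rng = np.random.default_rng(3)
pA = rng.dirichlet(alpha_prior + counts_A, size=100_000)   # posterior draws of outcome probabilities
pB = rng.dirichlet(alpha_prior + counts_B, size=100_000)
prob_B_better = np.mean(pB @ utilities > pA @ utilities)   # posterior P(mean utility of B > mean utility of A)
print(round(prob_B_better, 3))
```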
Use of prior odds for missing persons identifications.
Budowle, Bruce; Ge, Jianye; Chakraborty, Ranajit; Gill-King, Harrell
2011-06-27
Identification of missing persons from mass disasters is based on evaluation of a number of variables and observations regarding the combination of features derived from these variables. DNA typing now is playing a more prominent role in the identification of human remains, and particularly so for highly decomposed and fragmented remains. The strength of genetic associations, by either direct or kinship analyses, is often quantified by calculating a likelihood ratio. The likelihood ratio can be multiplied by prior odds based on nongenetic evidence to calculate the posterior odds, that is, by applying Bayes' Theorem, to arrive at a probability of identity. For the identification of human remains, the path creating the set and intersection of variables that contribute to the prior odds needs to be appreciated and well defined. Other than considering the total number of missing persons, the forensic DNA community has been silent on specifying the elements of prior odds computations. The variables include the number of missing individuals, eyewitness accounts, anthropological features, demographics and other identifying characteristics. The assumptions, supporting data and reasoning that are used to establish a prior probability that will be combined with the genetic data need to be considered and justified. Otherwise, data may be unintentionally or intentionally manipulated to achieve a probability of identity that cannot be supported and can thus misrepresent the uncertainty with associations. The forensic DNA community needs to develop guidelines for objectively computing prior odds.
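A short sketch of the posterior-odds computation described above: the genetic likelihood ratio is multiplied by prior odds built from the nongenetic evidence, here simply derived from the number of missing persons when all are a priori equally likely. The numbers are illustrative.

```python
def posterior_probability_of_identity(likelihood_ratio, n_missing):
    """Combine a genetic LR with prior odds of roughly 1/(N-1) for N equally likely missing persons."""
    prior_odds = 1.0 / (n_missing - 1)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

print(round(posterior_probability_of_identity(likelihood_ratio=1e6, n_missing=1000), 6))
```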
A method for producing digital probabilistic seismic landslide hazard maps
Jibson, R.W.; Harp, E.L.; Michael, J.A.
2000-01-01
The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include: (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24 000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10 m grid spacing using ARC/INFO GIS software on a UNIX computer. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure. © 2000 Elsevier Science B.V. All rights reserved.
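A sketch of the final mapping step described above: a fitted curve (a Weibull-type form is commonly used in this line of work) relating modeled Newmark displacement to the probability of slope failure. The coefficients below are purely illustrative placeholders, not the values calibrated from the Northridge inventory.

```python
import numpy as np

def failure_probability(displacement_cm, m=0.3, a=0.05, b=1.5):
    """Hypothetical Weibull-type curve: probability of failure as a function of Newmark displacement."""
    return m * (1.0 - np.exp(-a * displacement_cm ** b))

for d in (1.0, 5.0, 15.0, 30.0):
    print(d, round(float(failure_probability(d)), 3))
```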
Jibson, Randall W.; Harp, Edwin L.; Michael, John A.
1998-01-01
The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24,000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10-m grid spacing in the ARC/INFO GIS platform. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure.
Strong profiling is not mathematically optimal for discovering rare malfeasors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Press, William H
2008-01-01
In a large population of individuals labeled j = 1,2,...,N, governments attempt to find the rare malfeasor j = j* (a terrorist, for example) by making use of priors p_j that estimate the probability of individual j being a malfeasor. Societal resources for secondary random screening, such as airport search or police investigation, are concentrated against individuals with the largest priors. We call this 'strong profiling' if the concentration is at least proportional to p_j for the largest values. Strong profiling often results in higher-probability, but otherwise innocent, individuals being repeatedly subjected to screening. We show here that, entirely apart from considerations of social policy, strong profiling is not mathematically optimal at finding malfeasors. Even if prior probabilities were accurate, their optimal use would be roughly the geometric mean between strong profiling and completely uniform sampling of the population.
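A small numerical check of the result summarized above, under an assumed heavy-tailed prior: with repeated independent screenings, the expected number of screenings needed to find the malfeasor is sum_j p_j/q_j when individuals are screened with probabilities q_j, so screening in proportion to the prior performs no better than uniform sampling, while the square-root (geometric-mean) rule improves on both. The prior below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed prior probabilities p_j over N individuals (not from the paper's analysis).
N = 10_000
p = rng.pareto(1.5, N) + 1.0
p /= p.sum()

def expected_screenings(q, p=p):
    """Expected number of independent screenings, averaged over the prior, when the
    screened individual each round is drawn with probabilities q (geometric waiting
    time 1/q_j to screen individual j)."""
    return np.sum(p / q)

strong  = p / p.sum()                      # screening effort proportional to the prior
uniform = np.full(N, 1.0 / N)              # completely uniform sampling
sqrt_q  = np.sqrt(p) / np.sqrt(p).sum()    # geometric-mean (square-root) compromise

for name, q in [("strong profiling", strong), ("uniform", uniform), ("sqrt-prior", sqrt_q)]:
    print(f"{name:>16}: expected screenings = {expected_screenings(q):,.0f}")
```

In this metric, strong profiling and uniform sampling both cost exactly N expected screenings, and only the square-root rule does better, which is the sense in which strong profiling is not optimal.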
[Micro vs. macro: structural-functional organization of avian micro- and macrochromosomes].
Rodionov, A V
1996-05-01
Karyotypes of lower vertebrates mainly consist of microchromosomes. In higher vertebrates, microchromosomes are present in the most primitive orders of each class. Birds have more microchromosomes in their karyotype than other vertebrates. Accumulation of microchromosomes in the avian karyotype probably occurred after the separation of birds from reptiles in the Triassic, but prior to the radiation of the ancestors of the modern orders (late Cretaceous-early Jurassic). In this review, the structural, molecular, and functional organization of avian macro- and microchromosomes and their participation in genetic processes are discussed. The average size of an avian microchromosome is about 12.4 Mb, roughly one-tenth the size of an average macrochromosome. In contrast to macrochromosomes, medium and small avian chromosomes lack the highest level of chromosomal organization: their chromonemes do not have spiral coiling. Microchromosomal euchromatin largely consists of GC-rich R regions. More than half of the mapped avian genes are located on microchromosomes. The crossing-over frequency in microchromosomes is approximately threefold higher than in macrochromosomes. This may be caused by high GC content and recombination hot spots, which are present on each microchromosome. The high recombination frequency in microchromosomes increases the probability of their correct meiotic segregation.
Continental-scale, seasonal movements of a heterothermic migratory tree bat
Cryan, Paul M.; Stricker, Craig A.; Wunder, Michael B.
2014-01-01
Long-distance migration evolved independently in bats and unique migration behaviors are likely, but because of their cryptic lifestyles, many details remain unknown. North American hoary bats (Lasiurus cinereus cinereus) roost in trees year-round and probably migrate farther than any other bats, yet we still lack basic information about their migration patterns and wintering locations or strategies. This information is needed to better understand unprecedented fatality of hoary bats at wind turbines during autumn migration and to determine whether the species could be susceptible to an emerging disease affecting hibernating bats. Our aim was to infer probable seasonal movements of individual hoary bats to better understand their migration and seasonal distribution in North America. We analyzed the stable isotope values of non-exchangeable hydrogen in the keratin of bat hair and combined isotopic results with prior distributional information to derive relative probability density surfaces for the geographic origins of individuals. We then mapped probable directions and distances of seasonal movement. Results indicate that hoary bats summer across broad areas. In addition to assumed latitudinal migration, we uncovered evidence of longitudinal movement by hoary bats from inland summering grounds to coastal regions during autumn and winter. Coastal regions with nonfreezing temperatures may be important wintering areas for hoary bats. Hoary bats migrating through any particular area, such as a wind turbine facility in autumn, are likely to have originated from a broad expanse of summering grounds from which they have traveled in no recognizable order. Better characterizing migration patterns and wintering behaviors of hoary bats sheds light on the evolution of migration and provides context for conserving these migrants.
NASA Technical Reports Server (NTRS)
Weinman, James A.; Garan, Louis
1987-01-01
A more advanced cloud pattern analysis algorithm was subsequently developed to take the shape and brightness of the various clouds into account in a manner that is more consistent with the human analyst's perception of GOES cloud imagery. The results of that classification scheme were compared with precipitation probabilities observed from ships of opportunity off the U.S. east coast to derive empirical regressions between cloud types and precipitation probability. The cloud morphology was then quantitatively and objectively used to map precipitation probabilities during two winter months during which severe cold air outbreaks were observed over the northwest Atlantic. Precipitation probabilities associated with various cloud types are summarized. Maps of precipitation probability derived from the cloud morphology analysis program for the two months are compared with the precipitation probability derived from thirty years of ship observations.
Oil spill contamination probability in the southeastern Levantine basin.
Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam
2015-02-15
Recent gas discoveries in the eastern Mediterranean Sea led to multiple operations with substantial economic interest, and with them there is a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
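A toy contrast between word-level ML decoding and bit-level MAP (soft-output) decoding, using a tiny single-parity-check code over a binary symmetric channel; the code, crossover probability, and received word are illustrative assumptions, not taken from the chapter.

```python
import numpy as np
from itertools import product

# Toy (3,2) single-parity-check code over a BSC: codewords with even parity.
codewords = np.array([c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0])
p = 0.2                       # assumed BSC crossover probability
r = np.array([1, 0, 0])       # assumed received word

# Likelihood of each codeword: p^d (1-p)^(n-d), where d is the Hamming distance to r.
d = (codewords != r).sum(axis=1)
lik = p**d * (1 - p)**(len(r) - d)
post = lik / lik.sum()        # posterior over codewords (equiprobable signaling)

# Word-level ML decision: the single most likely codeword (hard decision).
print("ML codeword:", codewords[np.argmax(post)])

# Bit-level MAP: marginalize the codeword posterior to get P(bit_i = 1 | r),
# the soft output that downstream (e.g., iterative) decoders can reuse.
bit_post = (post[:, None] * codewords).sum(axis=0)
print("P(bit=1 | r):", np.round(bit_post, 3))
print("bit-MAP decisions:", (bit_post > 0.5).astype(int))
```

With these numbers the bit-wise MAP decisions reproduce the received word, which is not a codeword, illustrating that minimizing the bit error probability is not the same as choosing the most likely codeword.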
NASA Astrophysics Data System (ADS)
Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio
2018-04-01
We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space, from a noisy observation vector y of its image through a known, possibly non-linear, map $\mathcal{G}$. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well-known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which as we show coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. At each stage, the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate large and small sample behavior of the sequential adaptive procedure.
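A stripped-down sketch of the sequential model-discrimination loop described above: posterior model probabilities are updated from prior probabilities and each new observation's likelihood, and sampling stops when one model's posterior exceeds a threshold. The two candidate models, the known error variance, and the 0.95 termination rule are assumptions, and parameter uncertainty (the multivariate normal priors on the model parameters) is ignored here for brevity.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Two hypothetical regression models for a scalar response at design point x;
# the error variance is assumed known, as in the setup described above.
sigma = 1.0
models = {"M1": lambda x: 1.0 + 2.0 * x, "M2": lambda x: 0.5 + 2.5 * x}
post = {"M1": 0.5, "M2": 0.5}          # prior probabilities of the models being correct

truth = models["M2"]                    # data-generating model (for the simulation only)
threshold = 0.95                        # assumed termination rule

for step in range(1, 51):
    x = rng.uniform(0, 2)
    y = truth(x) + rng.normal(0, sigma)
    # Bayes update of model probabilities with the new observation's likelihood.
    lik = {m: norm.pdf(y, loc=f(x), scale=sigma) for m, f in models.items()}
    z = sum(post[m] * lik[m] for m in models)
    post = {m: post[m] * lik[m] / z for m in models}
    if max(post.values()) > threshold:
        break

print(f"stopped after {step} observations; posterior = "
      + ", ".join(f"{m}: {p:.3f}" for m, p in post.items()))
```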
NASA Astrophysics Data System (ADS)
Kim, Hannah; Hong, Helen
2014-03-01
We propose an automatic method for nipple detection on 3D automated breast ultrasound (3D ABUS) images using coronal slab-average-projection and a cumulative probability map. First, to identify coronal images in which the nipple-areola region is clearly distinguished from the skin, the skewness of each coronal image is measured and the negatively skewed images are selected. Then, a coronal slab-average-projection image is reformatted from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering the nipple-areola region is detected using the Hough ellipse transform in the coronal slab-average-projection image. Finally, to separate the nipple from the areola region, 3D Otsu's thresholding is applied to the elliptical ROI and a cumulative probability map in the elliptical ROI is generated by assigning high probability to low intensity regions. Falsely detected small components are eliminated using morphological opening and the center point of the detected nipple region is calculated. Experimental results show that our method provides a 94.4% nipple detection rate.
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of elevated volatile organic compound (VOC) concentrations in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of elevated nitrate concentrations in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
Code of Federal Regulations, 2014 CFR
2014-07-01
... effort to assess the probability of future behavior which could have an effect adverse to the national... prior experience with similar cases, reasonably suggest a degree of probability of prejudicial behavior...
Code of Federal Regulations, 2013 CFR
2013-07-01
... effort to assess the probability of future behavior which could have an effect adverse to the national... prior experience with similar cases, reasonably suggest a degree of probability of prejudicial behavior...
Mapping brain development during childhood, adolescence and young adulthood
NASA Astrophysics Data System (ADS)
Guo, Xiaojuan; Jin, Zhen; Chen, Kewei; Peng, Danling; Li, Yao
2009-02-01
Using optimized voxel-based morphometry (VBM), this study systematically investigated the differences and similarities of brain structural changes during the three early developmental periods of human lives: childhood, adolescence and young adulthood. These brain changes were discussed in relation to the corresponding development of cognitive function during these three periods. Magnetic Resonance Imaging (MRI) data from 158 healthy Chinese children, adolescents and young adults, aged 7.26 to 22.80 years old, were included in this study. Using the customized brain template together with the gray matter/white matter/cerebrospinal fluid prior probability maps, we found that there were more age-related positive changes in the frontal lobe and fewer in the hippocampus and amygdala during childhood, but more in the bilateral hippocampus, amygdala and left fusiform gyrus during adolescence and young adulthood. There were more age-related negative changes near the central sulcus during childhood, but these changes extended to the frontal and parietal lobes, mainly in the parietal lobe, during adolescence and young adulthood, and more in the prefrontal lobe during young adulthood. Thus, gray matter volume in the parietal lobe decreased significantly from childhood and continued to decrease until young adulthood. These findings may aid in understanding age-related differences in cognitive function.
NASA Astrophysics Data System (ADS)
Mondini, Alessandro C.; Chang, Kang-Tsung; Chiang, Shou-Hao; Schlögel, Romy; Notarnicola, Claudia; Saito, Hitoshi
2017-12-01
We propose a framework to systematically generate event landslide inventory maps from satellite images in southern Taiwan, where landslides are frequent and abundant. The spectral information is used to assess the pixel land cover class membership probability through a Maximum Likelihood classifier trained with randomly generated synthetic land cover spectral fingerprints, which are obtained from an independent training images dataset. Pixels are classified as landslides when the calculated landslide class membership probability, weighted by a susceptibility model, is higher than the membership probabilities of the other classes. We generated synthetic fingerprints from two FORMOSAT-2 images acquired in 2009 and tested the procedure on two other images, one from 2005 and the other from 2009. We also obtained two landslide maps through manual interpretation. The agreement between the two sets of inventories, measured by Cohen's kappa coefficient, is 0.62 and 0.64, respectively. This procedure can now classify a new FORMOSAT-2 image automatically, facilitating the production of landslide inventory maps.
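The decision rule described above can be sketched as follows: compute Gaussian (Maximum Likelihood) class membership probabilities from per-class spectral fingerprints, weight the landslide class by the susceptibility value of the cell, and take the most probable label. The fingerprints, pixel spectrum, and susceptibility values below are synthetic stand-ins, not FORMOSAT-2 statistics.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Synthetic spectral "fingerprints": per-class mean and covariance of pixel spectra
# (assumed values standing in for the training step described above).
classes = {
    "landslide":  (np.array([0.35, 0.30, 0.25]), np.diag([0.004, 0.004, 0.004])),
    "bare_soil":  (np.array([0.31, 0.28, 0.23]), np.diag([0.004, 0.004, 0.004])),
    "vegetation": (np.array([0.10, 0.25, 0.05]), np.diag([0.003, 0.005, 0.002])),
}

def classify(pixel, susceptibility):
    """Return the label maximizing the (susceptibility-weighted) membership probability."""
    scores = {}
    for name, (mu, cov) in classes.items():
        p = multivariate_normal.pdf(pixel, mean=mu, cov=cov)
        # Weight only the landslide class by the susceptibility model, as in the text.
        scores[name] = p * (susceptibility if name == "landslide" else 1.0)
    return max(scores, key=scores.get)

# An ambiguous pixel between landslide and bare soil: the susceptibility prior tips
# the decision one way or the other.
pixel = np.array([0.34, 0.295, 0.245])
print(classify(pixel, susceptibility=0.9))   # high-susceptibility cell -> landslide
print(classify(pixel, susceptibility=0.05))  # low-susceptibility cell  -> bare_soil
```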
Wada, Tetsuo
Despite many empirical studies having been carried out on examiner patent citations, few have scrutinized the obstacles to prior art searching when adding patent citations during patent prosecution at patent offices. This analysis takes advantage of the longitudinal gap between an International Search Report (ISR) as required by the Patent Cooperation Treaty (PCT) and subsequent national examination procedures. We investigate whether several kinds of distance actually affect the probability that prior art is detected at the time of an ISR; this occurs much earlier than in national phase examinations. Based on triadic PCT applications between 2002 and 2005 for the trilateral patent offices (the European Patent Office, the US Patent and Trademark Office, and the Japan Patent Office) and their family-level citations made by the trilateral offices, we find evidence that geographical distance negatively affects the probability of capture of prior patents in an ISR. In addition, the technological complexity of an application negatively affects the probability of capture, whereas the volume of forward citations of prior art affects it positively. These results demonstrate the presence of obstacles to searching at patent offices, and suggest ways to design work sharing by patent offices, such that the duplication of search costs arises only when patent office search horizons overlap.
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM]
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
Is the recall of verbal-spatial information from working memory affected by symptoms of ADHD?
Caterino, Linda C; Verdi, Michael P
2012-10-01
OBJECTIVE: The Kulhavy model for text learning using organized spatial displays proposes that learning will be increased when participants view visual images prior to related text. In contrast to previous studies, this study also included students who exhibited symptoms of ADHD. Participants were presented with either a map-text or text-map condition. The map-text condition led to significantly higher performance than the text-map condition overall. However, students who endorsed more symptoms of inattention and hyperactivity-impulsivity scored more poorly when asked to recall text facts, text features, and map features and were less able to correctly place map features on a reconstructed map than were students who endorsed fewer symptoms. The results of the study support the Kulhavy model for typical students; however, the benefit of viewing a display prior to text was not seen for students with ADHD symptoms, thus supporting previous studies that have demonstrated that ADHD appears to negatively affect operations that occur in working memory.
Updating: Learning versus Supposing
ERIC Educational Resources Information Center
Zhao, Jiaying; Crupi, Vincenzo; Tentori, Katya; Fitelson, Branden; Osherson, Daniel
2012-01-01
Bayesian orthodoxy posits a tight relationship between conditional probability and updating. Namely, the probability of an event "A" after learning "B" should equal the conditional probability of "A" given "B" prior to learning "B". We examine whether ordinary judgment conforms to the orthodox view. In three experiments we found substantial…
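For reference, the orthodox updating (conditionalization) rule discussed above, written out in standard notation (this is the textbook rule, not a result specific to the study's experiments):

```latex
\[
  P_{\mathrm{new}}(A) \;=\; P_{\mathrm{old}}(A \mid B)
  \;=\; \frac{P_{\mathrm{old}}(A \cap B)}{P_{\mathrm{old}}(B)},
  \qquad P_{\mathrm{old}}(B) > 0 .
\]
```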
Spatial vent opening probability map of El Hierro Island (Canary Islands, Spain)
NASA Astrophysics Data System (ADS)
Becerril, Laura; Cappello, Annalisa; Galindo, Inés; Neri, Marco; Del Negro, Ciro
2013-04-01
The assessment of the probable spatial distribution of new eruptions is useful for managing and reducing volcanic risk. It can be achieved in different ways, but it becomes especially hard when dealing with volcanic areas that are less studied, poorly monitored and characterized by infrequent activity, such as El Hierro. Even though it is the youngest of the Canary Islands, before the 2011 eruption in the "Las Calmas Sea", El Hierro had been the least studied volcanic island of the Canaries, with attention historically devoted more to La Palma, Tenerife and Lanzarote. We propose a probabilistic method to build the susceptibility map of El Hierro, i.e. the spatial distribution of vent opening for future eruptions, based on the mathematical analysis of the volcano-structural data collected mostly on the island and, secondly, on the submerged part of the volcano, up to a distance of ~10-20 km from the coast. The volcano-structural data were collected through new fieldwork measurements, bathymetric information, and analysis of geological maps, orthophotos and aerial photographs. They were divided into different datasets and converted into separate and weighted probability density functions, which were then included in a non-homogeneous Poisson process to produce the volcanic susceptibility map. The probability of future eruptive events on El Hierro is mainly concentrated on the rift zones, extending also beyond the shoreline. The highest probabilities of hosting new eruptions are located on the distal parts of the South and West rifts, with the maximum reached in the south-western area of the West rift. High probabilities are also observed in the Northeast and South rifts, and in the submarine parts of the rifts. This map represents the first effort to deal with the volcanic hazard at El Hierro and can be a support tool for decision makers in land planning, emergency plans and civil defence actions.
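A rough sketch, under assumed data, of the susceptibility computation described above: each volcano-structural dataset is converted into a kernel density estimate, the densities are combined as a weighted sum, and the normalized result gives a spatial probability map of vent opening. The locations, weights, and grid below are synthetic, and the kernel density step stands in for the paper's probability density functions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Synthetic volcano-structural datasets (x, y locations in km): vents, eruptive
# fissures, faults. Stand-ins for the field and bathymetric data described above.
datasets = {
    "vents":    rng.normal(loc=(10, 5), scale=2.0, size=(60, 2)).T,
    "fissures": rng.normal(loc=(12, 4), scale=3.0, size=(40, 2)).T,
    "faults":   rng.normal(loc=(8, 7),  scale=4.0, size=(80, 2)).T,
}
weights = {"vents": 0.5, "fissures": 0.3, "faults": 0.2}   # assumed expert weights

# Evaluate each dataset's kernel density on a grid and combine as a weighted sum;
# normalizing the result gives a spatial probability map of vent opening.
xg, yg = np.meshgrid(np.linspace(0, 20, 100), np.linspace(0, 12, 60))
grid = np.vstack([xg.ravel(), yg.ravel()])

density = sum(weights[k] * gaussian_kde(d)(grid) for k, d in datasets.items())
prob_map = (density / density.sum()).reshape(xg.shape)

print("cell of highest vent-opening probability (x, y):",
      xg.ravel()[prob_map.argmax()], yg.ravel()[prob_map.argmax()])
```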
Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations
ERIC Educational Resources Information Center
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon
2018-01-01
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Smith, Erik A.; Sanocki, Chris A.; Lorenz, David L.; Jacobsen, Katrin E.
2017-12-27
Streamflow distribution maps for the Cannon River and St. Louis River drainage basins were developed by the U.S. Geological Survey, in cooperation with the Legislative-Citizen Commission on Minnesota Resources, to illustrate relative and cumulative streamflow distributions. The Cannon River was selected to provide baseline data to assess the effects of potential surficial sand mining, and the St. Louis River was selected to determine the effects of ongoing Mesabi Iron Range mining. Each drainage basin (Cannon, St. Louis) was subdivided into nested drainage basins: the Cannon River was subdivided into 152 nested drainage basins, and the St. Louis River was subdivided into 353 nested drainage basins. For each smaller drainage basin, the estimated volumes of groundwater discharge (as base flow) and surface runoff flowing into all surface-water features were displayed under the following conditions: (1) extreme low-flow conditions, comparable to an exceedance-probability quantile of 0.95; (2) low-flow conditions, comparable to an exceedance-probability quantile of 0.90; (3) a median condition, comparable to an exceedance-probability quantile of 0.50; and (4) a high-flow condition, comparable to an exceedance-probability quantile of 0.02. Streamflow distribution maps were developed using flow-duration curve exceedance-probability quantiles in conjunction with Soil-Water-Balance model outputs; both the flow-duration curve and Soil-Water-Balance models were built upon previously published U.S. Geological Survey reports. The selected streamflow distribution maps provide a proactive water management tool for State cooperators by illustrating flow rates during a range of hydraulic conditions. Furthermore, after the nested drainage basins are highlighted in terms of surface-water flows, the streamflows can be evaluated in the context of meeting specific ecological flows under different flow regimes and potentially assist with decisions regarding groundwater and surface-water appropriations. Presented streamflow distribution maps are foundational work intended to support the development of additional streamflow distribution maps that include statistical constraints on the selected flow conditions.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
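A minimal sketch of Bayesian belief revision with distorted subjective probabilities, assuming a one-parameter Prelec weighting function as the inverted-S form; the parameter value and the urn-ball numbers are illustrative, not the fitted hierarchical model from the study.

```python
import numpy as np

def prelec(p, gamma=0.6):
    """Inverted-S probability weighting: overweights small and underweights large probabilities."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.exp(-(-np.log(p)) ** gamma)

# Urn-ball setup: prior probability of urn A and the likelihood of the observed ball
# colour under each urn (assumed numbers in the spirit of the factorial design above).
prior_A, lik_A, lik_B = 0.7, 0.3, 0.6

def posterior_A(prior_A, lik_A, lik_B, weight=lambda p: p):
    """Bayes' rule applied to (possibly distorted) priors and likelihoods."""
    pA, pB = weight(prior_A), weight(1 - prior_A)
    lA, lB = weight(lik_A), weight(lik_B)
    return pA * lA / (pA * lA + pB * lB)

print("normative posterior:", round(posterior_A(prior_A, lik_A, lik_B), 3))
print("distorted posterior:", round(posterior_A(prior_A, lik_A, lik_B, weight=prelec), 3))
```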
Stable Estimation of a Covariance Matrix Guided by Nuclear Norm Penalties
Chi, Eric C.; Lange, Kenneth
2014-01-01
Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications – discriminant analysis and EM clustering – in this sampling regime. PMID:25143662
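The idea of shrinking the sample covariance toward a stable target can be sketched with a generic linear shrinkage of the sample eigenvalues toward their mean (a scaled-identity target). This illustrates the well-conditioning effect only; it is not the authors' nuclear-norm-penalized MAP estimator, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# p covariates, n samples with n not much larger than p: the regime where the sample
# covariance tends to be ill-conditioned. Synthetic data.
n, p = 60, 40
X = rng.standard_normal((n, p)) @ np.diag(np.linspace(0.5, 3.0, p))
S = np.cov(X, rowvar=False)

def shrink_covariance(S, alpha=0.3):
    """Pull the sample eigenvalues toward their mean (a scaled-identity target)."""
    vals, vecs = np.linalg.eigh(S)
    shrunk = (1 - alpha) * vals + alpha * vals.mean()
    return vecs @ np.diag(shrunk) @ vecs.T

Sig = shrink_covariance(S, alpha=0.3)
print("condition number (sample):", round(np.linalg.cond(S), 1))
print("condition number (shrunk):", round(np.linalg.cond(Sig), 1))
# As alpha -> 0 the estimate returns to the sample covariance, mirroring the graceful
# transition described above as the number of samples grows relative to the covariates.
```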
Investigation of an Optimum Detection Scheme for a Star-Field Mapping System
NASA Technical Reports Server (NTRS)
Aldridge, M. D.; Credeur, L.
1970-01-01
An investigation was made to determine the optimum detection scheme for a star-field mapping system that uses coded detection resulting from starlight shining through specially arranged multiple slits of a reticle. The computer solution of equations derived from a theoretical model showed that the greatest probability of detection for a given star and background intensity occurred with the use of a single transparent slit. However, use of multiple slits improved the system's ability to reject the detection of undesirable lower intensity stars, but only by decreasing the probability of detection for lower intensity stars to be mapped. Also, it was found that the coding arrangement affected the root-mean-square star-position error and that detection is possible with error in the system's detected spin rate, though at a reduced probability.
Higher-dimensional attractors with absolutely continuous invariant probability
NASA Astrophysics Data System (ADS)
Bocker, Carlos; Bortolotti, Ricardo
2018-05-01
Consider a dynamical system given by a skew product T(x, y) = (E(x), C(y) + f(x)), where E is a linear expanding map, C is a linear contracting map and f is a smooth map. We provide sufficient conditions for E that imply the existence of an open set U of pairs (C, f) for which the corresponding dynamic T admits a unique absolutely continuous invariant probability. A geometrical characteristic of transversality between self-intersections of images is present in the dynamics of the maps in U. In addition, we give a condition between E and C under which it is possible to perturb f to obtain a pair in U.
NASA Astrophysics Data System (ADS)
Arca, B.; Salis, M.; Bacciu, V.; Duce, P.; Pellizzaro, G.; Ventura, A.; Spano, D.
2009-04-01
Although in many countries lightning is the main cause of ignition, in the Mediterranean Basin forest fires are predominantly ignited by arson or by human negligence. The fire season peaks coincide with extreme weather conditions (mainly strong winds, hot temperatures, and low atmospheric water vapour content) and high tourist presence. Many works have reported that in the Mediterranean Basin the projected impacts of climate change will cause greater weather variability and more extreme weather conditions, with drier and hotter summers and heat waves. At the long-term scale, climate change could affect the fuel load and the dead/live fuel ratio, and therefore could change vegetation flammability. At the short-term scale, the increase in extreme weather events could directly affect fuel water status, and it could increase large fire occurrence. In this context, detecting the areas characterized by both high probability of large fire occurrence and high fire severity could represent an important component of fire management planning. In this work we compared several fire probability and severity maps (fire occurrence, rate of spread, fireline intensity, flame length) obtained for a study area located in northern Sardinia, Italy, using the FlamMap simulator (USDA Forest Service, Missoula). FlamMap computes the potential fire behaviour characteristics over a defined landscape for given weather, wind and fuel moisture data. Different weather and fuel moisture scenarios were tested to predict the potential impact of climate change on fire parameters. The study area, characterized by a mosaic of urban areas, protected areas, and other areas subject to anthropogenic disturbances, is mainly composed of fire-prone Mediterranean maquis. The input themes needed to run FlamMap were provided as grids with a resolution of 10 m; the wind data, obtained using a computational fluid-dynamic model, were provided as a gridded file with a resolution of 50 m. The analysis revealed high fire probability and severity in most of the area, and therefore a high potential danger. The FlamMap outputs and the derived fire probability maps can be used in decision support systems for fire spread and behaviour and for fire danger assessment under current and future fire regimes.
Williams, D.A.; Keszthelyi, L.P.; Schenk, P.M.; Milazzo, M.P.; Lopes, R.M.C.; Rathbun, J.A.; Greeley, R.
2005-01-01
We have studied data from the Galileo spacecraft's three remote sensing instruments (Solid-State Imager (SSI), Near-Infrared Mapping Spectrometer (NIMS), and Photopolarimeter-Radiometer (PPR)) covering the Zamama - Thor region of Io's antijovian hemisphere, and produced a geomorphological map of this region. This is the third of three regional maps we are producing from the Galileo spacecraft data. Our goal is to assess the variety of volcanic and tectonic materials and their interrelationships on Io using planetary mapping techniques, supplemented with all available Galileo remote sensing data. Based on the Galileo data analysis and our mapping, we have determined that the most recent geologic activity in the Zamama - Thor region has been dominated by two sites of large-scale volcanic surface changes. The Zamama Eruptive Center is a site of both explosive and effusive eruptions, which emanate from two relatively steep edifices (Zamama Tholi A and B) that appear to be built by both silicate and sulfur volcanism. A ~100-km-long flow field formed sometime after the 1979 Voyager flybys, which appears to be a site of promethean-style compound flows, flow-front SO2 plumes, and adjacent sulfur flows. Larger, possibly stealthy, plumes have on at least one occasion during the Galileo mission tapped a source that probably includes S and/or Cl to produce a red pyroclastic deposit from the same vent from which silicate lavas were erupted. The Thor Eruptive Center, which may have been active prior to Voyager, became active again during the Galileo mission between May and August 2001. A pillanian-style eruption at Thor included the tallest plume observed to date on Io (at least 500 km high) and new dark lava flows. The plume produced a central dark pyroclastic deposit (probably silicate-rich) and an outlying white diffuse ring that is SO2-rich. Mapping shows that several of the new dark lava flows around the plume vent have reoccupied sites of earlier flows. Unlike most of the other pillanian eruptions observed during the Galileo mission, the 2001 Thor eruption did not produce a large red ring deposit, indicating a relative lack of S and/or Cl gases interacting with the magma during that eruption. Between these two eruptive centers are two paterae, Thomagata and Reshef. Thomagata Patera is located on a large shield-like mesa and shows no signs of activity. In contrast, Reshef Patera is located on a large, irregular mesa that is apparently undergoing degradation through erosion (perhaps from SO2-sapping or chemical decomposition of sulfur-rich material) from multiple secondary volcanic centers. © 2005 Elsevier Inc. All rights reserved.
Victor A. Rudis
2000-01-01
Scant information exists about the spatial extent of human impact on forest resource supplies, i.e., depreciative and nonforest uses. I used observations of ground-sampled land use and intrusions on forest land to map the probability of resource use and human impact for broad areas. Data came from a seven-state survey region (Alabama, Arkansas, Louisiana, Mississippi,...
Victor A. Rudis
2000-01-01
Scant information exists about the spatial extent of human impact on forest resource supplies, i.e., depreciative and nonforest uses. I used observations of ground-sampled land use and intrusions on forest land to map the probability of resource use and human impact for broad areas. Data came from a seven State survey region (Alabama, Arkansas, Louisiana, Mississippi,...
Martian North Polar Impacts and Volcanoes: Feature Discrimination and Comparisons to Global Trends
NASA Technical Reports Server (NTRS)
Sakimoto, E. H.; Weren, S. L.
2003-01-01
The recent Mars Global Surveyor and Mars Odyssey missions have greatly improved the available data for the north polar region of Mars. Pre-MGS and MO studies proposed possible volcanic features, and the new data have revealed numerous volcanoes and impact craters in a range of weathering states that were poorly visible or not visible in prior data sets. These new data have helped in the reassessment of the polar deposits. From images or shaded Mars Orbiter Laser Altimeter (MOLA) topography grids alone, it has proved difficult to differentiate cratered cones of probable volcanic origin from impact craters that appear to have been filled. It is important that the distinction is made if possible, as the relative ages of the polar deposits hinge on small numbers of craters, and the local volcanic regime as originally proposed included only small numbers of volcanoes. Therefore, we have expanded prior work on detailed topographic parameter measurements and modeling for the polar volcanic landforms, mapped and measured all of the probable volcanic and impact features for the north polar region as well as other midlatitude fields, and suggest that: 1) The polar volcanic edifices are significantly different topographically from midlatitude edifices, and have steeper slopes and larger craters as a group; 2) The impact craters are distinct from the volcanoes in terms of the fraction of feature volume that is cavity compared with the fraction that is positive relief; 3) There are several distinct types of volcanic edifices present; 4) These types tend to be spatially grouped by edifice. This is a contrast to many of the other small volcanic fields around Mars, where small edifices tend to be of mixed types within a field.
Mixture model based joint-MAP reconstruction of attenuation and activity maps in TOF-PET
NASA Astrophysics Data System (ADS)
Hemmati, H.; Kamali-Asl, A.; Ghafarian, P.; Ay, M. R.
2018-06-01
A challenge in obtaining quantitative positron emission tomography (PET) images is to provide an accurate and patient-specific photon attenuation correction. In PET/MR scanners, the nature of MR signals and hardware limitations have made extraction of the attenuation map a real challenge. Except for a constant factor, the activity and attenuation maps can be determined from emission data on a TOF-PET system by the maximum likelihood reconstruction of attenuation and activity (MLAA) approach. The aim of the present study is to constrain the joint estimation of activity and attenuation for PET systems using a mixture-model prior based on the attenuation-map histogram. This novel prior enforces non-negativity and its hyperparameters can be estimated using a mixture decomposition step from the current estimate of the attenuation map. The proposed method can also help to solve the scaling problem and is capable of assigning the predefined regional attenuation coefficients, with some degree of confidence, to the attenuation map, similarly to segmentation-based attenuation correction approaches. The performance of the algorithm is studied with numerical and Monte Carlo simulations and a phantom experiment, and is compared with the MLAA algorithm with and without a smoothing prior. The results demonstrate that the proposed algorithm is capable of producing cross-talk-free activity and attenuation images from emission data. The proposed approach has the potential to be a practical and competitive method for joint reconstruction of activity and attenuation maps from emission data on PET/MR and can be integrated with other methods.
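The mixture-decomposition step described above can be sketched by fitting a Gaussian mixture to the histogram of a current attenuation-map estimate and reading off its weights, means, and variances as hyperparameters for the prior. The attenuation values, the three-component choice, and the use of scikit-learn below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Synthetic stand-in for a current attenuation-map estimate (values in 1/cm):
# a mix of air-, soft-tissue- and bone-like voxels plus noise.
mu_map = np.concatenate([
    rng.normal(0.000, 0.002, 20_000),   # air
    rng.normal(0.096, 0.004, 60_000),   # soft tissue
    rng.normal(0.150, 0.008, 5_000),    # bone
]).reshape(-1, 1)

# Mixture decomposition of the attenuation histogram: the fitted weights, means and
# variances serve as hyperparameters of the mixture prior used in the next update.
gmm = GaussianMixture(n_components=3, random_state=0).fit(mu_map)
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight {w:.2f}  mean {m:.3f} 1/cm  sd {np.sqrt(v):.4f}")

# Log prior density of a voxel value under the mixture (up to a constant): it penalizes
# attenuation values far from the tissue classes during reconstruction.
voxel = np.array([[0.11]])
print("log prior density at 0.11 1/cm:", gmm.score_samples(voxel)[0])
```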
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of unmixed young groundwater (defined using chlorofluorocarbon-11 concentrations and tritium activities) in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
Estimation of contour motion and deformation for nonrigid object tracking
NASA Astrophysics Data System (ADS)
Shao, Jie; Porikli, Fatih; Chellappa, Rama
2007-08-01
We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.
Complex Geometric Models of Diffusion and Relaxation in Healthy and Damaged White Matter
Farrell, Jonathan A.D.; Smith, Seth A.; Reich, Daniel S.; Calabresi, Peter A.; van Zijl, Peter C.M.
2010-01-01
Which aspects of tissue microstructure affect diffusion weighted MRI signals? Prior models, many of which use Monte-Carlo simulations, have focused on relatively simple models of the cellular microenvironment and have not considered important anatomic details. With the advent of higher-order analysis models for diffusion imaging, such as high-angular-resolution diffusion imaging (HARDI), more realistic models are necessary. This paper presents and evaluates the reproducibility of simulations of diffusion in complex geometries. Our framework is quantitative, does not require specialized hardware, is easily implemented with little programming experience, and is freely available as open-source software. Models may include compartments with different diffusivities, permeabilities, and T2 time constants using both parametric (e.g., spheres and cylinders) and arbitrary (e.g., mesh-based) geometries. Three-dimensional diffusion displacement-probability functions are mapped with high reproducibility, and thus can be readily used to assess reproducibility of diffusion-derived contrasts. PMID:19739233
Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.
Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter
2018-06-01
We propose a system to infer binocular disparity from a monocular video stream in real-time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate, but results in a very similar overall depth impression with plausible overall layout, sharp edges, fine details and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real-time. These are complemented by spatially-varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.
Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.
2003-01-01
These maps present preliminary assessments of the probability of debris-flow activity and estimates of peak discharges that can potentially be generated by debris-flows issuing from basins burned by the Piru, Simi and Verdale Fires of October 2003 in southern California in response to the 25-year, 10-year, and 2-year 1-hour rain storms. The probability maps are based on the application of a logistic multiple regression model that describes the percent chance of debris-flow production from an individual basin as a function of burned extent, soil properties, basin gradients and storm rainfall. The peak discharge maps are based on application of a multiple-regression model that can be used to estimate debris-flow peak discharge at a basin outlet as a function of basin gradient, burn extent, and storm rainfall. Probabilities of debris-flow occurrence for the Piru Fire range between 2 and 94% and estimates of debris flow peak discharges range between 1,200 and 6,640 ft3/s (34 to 188 m3/s). Basins burned by the Simi Fire show probabilities for debris-flow occurrence between 1 and 98%, and peak discharge estimates between 1,130 and 6,180 ft3/s (32 and 175 m3/s). The probabilities for debris-flow activity calculated for the Verdale Fire range from negligible values to 13%. Peak discharges were not estimated for this fire because of these low probabilities. These maps are intended to identify those basins that are most prone to the largest debris-flow events and provide information for the preliminary design of mitigation measures and for the planning of evacuation timing and routes.
Three-dimensional choroidal segmentation in spectral OCT volumes using optic disc prior information
NASA Astrophysics Data System (ADS)
Hu, Zhihong; Girkin, Christopher A.; Hariri, Amirhossein; Sadda, SriniVas R.
2016-03-01
Recently, much attention has been focused on determining the role of the peripapillary choroid - the layer between the outer retinal pigment epithelium (RPE)/Bruch's membrane (BM) and the choroid-sclera (C-S) junction - and on whether that role in the pathogenesis of glaucoma is primary or secondary. However, automated choroidal segmentation in spectral-domain optical coherence tomography (SD-OCT) images of the optic nerve head (ONH) has not been reported, probably because the presence of the BM opening (BMO, corresponding to the optic disc) can deflect the choroidal segmentation from its correct position. The purpose of this study is to develop a 3D graph-based approach to identify the 3D choroidal layer in ONH-centered SD-OCT images using the BMO prior information. More specifically, an initial 3D choroidal segmentation was first performed using the 3D graph search algorithm. Note that varying surface interaction constraints based on the choroidal morphological model were applied. To assist the choroidal segmentation, two other surfaces, the internal limiting membrane and the inner/outer segment junction, were also segmented. Based on the segmented layer between the RPE/BM and the C-S junction, a 2D projection map was created. The BMO in the projection map was detected by a 2D graph search. The pre-defined BMO information was then incorporated into the surface interaction constraints of the 3D graph search to obtain a more accurate choroidal segmentation. Twenty SD-OCT images from 20 healthy subjects were used. The mean differences of the choroidal borders between the algorithm and manual segmentation were at a sub-voxel level, indicating a high level of segmentation accuracy.
Kurrant, Douglas; Fear, Elise; Baran, Anastasia; LoVetri, Joe
2017-12-01
The authors have developed a method to combine a patient-specific map of tissue structure and average dielectric properties with microwave tomography. The patient-specific map is acquired with radar-based techniques and serves as prior information for microwave tomography. The impact that the degree of structural detail included in this prior information has on image quality was reported in a previous investigation. The aim of the present study is to extend this previous work by identifying and quantifying the impact that errors in the prior information have on image quality, including the reconstruction of internal structures and lesions embedded in fibroglandular tissue. This study also extends the work of others reported in literature by emulating a clinical setting with a set of experiments that incorporate heterogeneity into both the breast interior and glandular region, as well as prior information related to both fat and glandular structures. Patient-specific structural information is acquired using radar-based methods that form a regional map of the breast. Errors are introduced to create a discrepancy in the geometry and electrical properties between the regional map and the model used to generate the data. This permits the impact that errors in the prior information have on image quality to be evaluated. Image quality is quantitatively assessed by measuring the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The study is conducted using both 2D and 3D numerical breast models constructed from MRI scans. The reconstruction results demonstrate robustness of the method relative to errors in the dielectric properties of the background regional map, and to misalignment errors. These errors do not significantly influence the reconstruction accuracy of the underlying structures, or the ability of the algorithm to reconstruct malignant tissue. Although misalignment errors do not significantly impact the quality of the reconstructed fat and glandular structures for the 3D scenarios, the dielectric properties are reconstructed less accurately within the glandular structure for these cases relative to the 2D cases. However, general agreement between the 2D and 3D results was found. A key contribution of this paper is the detailed analysis of the impact of prior information errors on the reconstruction accuracy and ability to detect tumors. The results support the utility of acquiring patient-specific information with radar-based techniques and incorporating this information into MWT. The method is robust to errors in the dielectric properties of the background regional map, and to misalignment errors. Completion of this analysis is an important step toward developing the method into a practical diagnostic tool. © 2017 American Association of Physicists in Medicine.
LaMotte, A.E.; Greene, E.A.
2007-01-01
Spatial relations between land use and groundwater quality in the watershed adjacent to Assateague Island National Seashore, Maryland and Virginia, USA were analyzed by the use of two spatial models. One model used a logit analysis and the other was based on geostatistics. The models were developed and compared on the basis of existing concentrations of nitrate as nitrogen in samples from 529 domestic wells. The models were applied to produce spatial probability maps that show areas in the watershed where concentrations of nitrate in groundwater are likely to exceed a predetermined management threshold value. Maps of the watershed generated by logistic regression and probability kriging analysis showing where the probability of nitrate concentrations would exceed 3 mg/L (>0.50) compared favorably. Logistic regression was less dependent on the spatial distribution of sampled wells, and identified an additional high probability area within the watershed that was missed by probability kriging. The spatial probability maps could be used to determine the natural or anthropogenic factors that best explain the occurrence and distribution of elevated concentrations of nitrate (or other constituents) in shallow groundwater. This information can be used by local land-use planners, ecologists, and managers to protect water supplies and identify land-use planning solutions and monitoring programs in vulnerable areas. © 2006 Springer-Verlag.
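A compact sketch of the logistic-regression step described above, under synthetic data: fit the probability that nitrate exceeds 3 mg/L from explanatory variables at sampled wells, then evaluate the fitted model on a grid to delineate areas where the probability exceeds 0.50. The variables, coefficients, and grid are assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic well data: explanatory variables (e.g., depth to water in m, fraction of
# agricultural land within 500 m) and whether nitrate exceeded 3 mg/L. Assumed data.
n_wells = 529
depth_to_water = rng.uniform(1, 30, n_wells)
ag_fraction = rng.uniform(0, 1, n_wells)
logit = 1.5 - 0.12 * depth_to_water + 2.0 * ag_fraction          # toy ground truth
exceeds = rng.random(n_wells) < 1 / (1 + np.exp(-logit))

X = np.column_stack([depth_to_water, ag_fraction])
model = LogisticRegression().fit(X, exceeds)

# Predict exceedance probability on a coarse grid of the two variables; thresholding
# at 0.50 delineates the "high probability" areas shown on such maps.
dg, ag = np.meshgrid(np.linspace(1, 30, 30), np.linspace(0, 1, 20))
grid = np.column_stack([dg.ravel(), ag.ravel()])
prob = model.predict_proba(grid)[:, 1].reshape(dg.shape)
print("fraction of grid cells with P(>3 mg/L) > 0.5:", (prob > 0.5).mean().round(2))
```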
Exoplanet Biosignatures: A Framework for Their Assessment.
Catling, David C; Krissansen-Totton, Joshua; Kiang, Nancy Y; Crisp, David; Robinson, Tyler D; DasSarma, Shiladitya; Rushby, Andrew J; Del Genio, Anthony; Bains, William; Domagal-Goldman, Shawn
2018-04-20
Finding life on exoplanets from telescopic observations is an ultimate goal of exoplanet science. Life produces gases and other substances, such as pigments, which can have distinct spectral or photometric signatures. Whether or not life is found with future data must be expressed with probabilities, requiring a framework of biosignature assessment. We present a framework in which we advocate using biogeochemical "Exo-Earth System" models to simulate potential biosignatures in spectra or photometry. Given actual observations, simulations are used to find the Bayesian likelihoods of those data occurring for scenarios with and without life. The latter includes "false positives" wherein abiotic sources mimic biosignatures. Prior knowledge of factors influencing planetary inhabitation, including previous observations, is combined with the likelihoods to give the Bayesian posterior probability of life existing on a given exoplanet. Four components of observation and analysis are necessary. (1) Characterization of stellar (e.g., age and spectrum) and exoplanetary system properties, including "external" exoplanet parameters (e.g., mass and radius), to determine an exoplanet's suitability for life. (2) Characterization of "internal" exoplanet parameters (e.g., climate) to evaluate habitability. (3) Assessment of potential biosignatures within the environmental context (components 1-2), including corroborating evidence. (4) Exclusion of false positives. We propose that resulting posterior Bayesian probabilities of life's existence map to five confidence levels, ranging from "very likely" (90-100%) to "very unlikely" (<10%) inhabited. Key Words: Bayesian statistics-Biosignatures-Drake equation-Exoplanets-Habitability-Planetary science. Astrobiology 18, xxx-xxx.
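A minimal sketch of the posterior calculation advocated above, with hypothetical likelihoods and prior; the mapping of intermediate probabilities to verbal labels below is illustrative except for the "very likely" (90-100%) and "very unlikely" (<10%) endpoints stated in the abstract.

```python
# Minimal sketch of the Bayesian biosignature assessment described above. The numbers
# are hypothetical placeholders, not results from Exo-Earth System models.

p_data_given_life = 0.6      # likelihood of the observed spectrum if the planet is inhabited
p_data_given_no_life = 0.1   # likelihood under abiotic ("false positive") scenarios
prior_life = 0.3             # prior from stellar/planetary context (components 1-2)

posterior = (p_data_given_life * prior_life) / (
    p_data_given_life * prior_life + p_data_given_no_life * (1 - prior_life)
)

# Endpoints (0.9, 0.1) follow the abstract; the intermediate cutoffs are assumptions.
levels = [(0.9, "very likely inhabited"), (0.67, "likely inhabited"),
          (0.33, "inconclusive"), (0.1, "unlikely inhabited"),
          (0.0, "very unlikely inhabited")]
label = next(name for cut, name in levels if posterior >= cut)
print(f"posterior probability of life: {posterior:.2f} -> {label}")
```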
Xue, Zhe; Chen, Jia-Xu; Zhao, Yue; Medvar, Barbara; Knepper, Mark A
2017-03-01
A major challenge in physiology is to exploit the many large-scale data sets available from "-omic" studies to seek answers to key physiological questions. In previous studies, Bayes' theorem has been used for this purpose. This approach requires a means to map continuously distributed experimental data to probabilities (likelihood values) to derive posterior probabilities from the combination of prior probabilities and new data. Here, we introduce the use of minimum Bayes' factors for this purpose and illustrate the approach by addressing a physiological question, "Which deubiquitylating enzymes (DUBs) encoded by mammalian genomes are most likely to regulate plasma membrane transport processes in renal cortical collecting duct principal cells?" To do this, we have created a comprehensive online database of 110 DUBs present in the mammalian genome (https://hpcwebapps.cit.nih.gov/ESBL/Database/DUBs/). We used Bayes' theorem to integrate available information from large-scale data sets derived from proteomic and transcriptomic studies of renal collecting duct cells to rank the 110 known DUBs with regard to likelihood of interacting with and regulating transport processes. The top-ranked DUBs were OTUB1, USP14, PSMD7, PSMD14, USP7, USP9X, OTUD4, USP10, and UCHL5. Among these USP7, USP9X, OTUD4, and USP10 are known to be involved in endosomal trafficking and have potential roles in endosomal recycling of plasma membrane proteins in the mammalian cortical collecting duct. Copyright © 2017 the American Physiological Society.
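A hedged sketch (Python) of the minimum Bayes' factor bookkeeping described here: continuous measurements are converted to bounds on evidence and chained with a prior probability. The exp(-z^2/2) bound is Goodman's minimum Bayes factor; the z-scores and the 0.1 prior are illustrative assumptions, not values from the DUB analysis.

import math

def min_bayes_factor(z):
    # Goodman's minimum Bayes factor for the null given a z-statistic;
    # it bounds how strongly the data can favour the alternative hypothesis.
    return math.exp(-z * z / 2.0)

def posterior_upper_bound(prior_h1, z_scores):
    # Chain independent data sets: odds(H1 | data) <= odds(H1) / prod(BF_min).
    odds = prior_h1 / (1.0 - prior_h1)
    for z in z_scores:
        odds /= min_bayes_factor(z)
    return odds / (1.0 + odds)

# e.g. a candidate DUB supported by proteomic (z = 2.5) and transcriptomic
# (z = 1.8) evidence, starting from an assumed prior probability of 0.1.
print(posterior_upper_bound(0.1, [2.5, 1.8]))   # upper bound on P(H1 | data), ~0.93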
Attentional and Contextual Priors in Sound Perception.
Wolmetz, Michael; Elhilali, Mounya
2016-01-01
Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.
Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In this study, the allometric equation W = a(D²H)^b was used to analyze tree biomass of Chinese fir. The common methods for estimating the allometric model take the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that its parameters are better represented by probability distributions than by the fixed values of the classical method. To deal with this problem, a Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature. The parameter distributions from the published literature were used as prior distributions in the Bayesian model for estimating Chinese fir biomass. The results showed that the Bayesian method with informative priors outperformed both the non-informative-prior and classical approaches, providing a reasonable method for estimating Chinese fir biomass.
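A minimal sketch (Python/NumPy) of Bayesian estimation of the log-linear form of the allometric model, ln W = ln a + b·ln(D²H), using a coarse grid approximation of the posterior. The synthetic data, the fixed residual standard deviation, and the informative prior on (ln a, b) are illustrative assumptions, not the priors compiled from the 32 published equations.

import numpy as np

rng = np.random.default_rng(0)
d2h = rng.uniform(50, 5000, size=30)                      # D^2*H for 30 sampled trees (synthetic)
lnw = -3.0 + 0.9 * np.log(d2h) + rng.normal(0, 0.2, 30)   # "observed" ln(biomass)

lna_grid = np.linspace(-5.0, 0.0, 201)
b_grid = np.linspace(0.5, 1.3, 201)
LNA, B = np.meshgrid(lna_grid, b_grid, indexing="ij")

# Informative prior (placeholder values): b ~ N(0.85, 0.1^2), ln a ~ N(-3, 1^2).
log_prior = -0.5 * ((B - 0.85) / 0.1) ** 2 - 0.5 * ((LNA + 3.0) / 1.0) ** 2
resid = lnw - (LNA[..., None] + B[..., None] * np.log(d2h))
log_lik = -0.5 * np.sum((resid / 0.2) ** 2, axis=-1)      # residual SD fixed at 0.2

lp = log_prior + log_lik
post = np.exp(lp - lp.max())
post /= post.sum()
print(f"posterior means: ln a = {np.sum(post * LNA):.2f}, b = {np.sum(post * B):.2f}")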
The impossibility of probabilities
NASA Astrophysics Data System (ADS)
Zimmerman, Peter D.
2017-11-01
This paper discusses the problem of assigning probabilities to the likelihood of nuclear terrorism events, in particular examining the limitations of using Bayesian priors for this purpose. It suggests an alternate approach to analyzing the threat of nuclear terrorism.
NASA Astrophysics Data System (ADS)
Xu, Wenbo; Jing, Shaocai; Yu, Wenjuan; Wang, Zhaoxian; Zhang, Guoping; Huang, Jianxi
2013-11-01
In this study, the high-risk debris-flow areas of Sichuan Province, Panzhihua and Liangshan Yi Autonomous Prefecture, were taken as the study areas. Using rainfall and environmental factors as predictors, and under different prior probability combinations for debris-flow occurrence, debris-flow prediction in these areas was compared for two statistical methods: logistic regression (LR) and Bayes discriminant analysis (BDA). The comprehensive analysis shows that (a) with mid-range prior probabilities, the overall predictive accuracy of BDA is higher than that of LR; (b) with equal and extreme prior probabilities, the overall predictive accuracy of LR is higher than that of BDA; and (c) regional debris-flow prediction models using rainfall factors alone perform worse than those that also include environmental factors, while the predictive accuracies for occurrence and non-occurrence of debris flows change in opposite directions as this information is added.
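A hedged scikit-learn sketch (Python) of the kind of comparison described: logistic regression versus a Bayes (linear) discriminant classifier refit under proportional, equal, and extreme prior probabilities of occurrence. The features are synthetic stand-ins for rainfall and environmental predictors, not the Panzhihua/Liangshan data.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                               # rainfall + environmental factors (synthetic)
y = (X @ np.array([1.0, 0.8, -0.5, 0.3]) + rng.normal(0, 1, 1000) > 0.8).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

print("LR accuracy:", LogisticRegression().fit(Xtr, ytr).score(Xte, yte))
for priors in [None, [0.5, 0.5], [0.9, 0.1]]:                # proportional, equal, extreme priors
    bda = LinearDiscriminantAnalysis(priors=priors).fit(Xtr, ytr)
    print(f"BDA accuracy with priors={priors}:", bda.score(Xte, yte))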
Probabilistic, Seismically-Induced Landslide Hazard Mapping of Western Oregon
NASA Astrophysics Data System (ADS)
Olsen, M. J.; Sharifi Mood, M.; Gillins, D. T.; Mahalingam, R.
2015-12-01
Earthquake-induced landslides can generate significant damage within urban communities by damaging structures, obstructing lifeline connection routes and utilities, generating various environmental impacts, and possibly resulting in loss of life. Reliable hazard and risk maps are important to assist agencies in efficiently allocating and managing limited resources to prepare for such events. This research presents a new methodology in order to communicate site-specific landslide hazard assessments in a large-scale, regional map. Implementation of the proposed methodology results in seismic-induced landslide hazard maps that depict the probabilities of exceeding landslide displacement thresholds (e.g. 0.1, 0.3, 1.0 and 10 meters). These maps integrate a variety of data sources including: recent landslide inventories, LIDAR and photogrammetric topographic data, geology map, mapped NEHRP site classifications based on available shear wave velocity data in each geologic unit, and USGS probabilistic seismic hazard curves. Soil strength estimates were obtained by evaluating slopes present along landslide scarps and deposits for major geologic units. Code was then developed to integrate these layers to perform a rigid, sliding block analysis to determine the amount and associated probabilities of displacement based on each bin of peak ground acceleration in the seismic hazard curve at each pixel. The methodology was applied to western Oregon, which contains weak, weathered, and often wet soils at steep slopes. Such conditions have a high landslide hazard even without seismic events. A series of landslide hazard maps highlighting the probabilities of exceeding the aforementioned thresholds were generated for the study area. These output maps were then utilized in a performance based design framework enabling them to be analyzed in conjunction with other hazards for fully probabilistic-based hazard evaluation and risk assessment.
Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.
Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso
2018-07-01
There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not experienced an increase of similar magnitude. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster that will be used as a prior depth map. The selection of the appropriate prior depth map for a given color query image is accomplished by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimation is enhanced through a segmentation-guided filtering that obtains the final depth map estimation. This approach has been tested using two publicly available databases, and compared with several state-of-the-art algorithms in order to demonstrate its efficiency.
PET image reconstruction using multi-parametric anato-functional priors
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Belzunce, Martin A.; Niccolini, Flavia; Politis, Marios; Prieto, Claudia; Turkheimer, Federico; Hammers, Alexander; Reader, Andrew J.
2017-08-01
In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data in order to address the limitations of the conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for reconstruction of low-count PET data is also introduced and demonstrated, comparing to standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors including the Kaipio, non-local Tikhonov prior with Bowsher and Gaussian similarity kernels are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and furthermore, for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET unique lesions, while still reconstructing functional boundaries with corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edge and PET unique features and also an improved bias-variance performance. In agreement with the simulation results, the clinical results also showed that the Gaussian prior with voxel-based feature vectors, the Bowsher and the joint Burg entropy priors were the best performing priors. However, for the FDG dataset with simulated tumours, the TV and proposed priors were capable of preserving the PET-unique tumours. Finally, an important outcome was the demonstration that the MAP reconstruction of a low-count FDG PET dataset using the proposed joint entropy prior can lead to comparable image quality to a conventional ML reconstruction with up to 5 times more counts. In conclusion, multi-parametric anato-functional priors provide a solution to address the pitfalls of the conventional priors and are therefore likely to increase the diagnostic confidence in MR-guided PET image reconstructions.
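To make the notion of a MAP reconstruction with a prior concrete, here is a minimal one-step-late MAP-EM sketch (Python) on a toy 1D emission problem, using a simple quadratic smoothness penalty as a stand-in for the anatomical and joint Burg entropy priors studied in the paper. The system model, penalty, and beta value are illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)
n = 64
truth = np.zeros(n)
truth[20:40] = 10.0
truth[28:32] = 20.0                                    # small "lesion" on a flat background

idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)   # blurring system matrix
A /= A.sum(axis=1, keepdims=True)
y = rng.poisson(A @ truth)                             # noisy "projection" data

beta, lam = 0.3, np.ones(n)                            # penalty weight, initial image
sens = A.sum(axis=0)                                   # sensitivity image
for _ in range(100):
    ratio = y / np.maximum(A @ lam, 1e-12)
    # One-step-late update: EM numerator divided by sensitivity plus the gradient
    # of the quadratic smoothness penalty U = 0.5 * sum_j (lam_j - lam_{j+1})^2.
    grad_u = 2 * lam - np.roll(lam, 1) - np.roll(lam, -1)
    lam = lam * (A.T @ ratio) / np.maximum(sens + beta * grad_u, 1e-12)

print("reconstructed peak ~", round(float(lam.max()), 1), "(true peak = 20.0)")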
Ellefsen, Karl J.
2017-06-27
MapMark4 is a software package that implements the probability calculations in three-part mineral resource assessments. Functions within the software package are written in the R statistical programming language. These functions, their documentation, and a copy of this user’s guide are bundled together in R’s unit of shareable code, which is called a “package.” This user’s guide includes step-by-step instructions showing how the functions are used to carry out the probability calculations. The calculations are demonstrated using test data, which are included in the package.
Cerf, O; Griffiths, M; Aziza, F
2007-01-01
Conflicting laboratory-acquired data have been published about the heat resistance of Mycobacterium avium subsp. paratuberculosis (MAP), the cause of the deadly paratuberculosis (Johne's disease) of ruminants. Results of surveys of the presence of MAP in industrially pasteurized milk from several countries are conflicting also. This paper critically reviews the available data on the heat resistance of MAP and, based on these studies, a quantitative model describing the probability of finding MAP in pasteurized milk under the conditions prevailing in industrialized countries was derived using Monte Carlo simulation. The simulation assesses the probability of detecting MAP in 50-mL samples of pasteurized milk as lower than 1%. Hypotheses are presented to explain why higher frequencies were found by some authors; these included improper pasteurization and cross-contamination in the analytical laboratory. Hypotheses implicating a high rate of inter- and intraherd prevalence of paratuberculosis or heavy contamination of raw milk by feces were rejected.
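An illustrative Monte Carlo sketch (Python) of the kind of simulation described: propagate uncertain raw-milk MAP concentrations and pasteurization log-reductions to the probability that a 50-mL retail sample contains at least one surviving organism. All distributions and parameter values below are placeholder assumptions, not the review's inputs.

import numpy as np

rng = np.random.default_rng(42)
n_iter = 100_000
raw_conc = 10 ** rng.normal(-1.0, 1.0, n_iter)             # MAP per mL of raw milk (assumed lognormal)
log_kill = rng.normal(5.0, 1.0, n_iter)                    # pasteurization log10 reduction (assumed)
expected_survivors = raw_conc * 10 ** (-log_kill) * 50.0   # expected organisms per 50-mL sample
detected = rng.poisson(expected_survivors) >= 1            # "detection" = at least one organism present
print("simulated P(MAP present in 50 mL) =", detected.mean())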
Does prediction error drive one-shot declarative learning?
Greve, Andrea; Cooper, Elisa; Kaula, Alexander; Anderson, Michael C; Henson, Richard
2017-06-01
The role of prediction error (PE) in driving learning is well-established in fields such as classical and instrumental conditioning, reward learning and procedural memory; however, its role in human one-shot declarative encoding is less clear. According to one recent hypothesis, PE reflects the divergence between two probability distributions: one reflecting the prior probability (from previous experiences) and the other reflecting the sensory evidence (from the current experience). Assuming unimodal probability distributions, PE can be manipulated in three ways: (1) the distance between the mode of the prior and evidence, (2) the precision of the prior, and (3) the precision of the evidence. We tested these three manipulations across five experiments, in terms of people's ability to encode a single presentation of a scene-item pairing as a function of previous exposures to that scene and/or item. Memory was probed by presenting the scene together with three choices for the previously paired item, in which the two foil items were from other pairings within the same condition as the target item. In Experiment 1, we manipulated the evidence to be either consistent or inconsistent with prior expectations, predicting PE to be larger, and hence memory better, when the new pairing was inconsistent. In Experiments 2a-c, we manipulated the precision of the priors, predicting better memory for a new pairing when the (inconsistent) priors were more precise. In Experiment 3, we manipulated both visual noise and prior exposure for unfamiliar faces, before pairing them with scenes, predicting better memory when the sensory evidence was more precise. In all experiments, the PE hypotheses were supported. We discuss alternative explanations of individual experiments, and conclude that the Predictive Interactive Multiple Memory Signals (PIMMS) framework provides the most parsimonious account of the full pattern of results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hossain, Monir; Wright, Steven; Petersen, Laura A
2002-04-01
One way to monitor patient access to emergent health care services is to use patient characteristics to predict arrival time at the hospital after onset of symptoms. This predicted arrival time can then be compared with actual arrival time to allow monitoring of access to services. Predicted arrival time could also be used to estimate potential effects of changes in health care service availability, such as closure of an emergency department or an acute care hospital. Our goal was to determine the best statistical method for prediction of arrival intervals for patients with acute myocardial infarction (AMI) symptoms. We compared the performance of multinomial logistic regression (MLR) and discriminant analysis (DA) models. Models for MLR and DA were developed using a dataset of 3,566 male veterans hospitalized with AMI in 81 VA Medical Centers in 1994-1995 throughout the United States. The dataset was randomly divided into a training set (n = 1,846) and a test set (n = 1,720). Arrival times were grouped into three intervals on the basis of treatment considerations: <6 hours, 6-12 hours, and >12 hours. One model for MLR and two models for DA were developed using the training dataset. One DA model had equal prior probabilities, and one DA model had proportional prior probabilities. Predictive performance of the models was compared using the test (n = 1,720) dataset. Using the test dataset, the proportions of patients in the three arrival time groups were 60.9% for <6 hours, 10.3% for 6-12 hours, and 28.8% for >12 hours after symptom onset. Whereas the overall predictive performance by MLR and DA with proportional priors was higher, the DA models with equal priors performed much better in the smaller groups. Correct classifications were 62.6% by MLR, 62.4% by DA using proportional prior probabilities, and 48.1% using equal prior probabilities of the groups. The misclassifications by MLR for the three groups were 9.5%, 100.0%, 74.2% for each time interval, respectively. Misclassifications by DA models were 9.8%, 100.0%, and 74.4% for the model with proportional priors and 47.6%, 79.5%, and 51.0% for the model with equal priors. The choice of MLR or DA with proportional priors, or DA with equal priors for monitoring time intervals of predicted hospital arrival time for a population should depend on the consequences of misclassification errors.
Mapping Wildfire Ignition Probability Using Sentinel 2 and LiDAR (Jerte Valley, Cáceres, Spain).
Sánchez Sánchez, Yolanda; Martínez-Graña, Antonio; Santos Francés, Fernando; Mateos Picado, Marina
2018-03-09
Wildfire is a major threat to the environment, and this threat is aggravated by different climatic and socioeconomic factors. The availability of detailed, reliable mapping and periodic and immediate updates makes wildfire prevention and extinction work more effective. An analyst protocol has been generated that allows the precise updating of high-resolution thematic maps. For this protocol, images obtained through the Sentinel 2A satellite, with a return time of five days, have been merged with Light Detection and Ranging (LiDAR) data with a density of 0.5 points/m² in order to obtain vegetation mapping with an accuracy of 88% (kappa = 0.86), which is then extrapolated to fuel model mapping through a decision tree. This process, which is fast and reliable, serves as a cartographic base for the later calculation of ignition-probability mapping. The generated cartography is a fundamental tool to be used in the decision making involved in the planning of preventive silvicultural treatments, extinguishing media distribution, infrastructure construction, etc.
Özarslan, Evren; Koay, Cheng Guan; Shepherd, Timothy M; Komlosh, Michal E; İrfanoğlu, M Okan; Pierpaoli, Carlo; Basser, Peter J
2013-09-01
Diffusion-weighted magnetic resonance (MR) signals reflect information about underlying tissue microstructure and cytoarchitecture. We propose a quantitative, efficient, and robust mathematical and physical framework for representing diffusion-weighted MR imaging (MRI) data obtained in "q-space," and the corresponding "mean apparent propagator (MAP)" describing molecular displacements in "r-space." We also define and map novel quantitative descriptors of diffusion that can be computed robustly using this MAP-MRI framework. We describe efficient analytical representation of the three-dimensional q-space MR signal in a series expansion of basis functions that accurately describes diffusion in many complex geometries. The lowest order term in this expansion contains a diffusion tensor that characterizes the Gaussian displacement distribution, equivalent to diffusion tensor MRI (DTI). Inclusion of higher order terms enables the reconstruction of the true average propagator whose projection onto the unit "displacement" sphere provides an orientational distribution function (ODF) that contains only the orientational dependence of the diffusion process. The representation characterizes novel features of diffusion anisotropy and the non-Gaussian character of the three-dimensional diffusion process. Other important measures this representation provides include the return-to-the-origin probability (RTOP), and its variants for diffusion in one- and two-dimensions-the return-to-the-plane probability (RTPP), and the return-to-the-axis probability (RTAP), respectively. These zero net displacement probabilities measure the mean compartment (pore) volume and cross-sectional area in distributions of isolated pores irrespective of the pore shape. MAP-MRI represents a new comprehensive framework to model the three-dimensional q-space signal and transform it into diffusion propagators. Experiments on an excised marmoset brain specimen demonstrate that MAP-MRI provides several novel, quantifiable parameters that capture previously obscured intrinsic features of nervous tissue microstructure. This should prove helpful for investigating the functional organization of normal and pathologic nervous tissue. Copyright © 2013 Elsevier Inc. All rights reserved.
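For reference, the zero-displacement indices named above follow from the Fourier relationship between the q-space signal and the propagator. The following is a hedged summary in standard notation (u_par denotes the assumed fiber axis, and q_perp spans the plane perpendicular to it); these are not equations reproduced from the paper.

% Fourier relation between the q-space signal E(q) and the propagator P(r),
% and the zero-displacement indices (notation assumed, not copied from the paper):
P(\mathbf{r}) = \int_{\mathbb{R}^3} E(\mathbf{q})\, e^{-2\pi i\,\mathbf{q}\cdot\mathbf{r}}\, d\mathbf{q},
\qquad
\mathrm{RTOP} = P(\mathbf{0}) = \int_{\mathbb{R}^3} E(\mathbf{q})\, d\mathbf{q},
\qquad
\mathrm{RTAP} = \int_{\mathbb{R}^2} E(\mathbf{q}_\perp)\, d\mathbf{q}_\perp,
\qquad
\mathrm{RTPP} = \int_{\mathbb{R}} E(q\,\mathbf{u}_\parallel)\, dq .

Under the isolated-pore interpretation given in the abstract, 1/RTOP and 1/RTAP scale with the mean pore volume and cross-sectional area, respectively, and RTPP is the corresponding one-dimensional quantity along the fiber axis.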
Verification of the WFAS Lightning Efficiency Map
Paul Sopko; Don Latham; Isaac Grenfell
2007-01-01
A Lightning Ignition Efficiency map was added to the suite of daily maps offered by the Wildland Fire Assessment System (WFAS) in 1999. This map computes a lightning probability of ignition (POI) based on the estimated fuel type, fuel depth, and 100-hour fuel moisture interpolated from the Remote Automated Weather Station (RAWS) network. An attempt to verify the...
Intrinsic Bayesian Active Contours for Extraction of Object Boundaries in Images
Srivastava, Anuj
2010-01-01
We present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to the past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate the prior shape knowledge in the form of vector fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition or classification. PMID:21076692
The neural basis of belief updating and rational decision making.
Achtziger, Anja; Alós-Ferrer, Carlos; Hügelschäfer, Sabine; Steinhauser, Marco
2014-01-01
Rational decision making under uncertainty requires forming beliefs that integrate prior and new information through Bayes' rule. Human decision makers typically deviate from Bayesian updating by either overweighting the prior (conservatism) or overweighting new information (e.g. the representativeness heuristic). We investigated these deviations through measurements of electrocortical activity in the human brain during incentivized probability-updating tasks and found evidence of extremely early commitment to boundedly rational heuristics. Participants who overweight new information display a lower sensibility to conflict detection, captured by an event-related potential (the N2) observed around 260 ms after the presentation of new information. Conservative decision makers (who overweight prior probabilities) make up their mind before new information is presented, as indicated by the lateralized readiness potential in the brain. That is, they do not inhibit the processing of new information but rather immediately rely on the prior for making a decision.
NASA Astrophysics Data System (ADS)
Skilling, John
2005-11-01
This tutorial gives a basic overview of Bayesian methodology, from its axiomatic foundation through the conventional development of data analysis and model selection to its rôle in quantum mechanics, and ending with some comments on inference in general human affairs. The central theme is that probability calculus is the unique language within which we can develop models of our surroundings that have predictive capability. These models are patterns of belief; there is no need to claim external reality. Contents: 1. Logic and probability; 2. Probability and inference; 3. Probability and model selection; 4. Prior probabilities; 5. Probability and frequency; 6. Probability and quantum mechanics; 7. Probability and fundamentalism; 8. Probability and deception; 9. Prediction and truth.
Numerical optimization using flow equations.
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four dimensional probability distribution function was validated by comparing it to the two dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web-service on a server using the Python Django framework. The server performs the sampling, performs the mapping and returns the results in a javascript object notation format.
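A minimal sketch (Python) of the sampling step: draw new lesion positions in proportion to an empirical probability map built on a standardized breast grid (2D here for brevity; the described system uses a four-dimensional distribution coupling the two radiographic projections, plus piecewise affine transforms to and from individual mammograms). The grid size and synthetic lesion coordinates are illustrative.

import numpy as np

rng = np.random.default_rng(7)
# Empirical histogram of mapped lesion locations on a 64 x 64 standardised grid (synthetic).
rows = rng.integers(10, 54, size=3000)
cols = rng.integers(5, 60, size=3000)
hist, _, _ = np.histogram2d(rows, cols, bins=64, range=[[0, 64], [0, 64]])
pdf = hist / hist.sum()

# Sample new lesion positions with probability proportional to the measured distribution.
flat = rng.choice(pdf.size, size=10, p=pdf.ravel())
new_rows, new_cols = np.unravel_index(flat, pdf.shape)
print(list(zip(new_rows.tolist(), new_cols.tolist())))
# Each sampled (row, col) would then be mapped onto a target mammogram via the
# piecewise affine transform from the average outline to that breast's outline.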
ERIC Educational Resources Information Center
Richardson, R. Thomas; Sammons, Dotty; Del-Parte, Donna
2018-01-01
This study compared learning performance during and following AR and non-AR topographic map instruction and practice. Two-way ANOVA testing indicated no significant differences on a posttest assessment between map type and spatial ability. Prior learning activity results revealed a significant performance difference between AR and non-AR treatment…
ERIC Educational Resources Information Center
Bayen, Ute J.; Kuhlmann, Beatrice G.
2011-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source-guessing probabilities to the perceived contingency…
78 FR 16189 - Transportation of Agricultural Commodities
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-14
... 32934, respectively, of the Moving Ahead for Progress in the 21st Century Act (MAP-21). Although prior statutory exemptions involving agriculture are unchanged, some of these exemptions overlap with MAP-21...
A high-resolution cattle CNV map by population-scale genome sequencing
USDA-ARS?s Scientific Manuscript database
Copy Number Variations (CNVs) are common genomic structural variations that have been linked to human diseases and phenotypic traits. Prior studies in cattle have produced low-resolution CNV maps. We constructed a draft, high-resolution map of cattle CNVs based on whole genome sequencing data from 7...
Dynamic prescription maps for site-specific variable rate irrigation of cotton
USDA-ARS?s Scientific Manuscript database
A prescription map is a set of instructions that controls a variable rate irrigation (VRI) system. These maps, which may be based on prior yield, soil texture, topography, or soil electrical conductivity data, are often manually applied at the beginning of an irrigation season and remain static. The...
Warren, Tessa; Dickey, Michael Walsh; Liburd, Teljer L
2017-07-01
The rational inference, or noisy channel, account of language comprehension predicts that comprehenders are sensitive to the probabilities of different interpretations for a given sentence and adapt as these probabilities change (Gibson, Bergen & Piantadosi, 2013). This account provides an important new perspective on aphasic sentence comprehension: aphasia may increase the likelihood of sentence distortion, leading people with aphasia (PWA) to rely more on the prior probability of an interpretation and less on the form or structure of the sentence (Gibson, Sandberg, Fedorenko, Bergen & Kiran, 2015). We report the results of a sentence-picture matching experiment that tested the predictions of the rational inference account and other current models of aphasic sentence comprehension across a variety of sentence structures. Consistent with the rational inference account, PWA showed similar sensitivity to the probability of particular kinds of form distortions as age-matched controls, yet overall their interpretations relied more on prior probability and less on sentence form. As predicted by rational inference, but not by other models of sentence comprehension in aphasia, PWA's interpretations were more faithful to the form for active and passive sentences than for direct object and prepositional object sentences. However contra rational inference, there was no evidence that individual PWA's severity of syntactic or semantic impairment predicted their sensitivity to form versus the prior probability of a sentence, as cued by semantics. These findings confirm and extend previous findings that suggest the rational inference account holds promise for explaining aphasic and neurotypical comprehension, but they also raise new challenges for the account. Copyright © 2017 Elsevier Ltd. All rights reserved.
Risk-targeted maps for Romania
NASA Astrophysics Data System (ADS)
Vacareanu, Radu; Pavel, Florin; Craciun, Ionut; Coliba, Veronica; Arion, Cristian; Aldea, Alexandru; Neagu, Cristian
2018-03-01
Romania has one of the highest seismic hazard levels in Europe. The seismic hazard is due to a combination of local crustal seismic sources, situated mainly in the western part of the country, and the Vrancea intermediate-depth seismic source, which is found at the bend of the Carpathian Mountains. Recent seismic hazard studies have shown that there are consistent differences between the slopes of the seismic hazard curves for sites situated in the fore-arc and back-arc of the Carpathian Mountains. Consequently, in this study we extend this finding to the evaluation of the probability of collapse of buildings and finally to the development of uniform risk-targeted maps. The main advantage of the uniform-risk approach is that the target probability of collapse will be uniform throughout the country. Finally, the results obtained are discussed in the light of a recent study with the same focus performed at the European level using hazard data from the SHARE project. The analyses performed in this study point to a dominant influence of the quantile of peak ground acceleration used for anchoring the fragility function. This parameter essentially alters the shape of the risk-targeted maps, shifting the areas with higher collapse probabilities from eastern to western Romania as its exceedance probability increases. Consequently, a uniform procedure for deriving risk-targeted maps appears all the more necessary.
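A hedged numerical sketch (Python) of the risk-targeting computation this discussion refers to: convolve a site hazard curve with a lognormal collapse fragility anchored at a chosen quantile of the design peak ground acceleration, and read off the collapse probability. The synthetic hazard curve, the dispersion beta = 0.6, and the 10th-percentile anchoring are illustrative assumptions, not values from the Romanian study.

import numpy as np
from scipy.stats import norm

pga = np.linspace(0.01, 2.0, 400)                      # peak ground acceleration (g)
haz = 2e-3 * (pga / 0.2) ** -2.5                       # annual exceedance rate (synthetic curve)

pga_475 = np.interp(1.0 / 475.0, haz[::-1], pga[::-1]) # design PGA, 475-year return period
beta = 0.6                                             # assumed fragility dispersion
# Anchor the fragility so the design PGA is, e.g., the 10th percentile of collapse capacity;
# this anchoring quantile is the parameter the abstract identifies as dominant.
median_capacity = pga_475 * np.exp(-norm.ppf(0.10) * beta)
frag = norm.cdf(np.log(pga / median_capacity) / beta)  # P(collapse | PGA)

# Integrate the fragility against the hazard curve: rate = sum frag * |d(haz)|.
rate = np.sum(0.5 * (frag[1:] + frag[:-1]) * -np.diff(haz))
print(f"design PGA = {pga_475:.2f} g, annual collapse rate = {rate:.2e}, "
      f"50-year collapse probability = {1 - np.exp(-50 * rate):.3f}")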
Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution
NASA Astrophysics Data System (ADS)
Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing
2016-12-01
The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on a 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface-evolution manner. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.
BM-Map: Bayesian Mapping of Multireads for Next-Generation Sequencing Data
Ji, Yuan; Xu, Yanxun; Zhang, Qiong; Tsui, Kam-Wah; Yuan, Yuan; Norris, Clift; Liang, Shoudan; Liang, Han
2011-01-01
Next-generation sequencing (NGS) technology generates millions of short reads, which provide valuable information for various aspects of cellular activities and biological functions. A key step in NGS applications (e.g., RNA-Seq) is to map short reads to correct genomic locations within the source genome. While most reads are mapped to a unique location, a significant proportion of reads align to multiple genomic locations with equal or similar numbers of mismatches; these are called multireads. The ambiguity in mapping the multireads may lead to bias in downstream analyses. Currently, most practitioners discard the multireads in their analysis, resulting in a loss of valuable information, especially for the genes with similar sequences. To refine the read mapping, we develop a Bayesian model that computes the posterior probability of mapping a multiread to each competing location. The probabilities are used for downstream analyses, such as the quantification of gene expression. We show through simulation studies and RNA-Seq analysis of real-life data that the Bayesian method yields better mapping than the current leading methods. We provide a C++ program for downloading that is being packaged into user-friendly software. PMID:21517792
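A toy sketch (Python) of the central computation: the posterior probability that a multiread belongs to each of its candidate genomic locations, combining a prior from locally unique-read support with a mismatch-based likelihood. The error rate, read length, smoothing, and counts below are illustrative simplifications, not the BM-Map model itself.

import numpy as np

def multiread_posterior(unique_read_support, mismatches, error_rate=0.01, read_len=50):
    support = np.asarray(unique_read_support, dtype=float)
    mm = np.asarray(mismatches)
    prior = (support + 1.0) / np.sum(support + 1.0)                  # add-one smoothing
    lik = error_rate ** mm * (1.0 - error_rate) ** (read_len - mm)   # simple per-base error model
    post = prior * lik
    return post / post.sum()

# Three candidate loci: unique-read support of 120, 30 and 5, with 0, 1 and 1 mismatches.
print(multiread_posterior([120, 30, 5], [0, 1, 1]))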
NASA Technical Reports Server (NTRS)
Harwood, Kelly; Wickens, Christopher D.
1991-01-01
Computer-generated map displays for NOE and low-level helicopter flight were designed according to prior research on maps, navigational problem solving, and spatial cognition in large-scale environments. The north-up map emphasized consistency of object location, whereas the track-up map emphasized map-terrain congruency. A component analysis indicates that different cognitive components, e.g., orienting and absolute object location, are supported to varying degrees by properties of different frames of reference.
A multi-part matching strategy for mapping LOINC with laboratory terminologies
Lee, Li-Hui; Groß, Anika; Hartung, Michael; Liou, Der-Ming; Rahm, Erhard
2014-01-01
Objective: To address the problem of mapping local laboratory terminologies to Logical Observation Identifiers Names and Codes (LOINC). To study different ontology matching algorithms and investigate how the probability of term combinations in LOINC helps to increase match quality and reduce manual effort. Materials and methods: We proposed two matching strategies: full name and multi-part. The multi-part approach also considers the occurrence probability of combined concept parts. It can further recommend possible combinations of concept parts to allow more local terms to be mapped. Three real-world laboratory databases from Taiwanese hospitals were used to validate the proposed strategies with respect to different quality measures and execution run time. A comparison with the commonly used tool, Regenstrief LOINC Mapping Assistant (RELMA) Lab Auto Mapper (LAM), was also carried out. Results: The new multi-part strategy yields the best match quality, with F-measure values between 89% and 96%. It can automatically match 70–85% of the laboratory terminologies to LOINC. The recommendation step can further propose mapping to (proposed) LOINC concepts for 9–20% of the local terminology concepts. On average, 91% of the local terminology concepts can be correctly mapped to existing or newly proposed LOINC concepts. Conclusions: The mapping quality of the multi-part strategy is significantly better than that of LAM. It enables domain experts to perform LOINC matching with little manual work. The probability of term combinations proved to be a valuable strategy for increasing the quality of match results, providing recommendations for proposed LOINC concepts, and decreasing the run time for match processing. PMID:24363318
Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.
2008-01-01
Maps showing the probability of surface manifestations of liquefaction in the northern Santa Clara Valley were prepared with liquefaction probability curves. The area includes the communities of San Jose, Campbell, Cupertino, Los Altos, Los Gatos, Milpitas, Mountain View, Palo Alto, Santa Clara, Saratoga, and Sunnyvale. The probability curves were based on complementary cumulative frequency distributions of the liquefaction potential index (LPI) for surficial geologic units in the study area. LPI values were computed with extensive cone penetration test soundings. Maps were developed for three earthquake scenarios: an M7.8 on the San Andreas Fault comparable to the 1906 event, an M6.7 on the Hayward Fault comparable to the 1868 event, and an M6.9 on the Calaveras Fault. Ground motions were estimated with the Boore and Atkinson (2008) attenuation relation. Liquefaction is predicted for all three events in young Holocene levee deposits along the major creeks. Liquefaction probabilities are highest for the M7.8 earthquake, ranging from 0.33 to 0.37 if a 1.5-m deep water table is assumed, and 0.10 to 0.14 if a 5-m deep water table is assumed. Liquefaction probabilities of the other surficial geologic units are less than 0.05. Probabilities for the scenario earthquakes are generally consistent with observations during historical earthquakes.
Where can pixel counting area estimates meet user-defined accuracy requirements?
NASA Astrophysics Data System (ADS)
Waldner, François; Defourny, Pierre
2017-08-01
Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimates are below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
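A short worked illustration (Python) of the bias the article decomposes: with asymmetric omission and commission errors, the pixel-counted class area differs from the true area even though every pixel is labelled. All numbers are invented for illustration, not taken from the South African cropland case study.

true_crop, true_other = 1000.0, 9000.0      # true pixel counts of the two classes
omission_rate = 0.20                        # crop pixels missed by the classifier
commission_rate = 0.03                      # other pixels wrongly labelled as crop

mapped_crop = true_crop * (1 - omission_rate) + true_other * commission_rate
bias = mapped_crop - true_crop
print(f"pixel-counted crop area = {mapped_crop:.0f} px ({100 * bias / true_crop:+.1f}% bias)")
# The estimator is unbiased only when the omitted and committed areas cancel,
# i.e. true_crop * omission_rate == true_other * commission_rate.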
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
Saad, Wilson Abrão; Guarda, Ismael Francisco Motta Siqueira; Camargo, Luiz Antonio de Arruda; dos Santos, Talmir Augusto Faria Brisola; Saad, William Abrão; Simões, Sylvio; Guarda, Renata Saad
2002-04-05
Our studies have focused on the effect of injection of L-NAME and sodium nitroprusside (SNP) on the salivary secretion, arterial blood pressure, sodium excretion and urinary volume induced by pilocarpine injected into the medial septal area (MSA). Rats were anesthetized with urethane (1.25 g/kg b. wt.) and a stainless steel cannula was implanted into their MSA. The amount of saliva secretion was studied over a five-minute period after injection of pilocarpine into the MSA. Injection of pilocarpine (10, 20, 40, 80, 160 microg/microl) into the MSA produced a dose-dependent increase in salivary secretion. L-NG-nitro arginine methyl ester (L-NAME) (40 microg/microl), a nitric oxide (NO) synthase inhibitor, injected into the MSA prior to the injection of pilocarpine, increased the salivary secretion induced by pilocarpine. Sodium nitroprusside (SNP) (30 microg/microl) injected into the MSA prior to the injection of pilocarpine attenuated the increase in salivary secretion induced by pilocarpine. Mean arterial pressure (MAP) increased after injection of pilocarpine into the MSA. L-NAME injected into the MSA prior to injection of pilocarpine increased the MAP. SNP injected into the MSA prior to pilocarpine attenuated the effect of pilocarpine on MAP. Pilocarpine (40 microg/microl) injected into the MSA induced an increase in sodium excretion and urinary volume. L-NAME injected prior to pilocarpine into the MSA increased the urinary sodium excretion and urinary volume induced by pilocarpine. SNP injected prior to pilocarpine into the MSA decreased the sodium excretion and urinary volume induced by pilocarpine. All these effects of pilocarpine depend on the release of nitric oxide into the MSA. We may also conclude that the MSA is involved in the cholinergic excitatory mechanisms that induce salivary secretion, an increase in MAP, and increases in sodium excretion and urinary volume.
Comparing hard and soft prior bounds in geophysical inverse problems
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.
Comparing hard and soft prior bounds in geophysical inverse problems
NASA Technical Reports Server (NTRS)
Backus, George E.
1987-01-01
In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.
2013-11-01
A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.
NASA Astrophysics Data System (ADS)
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources restrict the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231.
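A toy sketch of implicit sampling with a linear map, using a made-up two-parameter negative log-posterior in place of the TOUGH2-based one; the function, dimensions and sample counts are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(x):
    """Toy F(x) = -log posterior; stands in for the TOUGH2-based misfit."""
    return 0.5 * (x[0] - 1.0) ** 2 + 0.5 * (x[1] + 0.5) ** 2 + 0.1 * x[0] ** 4

def implicit_sampling_linear_map(n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    # 1) MAP estimate and the minimum value phi of F
    res = minimize(neg_log_posterior, np.zeros(2))
    mu, phi = res.x, res.fun
    # 2) Hessian of F at the MAP by finite differences
    eps, I = 1e-3, np.eye(2)
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            H[i, j] = (neg_log_posterior(mu + eps * (I[i] + I[j]))
                       - neg_log_posterior(mu + eps * I[i])
                       - neg_log_posterior(mu + eps * I[j]) + phi) / eps ** 2
    L = np.linalg.cholesky(np.linalg.inv(H))
    # 3) Linear map: push standard-normal samples through x = mu + L xi,
    #    so they land in the high-probability region around the MAP
    xi = rng.standard_normal((n_samples, 2))
    x = mu + xi @ L.T
    # 4) Importance weights correct for the Gaussian approximation
    F = np.array([neg_log_posterior(xk) for xk in x])
    log_w = phi - F + 0.5 * np.sum(xi ** 2, axis=1)
    w = np.exp(log_w - log_w.max())
    return x, w / w.sum()

samples, weights = implicit_sampling_linear_map()
print(np.average(samples, axis=0, weights=weights))  # weighted posterior mean
```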
NASA Technical Reports Server (NTRS)
Huckle, H. F. (Principal Investigator)
1980-01-01
The most probable current U.S. taxonomic classification of the soils estimated to dominate world soil map (WSM) units in selected crop producing states of Argentina and Brazil is presented. Representative U.S. soil series for the units are given. The map units occurring in each state are listed with areal extent and major U.S. land resource areas in which similar soils most probably occur. Soil series sampled in LARS Technical Report 111579 and major land resource areas in which they occur with corresponding similar WSM units at the taxonomic subgroup levels are given.
ERIC Educational Resources Information Center
Axelrod, Michael I.; Zank, Amber J.
2012-01-01
Noncompliance is one of the most problematic behaviors within the school setting. One strategy to increase compliance of noncompliant students is a high-probability command sequence (HPCS; i.e., a set of simple commands in which an individual is likely to comply immediately prior to the delivery of a command that has a lower probability of…
Kim, Yusung; Tomé, Wolfgang A
2008-01-01
Voxel based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated, to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT did yield a more uniform isodose map to the entire PTV while selective boosting did result in a nonuniform isodose map. However, when employing voxel based iso-TCP maps selective boosting exhibited a more uniform tumor control probability map compared to what could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies are discussed. We believe as the need for functional image guided treatment planning grows, voxel based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans.
The Influence of Mind Mapping on Eighth Graders' Science Achievement
ERIC Educational Resources Information Center
Abi-El-Mona, Issam; Adb-El-Khalick, Fouad
2008-01-01
This study assessed the influence of using mind maps as a learning tool on eighth graders' science achievement, whether such influence was mediated by students' prior scholastic achievement, and the relationship between students' mind maps and their conceptual understandings. Sixty-two students enrolled in four intact sections of a grade 8 science…
Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps
NASA Astrophysics Data System (ADS)
Brooks, E. M.; Stein, S. A.; Spencer, B. D.
2015-12-01
The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered by arguing that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well these maps actually perform is unknown. We explore this hotly contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded only at a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that the factors controlling map performance are complicated, and that learning more about how maps perform and why would be valuable in making more effective policy.
Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps
NASA Astrophysics Data System (ADS)
Zlinszky, A.; Kania, A.
2016-06-01
Instead of assigning every map pixel to a single class, fuzzy classification includes information on the class assigned to each pixel but also the certainty of this class and the alternative possible classes based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles" visualizing the number of pixels in bins according to the probability of the dominant class, also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard boundary evaluation procedures, support active learning-based iterative classification and can be applied for operational use.
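One plausible reading of the proposed "dominance profiles", sketched in Python; the array layout and bin count are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def dominance_profile(class_probs: np.ndarray, n_bins: int = 10):
    """class_probs: (n_pixels, n_classes) fuzzy memberships from an ensemble.
    Bins pixels by the probability of their dominant class and returns, per bin,
    the pixel count and the mean probability of every class."""
    p_dom = class_probs.max(axis=1)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p_dom, bins) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    mean_probs = np.zeros((n_bins, class_probs.shape[1]))
    for b in range(n_bins):
        if counts[b]:
            mean_probs[b] = class_probs[idx == b].mean(axis=0)
    return counts, mean_probs
```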
Agapova, Maria; Devine, Emily B; Nguyen, Hiep; Wolf, Fredric M; Inoue, Lurdes Y T
2014-07-01
Assessing relative performance among competing interventions is an important part of comparative effectiveness research. Bayesian indirect comparisons add information to existing Cochrane reviews, such as which intervention is likely to perform best. However, heterogeneity variance priors may influence results and, potentially, clinical guidance. We highlight the features of Bayesian indirect comparisons using a case study of a Cochrane review update in asthma care. The probability that one self-management educational intervention outperforms others is estimated. Simulation studies investigate the effect of heterogeneity variance prior distributions. Results suggest a 55% probability that individual education is best, followed by combination (39%) and group (6%). The intervention with few trials was sensitive to the prior distributions. Bayesian indirect comparison updates of Cochrane reviews are valuable comparative effectiveness research tools.
Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior
Borrero, Carrie S.W; Borrero, John C
2008-01-01
We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281
Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness
NASA Astrophysics Data System (ADS)
Hardy, Tyler J.; Cain, Stephen C.
2016-05-01
The goal of this research effort is to improve Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has been previously developed to mitigate the effects of spatial aliasing. This is done by testing potential Resident Space Objects (RSOs) against several sub-pixel shifted Point Spread Functions (PSFs). An MHT has been shown to increase detection performance for the same false alarm rate. In this paper, the prior probabilities assumed in an MHT algorithm are investigated. First, an analysis of the pixel decision space is completed to determine alternate hypothesis prior probabilities. These probabilities are then implemented in an MHT algorithm, and the algorithm is tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection, Pd, analysis.
A Bayesian Assessment of Seismic Semi-Periodicity Forecasts
NASA Astrophysics Data System (ADS)
Nava, F.; Quinteros, C.; Glowacka, E.; Frez, J.
2016-01-01
Among the schemes for earthquake forecasting, the search for semi-periodicity during large earthquakes in a given seismogenic region plays an important role. When considering earthquake forecasts based on semi-periodic sequence identification, the Bayesian formalism is a useful tool for: (1) assessing how well a given earthquake satisfies a previously made forecast; (2) re-evaluating the semi-periodic sequence probability; and (3) testing other prior estimations of the sequence probability. A comparison of Bayesian estimates with updated estimates of semi-periodic sequences that incorporate new data not used in the original estimates shows extremely good agreement, indicating that: (1) the probability that a semi-periodic sequence is not due to chance is an appropriate estimate for the prior sequence probability estimate; and (2) the Bayesian formalism does a very good job of estimating corrected semi-periodicity probabilities, using slightly less data than that used for updated estimates. The Bayesian approach is exemplified explicitly by its application to the Parkfield semi-periodic forecast, and results are given for its application to other forecasts in Japan and Venezuela.
Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.
2004-01-01
Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
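As a back-of-the-envelope illustration of the two probability models (the counts below are hypothetical, not Seattle's historical record), a minimal sketch in Python:

```python
import math

def poisson_annual_exceedance(n_events: int, record_years: float) -> float:
    """Annual probability of one or more landslides under a Poisson model,
    with the rate estimated from the historical record."""
    rate = n_events / record_years          # events per year
    return 1.0 - math.exp(-rate)            # P(N >= 1 in one year)

def binomial_annual_exceedance(years_with_events: int, record_years: int) -> float:
    """Annual probability that a year has one or more landslides under a
    binomial model (fraction of 'landslide years' in the record)."""
    return years_with_events / record_years

# Hypothetical hillslope cell: 30 events in 90 years, 20 distinct landslide years
print(poisson_annual_exceedance(30, 90))    # ~0.28
print(binomial_annual_exceedance(20, 90))   # ~0.22
```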
ERIC Educational Resources Information Center
Popova-Gonci, Viktoria; Lamb, Monica C.
2012-01-01
Prior learning assessment (PLA) students enter academia with different types of concepts--some of them have been formally accepted and labeled by academia and others are informally formulated by students via independent and/or experiential learning. The critical goal of PLA practices is to assess an intricate combination of prior learning…
Repeat migration and disappointment.
Grant, E K; Vanderkamp, J
1986-01-01
This article investigates the determinants of repeat migration among the 44 regions of Canada, using information from a large micro-database which spans the period 1968 to 1971. The explanation of repeat migration probabilities is a difficult task, and this attempt is only partly successful. Many of the explanatory variables are not significant, and the overall explanatory power of the equations is not high. In the area of personal characteristics, the variables related to age, sex, and marital status are generally significant and have the expected signs. The distance variable has a strongly positive effect on onward move probabilities. Variables related to prior migration experience have an important impact that differs between return and onward probabilities. In particular, the occurrence of prior moves has a striking effect on the probability of onward migration. The variable representing disappointment, or relative success of the initial move, plays a significant role in explaining repeat migration probabilities. The disappointment variable represents the ratio of actual versus expected wage income in the year after the initial move, and its effect on both return and onward migration probabilities is always negative and almost always highly significant. The repeat probabilities diminish after a year's stay in the destination region, but disappointment in the most recent year still has a bearing on the delayed repeat probabilities. While the quantitative impact of the disappointment variable is not large, it is difficult to draw comparisons since similar estimates are not available elsewhere.
Multi-Agent Cooperative Target Search
Hu, Jinwen; Xie, Lihua; Xu, Jun; Xu, Zhao
2014-01-01
This paper addresses a vision-based cooperative search for multiple mobile ground targets by a group of unmanned aerial vehicles (UAVs) with limited sensing and communication capabilities. The airborne camera on each UAV has a limited field of view and its target discriminability varies as a function of altitude. First, by dividing the whole surveillance region into cells, a probability map can be formed for each UAV indicating the probability of target existence within each cell. Then, we propose a distributed probability map updating model which includes the fusion of measurement information, information sharing among neighboring agents, information decay and transmission due to environmental changes such as the target movement. Furthermore, we formulate the target search problem as a multi-agent cooperative coverage control problem by optimizing the collective coverage area and the detection performance. The proposed map updating model and the cooperative control scheme are distributed, i.e., assuming that each agent only communicates with its neighbors within its communication range. Finally, the effectiveness of the proposed algorithms is illustrated by simulation. PMID:24865884
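A minimal sketch of the per-cell Bayesian update underlying such a probability map, assuming illustrative detection and false-alarm probabilities rather than the paper's altitude-dependent sensor model:

```python
def update_cell_probability(p: float, detection: bool,
                            p_d: float = 0.8, p_f: float = 0.1) -> float:
    """Bayesian update of the target-existence probability of one map cell,
    given a sensor reading with detection probability p_d and false-alarm
    probability p_f (values here are illustrative, not from the paper)."""
    if detection:
        num = p_d * p
        den = p_d * p + p_f * (1.0 - p)
    else:
        num = (1.0 - p_d) * p
        den = (1.0 - p_d) * p + (1.0 - p_f) * (1.0 - p)
    return num / den

p = 0.5                                             # uninformative prior for the cell
p = update_cell_probability(p, detection=True)      # ~0.89 after one detection
p = update_cell_probability(p, detection=False)     # a miss pulls it back toward the prior
print(p)
```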
Probing the statistics of transport in the Hénon Map
NASA Astrophysics Data System (ADS)
Alus, O.; Fishman, S.; Meiss, J. D.
2016-09-01
The phase space of an area-preserving map typically contains infinitely many elliptic islands embedded in a chaotic sea. Orbits near the boundary of a chaotic region have been observed to stick for long times, strongly influencing their transport properties. The boundary is composed of invariant "boundary circles." We briefly report recent results of the distribution of rotation numbers of boundary circles for the Hénon quadratic map and show that the probability of occurrence of small integer entries of their continued fraction expansions is larger than would be expected for a number chosen at random. However, large integer entries occur with probabilities distributed proportionally to the random case. The probability distributions of ratios of fluxes through island chains are reported as well. These island chains are neighbours in the sense of the Meiss-Ott Markov-tree model. Two distinct universality families are found. The distributions of the ratio between the flux and orbital period are also presented. All of these results have implications for models of transport in mixed phase space.
Distribution of submerged aquatic vegetation in the St. Louis River estuary: Maps and models
In late summer of 2011 and 2012 we used echo-sounding gear to map the distribution of submerged aquatic vegetation (SAV) in the St. Louis River Estuary (SLRE). From these data we produced maps of SAV distribution and we created logistic models to predict the probability of occurr...
Models for loosely linked gene duplicates suggest lengthy persistence of both copies.
O'Hely, Martin; Wockner, Leesa
2007-06-21
Consider the appearance of a duplicate copy of a gene at a locus linked loosely, if at all, to the locus at which the gene is usually found. If all copies of the gene are subject to non-functionalizing mutations, then two fates are possible: loss of functional copies at the duplicate locus (loss of duplicate expression), or loss of functional copies at the original locus (map change). This paper proposes a simple model to address the probability of map change, the time taken for a map change and/or loss of duplicate expression, and considers where in the spectrum between loss of duplicate expression and map change such a duplicate complex is likely to be found. The findings are: the probability of map change is always half the reciprocal of the population size N, the time for a map change to occur is order NlogN generations, and that there is a marked tendency for duplicates to remain near equi-frequency with the gene at the original locus for a large portion of that time. This is in excellent agreement with simulations.
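The abstract's two headline results, written as formulas (N is the population size; the map-change time is an order-of-magnitude statement):

```latex
P(\text{map change}) = \frac{1}{2N},
\qquad
T_{\text{map change}} = O(N \log N)\ \text{generations}
```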
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
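The tool itself is an R-based web service; purely as an illustration of the non-parametric estimates it offers (HDE and KDE), here is a sketch in Python on synthetic data (all numbers are invented):

```python
import numpy as np
from scipy import stats

# Hypothetical landslide areas in m^2 (log-normally scattered for illustration)
rng = np.random.default_rng(0)
areas = rng.lognormal(mean=7.0, sigma=1.2, size=300)

# Kernel density estimate of the probability density of log10(area)
log_areas = np.log10(areas)
kde = stats.gaussian_kde(log_areas)
grid = np.linspace(log_areas.min(), log_areas.max(), 200)
pdf_log = kde(grid)                       # KDE: density in log10(area)

# Histogram density estimate (HDE) for comparison
hist, edges = np.histogram(log_areas, bins=30, density=True)
```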
Wang, Li; Li, Gang; Adeli, Ehsan; Liu, Mingxia; Wu, Zhengwang; Meng, Yu; Lin, Weili; Shen, Dinggang
2018-06-01
Tissue segmentation of infant brain MRIs with risk of autism is critically important for characterizing early brain development and identifying biomarkers. However, it is challenging due to low tissue contrast caused by inherent ongoing myelination and maturation. In particular, at around 6 months of age, the voxel intensities in both gray matter and white matter are within similar ranges, thus leading to the lowest image contrast in the first postnatal year. Previous studies typically employed intensity images and tentatively estimated tissue probabilities to train a sequence of classifiers for tissue segmentation. However, the important prior knowledge of brain anatomy is largely ignored during the segmentation. Consequently, the segmentation accuracy is still limited and topological errors frequently exist, which will significantly degrade the performance of subsequent analyses. Although topological errors could be partially handled by retrospective topological correction methods, their results may still be anatomically incorrect. To address these challenges, in this article, we propose an anatomy-guided joint tissue segmentation and topological correction framework for isointense infant MRI. Particularly, we adopt a signed distance map with respect to the outer cortical surface as anatomical prior knowledge, and incorporate such prior information into the proposed framework to guide segmentation in ambiguous regions. Experimental results on subjects acquired from the National Database for Autism Research demonstrate the effectiveness of the proposed framework in handling topological errors, as well as some robustness to motion. Comparisons with the state-of-the-art methods further demonstrate the advantages of the proposed method in terms of both segmentation accuracy and topological correctness. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Marrero, J. M.; García, A.; Llinares, A.; Rodriguez-Losada, J. A.; Ortiz, R.
2012-03-01
One of the critical issues in managing volcanic crises is making the decision to evacuate a densely-populated region. In order to take a decision of such importance it is essential to estimate the cost in lives for each of the expected eruptive scenarios. One of the tools that assist in estimating the number of potential fatalities for such decision-making is the calculation of the FN-curves. In this case the FN-curve is a graphical representation that relates the frequency of the different hazards to be expected for a particular volcano or volcanic area, and the number of potential fatalities expected for each event if the zone of impact is not evacuated. In this study we propose a method for assessing the impact that a possible eruption from the Tenerife Central Volcanic Complex (CVC) would have on the population at risk. Factors taken into account include the spatial probability of the eruptive scenarios (susceptibility) and the temporal probability of the magnitudes of the eruptive scenarios. For each point or cell of the susceptibility map with greater probability, a series of probability-scaled hazard maps is constructed for the whole range of magnitudes expected. The number of potential fatalities is obtained from the intersection of the hazard maps with the spatial map of population distribution. The results show that the Emergency Plan for Tenerife must provide for the evacuation of more than 100,000 persons.
Singer, Donald A.; Kouda, Ryoichi
1991-01-01
The FINDER system employs geometric probability, Bayesian statistics, and the normal probability density function to integrate spatial and frequency information to produce a map of probabilities of target centers. Target centers can be mineral deposits, alteration associated with mineral deposits, or any other target that can be represented by a regular shape on a two dimensional map. The size, shape, mean, and standard deviation for each variable are characterized in a control area and the results applied by means of FINDER to the study area. The Kushikino deposit consists of groups of quartz-calcite-adularia veins that produced 55 tonnes of gold and 456 tonnes of silver since 1660. Part of a 6 by 10 km area near Kushikino served as a control area. Within the control area, data plotting, contouring, and cluster analysis were used to identify the barren and mineralized populations. Sodium was found to be depleted in an elliptically shaped area 3.1 by 1.6 km, potassium was both depleted and enriched locally in an elliptically shaped area 3.0 by 1.3 km, and sulfur was enriched in an elliptically shaped area 5.8 by 1.6 km. The potassium, sodium, and sulfur content from 233 surface rock samples were each used in FINDER to produce probability maps for the 12 by 30 km study area which includes Kushikino. High probability areas for each of the individual variables are over and offset up to 4 km eastward from the main Kushikino veins. In general, high probability areas identified by FINDER are displaced from the main veins and cover not only the host andesite and the dacite-andesite that is about the same age as the Kushikino mineralization, but also younger sedimentary rocks, andesite, and tuff units east and northeast of Kushikino. The maps also display the same patterns observed near Kushikino, but with somewhat lower probabilities, about 1.5 km east of the old gold prospect, Hajima, and in a broad zone 2.5 km east-west and 1 km north-south, centered 2 km west of the old gold prospect, Yaeyama.
Effects of Speech Practice on Fast Mapping in Monolingual and Bilingual Speakers
ERIC Educational Resources Information Center
Kan, Pui Fong; Sadagopan, Neeraja; Janich, Lauren; Andrade, Marixa
2014-01-01
Purpose: This study examines the effects of the levels of speech practice on fast mapping in monolingual and bilingual speakers. Method: Participants were 30 English-speaking monolingual and 30 Spanish-English bilingual young adults. Each participant was randomly assigned to 1 of 3 practice conditions prior to the fast-mapping task: (a) intensive…
ERIC Educational Resources Information Center
Marchand, C.; d'Ivernois, J. F.; Assal, J. P.; Slama, G.; Hivon, R.
2002-01-01
Assesses whether concept maps used with diabetic patients could describe their cognitive structure, before and after having followed an educational program. Involves 10 diabetic patients and shows that concept maps can be a suitable technique to explore the type and organization of the patients' prior knowledge and to visualize what they have…
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
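A stripped-down sketch of the basic Bayesian criterion underlying such designs, namely the posterior probability that the response rate exceeds the target, using a conjugate Beta prior. The prior and the counts are illustrative assumptions, and the paper's predictive two-stage machinery additionally averages over prior predictive distributions of future data.

```python
from scipy import stats

def posterior_prob_exceeds(successes: int, n: int, target: float,
                           a_prior: float = 1.0, b_prior: float = 1.0) -> float:
    """Posterior probability that the true response rate exceeds `target`,
    under a Beta(a_prior, b_prior) prior and binomial data."""
    a_post = a_prior + successes
    b_post = b_prior + n - successes
    return 1.0 - stats.beta.cdf(target, a_post, b_post)

# e.g. 9 responses in 25 first-stage patients, target response rate 0.20
print(posterior_prob_exceeds(9, 25, 0.20))   # prints a posterior probability well above 0.9
```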
Damage Proxy Map from Interferometric Synthetic Aperture Radar Coherence
NASA Technical Reports Server (NTRS)
Webb, Frank H. (Inventor); Yun, Sang-Ho (Inventor); Fielding, Eric Jameson (Inventor); Simons, Mark (Inventor)
2015-01-01
A method, apparatus, and article of manufacture provide the ability to generate a damage proxy map. A master coherence map and a slave coherence map, for an area prior and subsequent to (including) a damage event are obtained. The slave coherence map is registered to the master coherence map. Pixel values of the slave coherence map are modified using histogram matching to provide a first histogram of the master coherence map that exactly matches a second histogram of the slave coherence map. A coherence difference between the slave coherence map and the master coherence map is computed to produce a damage proxy map. The damage proxy map is displayed with the coherence difference displayed in a visually distinguishable manner.
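A rough sketch of the two core steps (quantile-based histogram matching, then coherence differencing); this is a simplification under assumed array conventions, not the patented implementation:

```python
import numpy as np

def match_histograms(slave: np.ndarray, master: np.ndarray) -> np.ndarray:
    """Map slave coherence values so their empirical distribution matches the
    master's (simple quantile matching)."""
    s_flat = slave.ravel()
    order = np.argsort(s_flat)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(s_flat.size)
    quantiles = (ranks + 0.5) / s_flat.size
    matched = np.quantile(master.ravel(), quantiles)
    return matched.reshape(slave.shape)

def damage_proxy_map(master_coh: np.ndarray, slave_coh: np.ndarray) -> np.ndarray:
    """Coherence drop after the event: larger positive values flag likely damage."""
    slave_matched = match_histograms(slave_coh, master_coh)
    return master_coh - slave_matched
```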
NASA Astrophysics Data System (ADS)
Bevilacqua, Andrea; Neri, Augusto; Esposti Ongaro, Tomaso; Isaia, Roberto; Flandoli, Franco; Bisson, Marina
2016-04-01
Today hundreds of thousands of people live inside the Campi Flegrei caldera (Italy) and in the adjacent part of the city of Naples, making a future eruption of this volcano an event with huge consequences. Very high risks are associated with the occurrence of pyroclastic density currents (PDCs). Mapping of background or long-term PDC hazard in the area is a great challenge due to the unknown eruption time, scale and vent location of the next event as well as the complex dynamics of the flow over the caldera topography. This is additionally complicated by the remarkable epistemic uncertainty on the eruptive record, affecting the time of past events, the location of vents as well as the PDC areal extent estimates. First probability maps of PDC invasion were produced by combining a vent-opening probability map, statistical estimates concerning the eruptive scales and a Cox-type temporal model including self-excitement effects, based on the eruptive record of the last 15 kyr. Maps were produced using a Monte Carlo approach and adopting a simplified inundation model based on the "box model" integral approximation tested with 2D transient numerical simulations of flow dynamics. In this presentation we illustrate the independent effects of eruption scale, vent location and time of forecast of the next event. Specific focus was given to the remarkable differences between the eastern and western sectors of the caldera and their effects on the hazard maps. The analysis allowed us to identify areas with elevated probabilities of flow invasion as a function of the diverse assumptions made. With the quantification of some sources of uncertainty in relation to the system, we were also able to provide mean and percentile maps of PDC hazard levels.
Probabilistic self-organizing maps for continuous data.
Lopez-Rubio, Ezequiel
2010-10-01
The original self-organizing feature map did not define any probability distribution on the input space. However, the advantages of introducing probabilistic methodologies into self-organizing map models were soon evident. This has led to a wide range of proposals which reflect the current emergence of probabilistic approaches to computational intelligence. The underlying estimation theories behind them derive from two main lines of thought: the expectation maximization methodology and stochastic approximation methods. Here, we present a comprehensive view of the state of the art, with a unifying perspective of the involved theoretical frameworks. In particular, we examine the most commonly used continuous probability distributions, self-organization mechanisms, and learning schemes. Special emphasis is given to the connections among them and their relative advantages depending on the characteristics of the problem at hand. Furthermore, we evaluate their performance in two typical applications of self-organizing maps: classification and visualization.
Evaluation of two methods for using MR information in PET reconstruction
NASA Astrophysics Data System (ADS)
Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.
2013-02-01
Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information have been evaluated, and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation. This method has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries. CV values are 10% lower with the Bowsher prior than with boundaries. Both methods performed better in terms of MSE and CV than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image has, however, still to be evaluated.
The power of instructions: Proactive configuration of stimulus-response translation.
Meiran, Nachshon; Pereg, Maayan; Kessler, Yoav; Cole, Michael W; Braver, Todd S
2015-05-01
Humans are characterized by an especially highly developed ability to use instructions to prepare toward upcoming events; yet, it is unclear just how powerful instructions can be. Although prior work provides evidence that instructions can be sufficiently powerful to proactively program working memory to execute stimulus-response (S-R) translations, in a reflexlike fashion (intention-based reflexivity [IBR]), the results to date have been equivocal. To overcome this shortcoming, we developed, and tested in 4 studies, a novel paradigm (the NEXT paradigm) that isolates IBR effects even prior to first task execution. In each miniblock, participants received S-R mapping instructions for a new task. Prior to implementing this mapping, responses were required to advance through screens during a preparatory (NEXT) phase. When the NEXT response was incompatible with the instructed S-R mapping, interference (IBR effect) was observed. This NEXT compatibility effect and performance in the implementation (GO) trials barely changed when prior practice of a few trials was provided. Finally, a manipulation that encouraged preparation resulted in relatively durable NEXT compatibility effects (indicating durable preparatory efforts) coupled with improved GO performance (indicating the success of these efforts). Together, these findings establish IBR as a marker of instructed proactive control. (c) 2015 APA, all rights reserved.
Gradient-based reliability maps for ACM-based segmentation of hippocampus.
Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos
2014-04-01
Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, there exists a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process through global weighting of the two terms, again neglecting the spatially varying boundary properties and causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.
Setting the scene for SWOT: global maps of river reach hydrodynamic variables
NASA Astrophysics Data System (ADS)
Schumann, Guy J.-P.; Durand, Michael; Pavelsky, Tamlin; Lion, Christine; Allen, George
2017-04-01
Credible and reliable characterization of discharge from the Surface Water and Ocean Topography (SWOT) mission using the Manning-based algorithms needs a prior estimate constraining reach-scale channel roughness, base flow and river bathymetry. For some places, any one of those variables may exist locally or even regionally as a measurement, which is often only at a station, or sometimes as a basin-wide model estimate. However, to date none of those exist at the scale required for SWOT and thus need to be mapped at a continental scale. The prior estimates will be employed for producing initial discharge estimates, which will be used as starting-guesses for the various Manning-based algorithms, to be refined using the SWOT measurements themselves. A multitude of reach-scale variables were derived, including Landsat-based width, SRTM slope and accumulation area. As a possible starting point for building the prior database of low flow, river bathymetry and channel roughness estimates, we employed a variety of sources, including data from all GRDC records, simulations from the long-time runs of the global water balance model (WBM), and reach-based calculations from hydraulic geometry relationships as well as Manning's equation. Here, we present the first global maps of this prior database with some initial validation, caveats and prospective uses.
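For reference, the reach-scale discharge relation that the prior database feeds is Manning's equation; a small sketch with purely illustrative numbers:

```python
def manning_discharge(n: float, area: float, hydraulic_radius: float,
                      slope: float) -> float:
    """Discharge from Manning's equation, Q = (1/n) * A * R^(2/3) * S^(1/2),
    in SI units (m^3/s)."""
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical reach: n = 0.035, A = 450 m^2, R = 3.2 m, S = 1e-4
print(manning_discharge(0.035, 450.0, 3.2, 1e-4))   # ~279 m^3/s
```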
VizieR Online Data Catalog: Wide binaries in Tycho-Gaia: search method (Andrews+, 2017)
NASA Astrophysics Data System (ADS)
Andrews, J. J.; Chaname, J.; Agueros, M. A.
2017-11-01
Our catalogue of wide binaries identified in the Tycho-Gaia Astrometric Solution catalogue. The Gaia source IDs, Tycho IDs, astrometry, posterior probabilities for both the log-flat prior and power-law prior models, and angular separation are presented. (1 data file).
Kim, Yusung; Tomé, Wolfgang A.
2010-01-01
Summary Voxel based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated, to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT did yield a more uniform isodose map to the entire PTV while selective boosting did result in a nonuniform isodose map. However, when employing voxel based iso-TCP maps selective boosting exhibited a more uniform tumor control probability map compared to what could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies are discussed. We believe as the need for functional image guided treatment planning grows, voxel based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans. PMID:21151734
Halperin, Daniel M.; Lee, J. Jack; Dagohoy, Cecile Gonzales; Yao, James C.
2015-01-01
Purpose Despite a robust clinical trial enterprise and encouraging phase II results, the vast minority of oncologic drugs in development receive regulatory approval. In addition, clinicians occasionally make therapeutic decisions based on phase II data. Therefore, clinicians, investigators, and regulatory agencies require improved understanding of the implications of positive phase II studies. We hypothesized that prior probability of eventual drug approval was significantly different across GI cancers, with substantial ramifications for the predictive value of phase II studies. Methods We conducted a systematic search of phase II studies conducted between 1999 and 2004 and compared studies against US Food and Drug Administration and National Cancer Institute databases of approved indications for drugs tested in those studies. Results In all, 317 phase II trials were identified and followed for a median of 12.5 years. Following completion of phase III studies, eventual new drug application approval rates varied from 0% (zero of 45) in pancreatic adenocarcinoma to 34.8% (24 of 69) for colon adenocarcinoma. The proportion of drugs eventually approved was correlated with the disease under study (P < .001). The median type I error for all published trials was 0.05, and the median type II error was 0.1, with minimal variation. By using the observed median type I error for each disease, phase II studies have positive predictive values ranging from less than 1% to 90%, depending on primary site of the cancer. Conclusion Phase II trials in different GI malignancies have distinct prior probabilities of drug approval, yielding quantitatively and qualitatively different predictive values with similar statistical designs. Incorporation of prior probability into trial design may allow for more effective design and interpretation of phase II studies. PMID:26261263
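The abstract's positive-predictive-value argument can be written down directly with Bayes' rule; the sketch below uses the reported median type I error (0.05) and type II error (0.1) and treats eventual approval as the ground truth, which is a simplification of the paper's analysis:

```python
def phase2_positive_predictive_value(prior: float, alpha: float = 0.05,
                                     power: float = 0.90) -> float:
    """Probability that a 'positive' phase II trial corresponds to a drug that
    is eventually approved, given the disease-specific prior probability of
    approval, the trial's type I error (alpha) and its power (1 - type II error)."""
    true_pos = power * prior
    false_pos = alpha * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

# Disease-specific priors from the abstract: near 0 (pancreatic) to 0.348 (colon)
print(phase2_positive_predictive_value(0.01))    # ~0.15
print(phase2_positive_predictive_value(0.348))   # ~0.91
```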
Advanced prior modeling for 3D bright field electron tomography
NASA Astrophysics Data System (ADS)
Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.
2015-03-01
Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear as to how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
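A minimal sketch of the plug-and-play decoupling (an ADMM-style loop in which an off-the-shelf denoiser replaces the prior's proximal step); the Gaussian-blur forward operator and the median-filter denoiser are toy stand-ins for the tomographic projector and 3D non-local means, and all parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def forward(x):
    """Toy self-adjoint linear forward operator (a blur standing in for projection)."""
    return gaussian_filter(x, sigma=2.0)

def data_fit_prox(z, y, rho, n_steps=20, step=0.5):
    """Approximate prox of 0.5*||A x - y||^2 + (rho/2)*||x - z||^2
    via gradient descent (A is self-adjoint here, so A^T = A)."""
    x = z.copy()
    for _ in range(n_steps):
        grad = forward(forward(x) - y) + rho * (x - z)
        x -= step * grad
    return x

def pnp_admm(y, n_iter=30, rho=0.5):
    """Plug-and-play ADMM: the prior term is an arbitrary plugged-in denoiser."""
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(n_iter):
        x = data_fit_prox(v - u, y, rho)      # likelihood / data-fit step
        v = median_filter(x + u, size=3)      # plugged-in denoiser as the prior
        u = u + x - v                         # dual (consensus) update
    return v
```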
NASA Astrophysics Data System (ADS)
Tablazon, J.; Caro, C. V.; Lagmay, A. M. F.; Briones, J. B. L.; Dasallas, L.; Lapidez, J. P.; Santiago, J.; Suarez, J. K.; Ladiero, C.; Gonzalo, L. A.; Mungcal, M. T. F.; Malano, V.
2015-03-01
A storm surge is the sudden rise of sea water over the astronomical tides, generated by an approaching storm. This event poses a major threat to the Philippine coastal areas, as manifested by Typhoon Haiyan on 8 November 2013. This hydro-meteorological hazard is one of the main reasons for the high number of casualties due to the typhoon, with 6300 deaths. It became evident that the need to develop a storm surge inundation map is of utmost importance. To develop these maps, the Nationwide Operational Assessment of Hazards under the Department of Science and Technology (DOST-Project NOAH) simulated historical tropical cyclones that entered the Philippine Area of Responsibility. The Japan Meteorological Agency storm surge model was used to simulate storm surge heights. The frequency distribution of the maximum storm surge heights was calculated using simulation results of tropical cyclones under a specific public storm warning signal (PSWS) that passed through a particular coastal area. This determines the storm surge height corresponding to a given probability of occurrence. The storm surge heights from the model were added to the maximum astronomical tide data from WXTide software. The team then created maps of inundation for a specific PSWS using the probability of exceedance derived from the frequency distribution. Buildings and other structures were assigned a probability of exceedance depending on their occupancy category, i.e., 1% probability of exceedance for critical facilities, 10% probability of exceedance for special occupancy structures, and 25% for standard occupancy and miscellaneous structures. The maps produced show the storm-surge-vulnerable areas in Metro Manila, illustrated by the flood depth of up to 4 m and extent of up to 6.5 km from the coastline. This information can help local government units in developing early warning systems, disaster preparedness and mitigation plans, vulnerability assessments, risk-sensitive land use plans, shoreline defense efforts, and coastal protection measures. These maps can also determine the best areas to build critical structures, or at least determine the level of protection of these structures should they be built in hazard areas. Moreover, these will support the local government units' mandate to raise public awareness, disseminate information about storm surge hazards, and implement appropriate countermeasures for a given PSWS.
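A small sketch of how an exceedance-based surge height can be read off a set of simulated maxima for one coastal location and PSWS (the heights below are invented; Project NOAH's processing, including the tide addition, is more involved):

```python
import numpy as np

def height_for_exceedance(max_surge_heights, exceedance_prob):
    """Surge height whose probability of being exceeded, within the simulated
    set of cyclone maxima for one PSWS, equals `exceedance_prob`
    (i.e. the empirical (1 - p) quantile)."""
    heights = np.sort(np.asarray(max_surge_heights))
    return np.quantile(heights, 1.0 - exceedance_prob)

# Hypothetical simulated surge maxima (m) for one coastal cell under one PSWS
sims = [0.4, 0.7, 0.9, 1.1, 1.3, 1.6, 2.0, 2.4, 3.1, 3.8]
for p, label in [(0.01, "critical"), (0.10, "special occupancy"), (0.25, "standard")]:
    print(label, height_for_exceedance(sims, p))
```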
NASA Astrophysics Data System (ADS)
Metzger, Andrew; Benavides, Amanda; Nopoulos, Peg; Magnotta, Vincent
2016-03-01
The goal of this project was to develop two age appropriate atlases (neonatal and one year old) that account for the rapid growth and maturational changes that occur during early development. Tissue maps from this age group were initially created by manually correcting the resulting tissue maps after applying an expectation maximization (EM) algorithm and an adult atlas to pediatric subjects. The EM algorithm classified each voxel into one of ten possible tissue types including several subcortical structures. This was followed by a novel level set segmentation designed to improve differentiation between distal cortical gray matter and white matter. To minimize the required manual corrections, the adult atlas was registered to the pediatric scans using high-dimensional, symmetric image normalization (SyN) registration. The subject images were then mapped to an age specific atlas space, again using SyN registration, and the resulting transformation applied to the manually corrected tissue maps. The individual maps were averaged in the age specific atlas space and blurred to generate the age appropriate anatomical priors. The resulting anatomical priors were then used by the EM algorithm to re-segment the initial training set as well as an independent testing set. The results from the adult and age-specific anatomical priors were compared to the manually corrected results. The age appropriate atlas provided superior results as compared to the adult atlas. The image analysis pipeline used in this work was built using the open source software package BRAINSTools.
The Impact of Concept Mapping on the Process of Problem-Based Learning
ERIC Educational Resources Information Center
Zwaal, Wichard; Otting, Hans
2012-01-01
A concept map is a graphical tool to activate and elaborate on prior knowledge, to support problem solving, promote conceptual thinking and understanding, and to organize and memorize knowledge. The aim of this study is to determine if the use of concept mapping (CM) in a problem-based learning (PBL) curriculum enhances the PBL process. The paper…
ERIC Educational Resources Information Center
Koc, Mustafa
2012-01-01
This study explored (a) pre-service teachers' perceptions of using concept mapping (CM) in one of their pedagogical courses, (b) the predictive power of such implementation in course achievement, and (c) the role of prior experience with CM, type of mapping, and gender on their perceptions and performances in CM and achievement. The subjects were…
Duels where both marksmen 'home' or 'zero in' on one another are here considered, and the effect of this on the win probability is determined. It is...leads to win probabilities that can be straightforwardly evaluated. Maximum-likelihood estimation of the hit probability and homing from field data is outlined. The solutions of the duels are displayed as contour maps. (Author)
NASA Technical Reports Server (NTRS)
Fowler, John W.; Aumann, H. H.
1994-01-01
The High-Resolution image construction program (HiRes) used at IPAC is based on the Maximum Correlation Method. After HiRes intensity images are constructed from IRAS data, additional images are needed to aid in scientific interpretation. Some of the images that are available for this purpose show the fitting noise, estimates of the achieved resolution, and detector track maps. Two methods have been developed for creating color maps without discarding any more spatial information than absolutely necessary: the 'cross-band simulation' and 'prior-knowledge' methods. These maps are demonstrated using the survey observations of a 2 x 2 degree field centered on M31. Prior knowledge may also be used to achieve super-resolution and to suppress ringing around bright point sources observed against background emission. Tools to suppress noise spikes and for accelerating convergence are also described.
Lowry, Tina; Vreeman, Daniel J; Loo, George T; Delman, Bradley N; Thum, Frederick L; Slovis, Benjamin H; Shapiro, Jason S
2017-01-01
Background A health information exchange (HIE)–based prior computed tomography (CT) alerting system may reduce avoidable CT imaging by notifying ordering clinicians of prior relevant studies when a study is ordered. For maximal effectiveness, a system would alert not only for prior same CTs (exams mapped to the same code from an exam name terminology) but also for similar CTs (exams mapped to different exam name terminology codes but in the same anatomic region) and anatomically proximate CTs (exams in adjacent anatomic regions). Notification of previous same studies across an HIE requires mapping of local site CT codes to a standard terminology for exam names (such as Logical Observation Identifiers Names and Codes [LOINC]) to show that two studies with different local codes and descriptions are equivalent. Notifying of prior similar or proximate CTs requires an additional mapping of exam codes to anatomic regions, ideally coded by an anatomic terminology. Several anatomic terminologies exist, but no prior studies have evaluated how well they would support an alerting use case. Objective The aim of this study was to evaluate the fitness of five existing standard anatomic terminologies to support similar or proximate alerts of an HIE-based prior CT alerting system. Methods We compared five standard anatomic terminologies (Foundational Model of Anatomy, Systematized Nomenclature of Medicine Clinical Terms, RadLex, LOINC, and LOINC/Radiological Society of North America [RSNA] Radiology Playbook) to an anatomic framework created specifically for our use case (Simple ANatomic Ontology for Proximity or Similarity [SANOPS]), to determine whether the existing terminologies could support our use case without modification. On the basis of an assessment of optimal terminology features for our purpose, we developed an ordinal anatomic terminology utility classification. We mapped samples of 100 random and the 100 most frequent LOINC CT codes to anatomic regions in each terminology, assigned utility classes for each mapping, and statistically compared each terminology’s utility class rankings. We also constructed seven hypothetical alerting scenarios to illustrate the terminologies’ differences. Results Both RadLex and the LOINC/RSNA Radiology Playbook anatomic terminologies ranked significantly better (P<.001) than the other standard terminologies for the 100 most frequent CTs, but no terminology ranked significantly better than any other for 100 random CTs. Hypothetical scenarios illustrated instances where no standard terminology would support appropriate proximate or similar alerts, without modification. Conclusions LOINC/RSNA Radiology Playbook and RadLex’s anatomic terminologies appear well suited to support proximate or similar alerts for commonly ordered CTs, but for less commonly ordered tests, modification of the existing terminologies with concepts and relations from SANOPS would likely be required. Our findings suggest SANOPS may serve as a framework for enhancing anatomic terminologies in support of other similar use cases. PMID:29242174
Crowdsourcing the creation of image segmentation algorithms for connectomics.
Arganda-Carreras, Ignacio; Turaga, Srinivas C; Berger, Daniel R; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D; Bas, Erhan; Uzunbas, Mustafa G; Cardona, Albert; Schindelin, Johannes; Seung, H Sebastian
2015-01-01
To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
Michailidou, M; Melas, IN; Messinis, DE; Klamt, S; Alexopoulos, LG; Kolisis, FN; Loutrari, H
2015-01-01
Chronic inflammation is associated with the development of human hepatocellular carcinoma (HCC), an essentially incurable cancer. Anti-inflammatory nutraceuticals have emerged as promising candidates against HCC, yet the mechanisms through which they influence the cell signaling machinery to impose phenotypic changes remain unresolved. Herein we implemented a systems biology approach in HCC cells, based on the integration of cytokine release and phosphoproteomic data from high-throughput xMAP Luminex assays, to elucidate the mode of action of prominent nutraceuticals in terms of topology alterations of HCC-specific signaling networks. An optimization algorithm based on SigNetTrainer, an Integer Linear Programming formulation, was applied to construct networks linking signal transduction to cytokine secretion by combining prior knowledge of protein connectivity with proteomic data. Our analysis identified the most probable target phosphoproteins of the interrogated compounds and predicted translational control as a new mechanism underlying their anticytokine action. The induced alterations were corroborated by inhibition of HCC-driven angiogenesis and metastasis. PMID:26225263
NASA Astrophysics Data System (ADS)
Groeneveld, G.; de Puit, M.; Bleay, S.; Bradshaw, R.; Francese, S.
2015-06-01
Despite the proven capabilities of Matrix Assisted Laser Desorption Ionisation Mass Spectrometry (MALDI MS) in laboratory settings, research is still needed to integrate this technique into current forensic fingerprinting practice. Optimised protocols enabling the compatible application of MALDI to developed fingermarks will allow additional intelligence to be gathered around a suspect’s lifestyle and activities prior to the deposition of their fingermarks while committing a crime. The detection and mapping of illicit drugs and metabolites in latent fingermarks would provide intelligence that is beneficial for both police investigations and court cases. This study investigated MALDI MS detection and mapping capabilities for a large range of drugs of abuse and their metabolites in fingermarks; the detection and mapping of a mixture of these drugs in marks, with and without prior development with cyanoacrylate fuming or Vacuum Metal Deposition, was also examined. Our findings indicate the versatility of MALDI technology and its ability to retrieve chemical intelligence either by detecting the compounds investigated or by using their ion signals to reconstruct 2D maps of fingermark ridge details.
Mapping Irrigated Areas in the Tunisian Semi-Arid Context with Landsat Thermal and VNIR Data Imagery
NASA Astrophysics Data System (ADS)
Rivalland, Vincent; Drissi, Hsan; Simonneaux, Vincent; Tardy, Benjamin; Boulet, Gilles
2016-04-01
Our study area is the Merguellil semi-arid irrigated plain in Tunisia, where water resource management is a major concern for governmental institutions, farmer communities and, more generally, for the environment. Indeed, groundwater abstraction for irrigation is the primary cause of aquifer depletion. Moreover, unregistered pumping practices are widespread and very difficult for authorities to monitor. Thus, the identification of areas actually irrigated across the whole plain is of major interest. In order to map the irrigated areas, we tested a methodology based on Landsat 7 and 8 Land Surface Temperature (LST) data derived from the atmospherically corrected thermal band using the LANDARTs tool, together with NDVI vegetation indices obtained from the visible and near-infrared (VNIR) bands. For each Landsat acquisition during the years 2012 to 2014, we computed a probability of irrigation based on the location of the pixel in the NDVI-LST space. Basically, for a given NDVI value, the cooler the pixel, the higher its probability of being irrigated. For each date, pixels were classified into seven bins of irrigation probability ranges. Pixel probabilities for each date were then summed over the study period, resulting in a probability map of irrigation. Comparison with ground data shows a consistent identification of irrigated plots and supports the potential operational interest of the method. However, results were hampered by the low Landsat LST data availability due to clouds and the inadequate revisit frequency of the sensor.
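A simplified sketch of the per-date scoring and binning step is shown below. The exact rule used by the authors is not given in the abstract, so the NDVI-interval ranking, the seven-class binning, and all thresholds here are illustrative assumptions of the same general idea: for a given NDVI level, cooler pixels get higher irrigation probability.

```python
import numpy as np

def irrigation_class_map(ndvi, lst, n_bins=7):
    """For one acquisition date, score each pixel by how cool it is relative
    to other pixels with similar NDVI, then bin the scores into n_bins
    ordinal irrigation-probability classes (higher class = more likely irrigated)."""
    ndvi, lst = np.asarray(ndvi, float), np.asarray(lst, float)
    score = np.zeros_like(lst)
    edges = np.linspace(ndvi.min(), ndvi.max(), 11)   # 10 NDVI intervals
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (ndvi >= lo) & (ndvi <= hi)
        if m.sum() > 1:
            # coolest pixel in its NDVI interval -> score 1, hottest -> score 0
            ranks = lst[m].argsort().argsort()
            score[m] = 1.0 - ranks / (m.sum() - 1)
    # map continuous scores to n_bins ordinal probability classes (0..n_bins-1)
    return np.digitize(score, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])

# Summing the per-date class maps over a season would give the final
# irrigation probability map described in the abstract.
```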
Breininger, David R; Breininger, Robert D; Hall, Carlton R
2017-02-01
Seagrasses are the foundation of many coastal ecosystems and are in global decline because of anthropogenic impacts. For the Indian River Lagoon (Florida, U.S.A.), we developed competing multistate statistical models to quantify how environmental factors (surrounding land use, water depth, and time [year]) influenced the variability of seagrass state dynamics from 2003 to 2014 while accounting for time-specific detection probabilities that quantified our ability to determine seagrass state at particular locations and times. We classified seagrass states (presence or absence) at 764 points with geographic information system maps for years when seagrass maps were available and with aerial photographs when seagrass maps were not available. We used 4 categories (all conservation, mostly conservation, mostly urban, urban) to describe surrounding land use within sections of lagoonal waters, usually demarcated by land features that constricted these waters. The best models predicted that surrounding land use, depth, and year would affect transition and detection probabilities. Sections of the lagoon bordered by urban areas had the least stable seagrass beds and lowest detection probabilities, especially after a catastrophic seagrass die-off linked to an algal bloom. Sections of the lagoon bordered by conservation lands had the most stable seagrass beds, which supports watershed conservation efforts. Our results show that a multistate approach can empirically estimate state-transition probabilities as functions of environmental factors while accounting for state-dependent differences in seagrass detection probabilities as part of the overall statistical inference procedure. © 2016 Society for Conservation Biology.
Klausner, Z; Klement, E; Fattal, E
2018-02-01
Viruses that affect the health of humans and farm animals can spread over long distances via atmospheric mechanisms. The phenomenon of atmospheric long-distance dispersal (LDD) is associated with severe consequences because it may introduce pathogens into new areas. The introduction of new pathogens to Israel has been attributed to LDD events numerous times. This provided the motivation for this study, which aimed to identify all the locations in the eastern Mediterranean that may serve as sources for pathogen incursion into Israel via LDD. This aim was achieved by calculating source-receptor relationship probability maps. These maps describe the probability that an infected vector or viral aerosol, once airborne, will have an atmospheric route that can transport it to a distant location. The resultant probability maps demonstrate a seasonal tendency in the probability that specific areas serve as sources for pathogen LDD into Israel. Specifically, for Cyprus the relevant season is summer; southern Turkey and the Greek islands of Crete, Karpathos and Rhodes are associated with spring and summer; lower Egypt and Jordan may serve as sources all year round, except the summer months. The method used in this study can easily be applied to any other geographic region. The importance of this study is the ability to provide a climatologically valid and accurate risk assessment tool to support long-term decisions regarding preparatory actions for future outbreaks long before a specific outbreak occurs. © 2017 Blackwell Verlag GmbH.
Evansville Area Earthquake Hazards Mapping Project (EAEHMP) - Progress Report, 2008
Boyd, Oliver S.; Haase, Jennifer L.; Moore, David W.
2009-01-01
Maps of surficial geology, deterministic and probabilistic seismic hazard, and liquefaction potential index have been prepared by various members of the Evansville Area Earthquake Hazard Mapping Project for seven quadrangles in the Evansville, Indiana, and Henderson, Kentucky, metropolitan areas. The surficial geologic maps feature 23 types of surficial geologic deposits, artificial fill, and undifferentiated bedrock outcrop and include alluvial and lake deposits of the Ohio River valley. Probabilistic and deterministic seismic hazard and liquefaction hazard mapping is made possible by drawing on a wealth of information including surficial geologic maps, water well logs, and in-situ testing profiles using the cone penetration test, standard penetration test, down-hole shear wave velocity tests, and seismic refraction tests. These data were compiled and collected with contributions from the Indiana Geological Survey, Kentucky Geological Survey, Illinois State Geological Survey, United States Geological Survey, and Purdue University. Hazard map products are in progress and are expected to be completed by the end of 2009, with a public roll out in early 2010. Preliminary results suggest that there is a 2 percent probability that peak ground accelerations of about 0.3 g will be exceeded in much of the study area within 50 years, which is similar to the 2002 USGS National Seismic Hazard Maps for a firm rock site value. Accelerations as high as 0.4-0.5 g may be exceeded along the edge of the Ohio River basin. Most of the region outside of the river basin has a low liquefaction potential index (LPI), where the probability that LPI is greater than 5 (that is, there is a high potential for liquefaction) for a M7.7 New Madrid type event is only 20-30 percent. Within the river basin, most of the region has high LPI, where the probability that LPI is greater than 5 for a New Madrid type event is 80-100 percent.
Rupert, Michael G.
2003-01-01
Draft Federal regulations may require that each State develop a State Pesticide Management Plan for the herbicides atrazine, alachlor, metolachlor, and simazine. Maps were developed that the State of Colorado could use to predict the probability of detecting atrazine and desethyl-atrazine (a breakdown product of atrazine) in ground water in Colorado. These maps can be incorporated into the State Pesticide Management Plan and can help provide a sound hydrogeologic basis for atrazine management in Colorado. Maps showing the probability of detecting elevated nitrite plus nitrate as nitrogen (nitrate) concentrations in ground water in Colorado also were developed because nitrate is a contaminant of concern in many areas of Colorado. Maps showing the probability of detecting atrazine and(or) desethyl-atrazine (atrazine/DEA) at or greater than concentrations of 0.1 microgram per liter and nitrate concentrations in ground water greater than 5 milligrams per liter were developed as follows: (1) Ground-water quality data were overlaid with anthropogenic and hydrogeologic data using a geographic information system to produce a data set in which each well had corresponding data on atrazine use, fertilizer use, geology, hydrogeomorphic regions, land cover, precipitation, soils, and well construction. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Relations were observed between ground-water quality and the percentage of land-cover categories within circular regions (buffers) around wells. Several buffer sizes were evaluated; the buffer size that provided the strongest relation was selected for use in the logistic regression models. (3) Relations between concentrations of atrazine/DEA and nitrate in ground water and atrazine use, fertilizer use, geology, hydrogeomorphic regions, land cover, precipitation, soils, and well-construction data were evaluated, and several preliminary multivariate models with various combinations of independent variables were constructed. (4) The multivariate models that best predicted the presence of atrazine/DEA and elevated concentrations of nitrate in ground water were selected. (5) The accuracy of the multivariate models was confirmed by validating the models with an independent set of ground-water quality data. (6) The multivariate models were entered into a geographic information system and the probability maps were constructed.
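The workflow above (overlay well data with explanatory layers, fit a logistic regression, then apply the fitted model to gridded data) can be illustrated with a minimal, generic sketch. The explanatory variables, training values, and grid cells below are made up for illustration and are not the study's data; this simply shows the logistic-regression-to-probability-map step in principle.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative well-level training data: each row pairs a detection flag with
# explanatory variables (e.g. atrazine use, fertilizer use, a soil attribute)
# sampled within a buffer around the well.
X_train = np.array([[1.2, 30.0, 0.4],
                    [0.1,  5.0, 0.9],
                    [2.5, 60.0, 0.2],
                    [0.3, 10.0, 0.8],
                    [1.8, 45.0, 0.3],
                    [0.2,  8.0, 0.7]])
y_train = np.array([1, 0, 1, 0, 1, 0])   # 1 = atrazine/DEA detected

model = LogisticRegression().fit(X_train, y_train)

# Applying the fitted model to gridded explanatory data yields the probability
# map: one predicted detection probability per grid cell.
grid_cells = np.array([[0.9, 25.0, 0.5],
                       [2.1, 55.0, 0.25]])
print(model.predict_proba(grid_cells)[:, 1])
```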
Johnson, Ronald C.
2012-01-01
During the 1960s, 1970s, and 1980s, the U.S. Geological Survey mapped the entire area underlain by oil shale of the Eocene Green River Formation in the Piceance Basin of western Colorado. The Piceance Basin contains the largest known oil shale deposit in the world, with an estimated 1.53 trillion barrels of oil in place and as much as 400,000 barrels of oil per acre. This report places the sixty-nine 7½-minute geologic quadrangle maps and one 15-minute quadrangle map published during this period into a comprehensive time-stratigraphic framework based on the alternating rich and lean oil shale zones. The quadrangles are placed in their respective regional positions on one large stratigraphic chart so that tracking the various stratigraphic unit names that have been applied can be followed between adjacent quadrangles. Members of the Green River Formation were defined prior to the detailed mapping, and many inconsistencies and correlation problems had to be addressed as mapping progressed. As a result, some of the geologic units that were defined prior to mapping were modified or discarded. The extensive body of geologic data provided by the detailed quadrangle maps contributes to a better understanding of the distribution and characteristics of the oil shale-bearing rocks across the Piceance Basin.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
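For readers unfamiliar with the terminology, a schematic statement of a Lebesgue-type decomposition in this setting is sketched below. The notation is illustrative and not quoted from the paper; it simply records the general shape of such a decomposition for completely positive maps.

```latex
% Schematic Lebesgue-type decomposition for completely positive maps
% \varphi, \psi : \mathcal{A} \to B(\mathcal{H}) (notation illustrative):
\varphi \;=\; \varphi_{\mathrm{ac}} \;+\; \varphi_{\mathrm{s}},
\qquad \varphi_{\mathrm{ac}} \ll \psi \ (\text{absolutely continuous w.r.t.\ } \psi),
\qquad \varphi_{\mathrm{s}} \perp \psi \ (\text{singular w.r.t.\ } \psi),
% where domination is understood in the completely positive order:
% \theta \le \lambda\,\psi \ (\text{for some } \lambda > 0) \text{ means that }
% \lambda\,\psi - \theta \text{ is again completely positive.}
```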
Rupert, Michael G.
1998-01-01
Draft Federal regulations may require that each State develop a State Pesticide Management Plan for the herbicides atrazine, alachlor, cyanazine, metolachlor, and simazine. This study developed maps that the Idaho State Department of Agriculture might use to predict the probability of detecting atrazine and desethyl-atrazine (a breakdown product of atrazine) in ground water in the Idaho part of the upper Snake River Basin. These maps can be incorporated in the State Pesticide Management Plan and help provide a sound hydrogeologic basis for atrazine management in the study area. Maps showing the probability of detecting atrazine/desethyl-atrazine in ground water were developed as follows: (1) Ground-water monitoring data were overlaid with hydrogeologic and anthropogenic data using a geographic information system to produce a data set in which each well had corresponding data on atrazine use, depth to ground water, geology, land use, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Individual (univariate) relations between atrazine/desethyl-atrazine in ground water and atrazine use, depth to ground water, geology, land use, precipitation, soils, and well depth data were evaluated to identify those independent variables significantly related to atrazine/desethyl-atrazine detections. (3) Several preliminary multivariate models with various combinations of independent variables were constructed. (4) The multivariate models which best predicted the presence of atrazine/desethyl-atrazine in ground water were selected. (5) The multivariate models were entered into the geographic information system and the probability maps were constructed. Two models which best predicted the presence of atrazine/desethyl-atrazine in ground water were selected; one with and one without atrazine use. Correlations of the predicted probabilities of atrazine/desethyl-atrazine in ground water with the percent of actual detections were good; r-squared values were 0.91 and 0.96, respectively. Models were verified using a second set of ground-water quality data. Verification showed that wells with water containing atrazine/desethyl-atrazine had significantly higher probability ratings than wells with water containing no atrazine/desethyl-atrazine (p < 0.002). Logistic regression also was used to develop a preliminary model to predict the probability of nitrite plus nitrate as nitrogen concentrations greater than background levels of 2 milligrams per liter. A direct comparison between the atrazine/desethyl-atrazine and nitrite plus nitrate as nitrogen probability maps was possible because the same ground-water monitoring, hydrogeologic, and anthropogenic data were used to develop both maps. Land use, precipitation, soil hydrologic group, and well depth were significantly related with atrazine/desethyl-atrazine detections. Depth to water, land use, and soil drainage were significantly related with elevated nitrite plus nitrate as nitrogen concentrations. The differences between atrazine/desethyl-atrazine and nitrite plus nitrate as nitrogen relations were attributed to differences in chemical behavior of these compounds in the environment and possibly to differences in the extent of use and rates of their application.
Distinguishability notion based on Wootters statistical distance: Application to discrete maps
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.
2017-08-01
We study the distinguishability notion given by Wootters for states represented by probability density functions. A particular feature of this notion is that it can also be used to define a statistical distance in chaotic unidimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄, we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. Also, we give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in the phase space. We illustrate the results in the case of the logistic and the circle maps numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the associated metric space to arbitrary probability distributions (not necessarily invariant densities) is given, along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
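The underlying Wootters statistical distance between two discrete distributions is the angle arccos(Σᵢ √(pᵢ qᵢ)). The sketch below computes it for histogram approximations of logistic-map densities obtained from two different initial conditions; it illustrates only this basic distinguishability notion, not the paper's derived metric d̄ for maps, and all parameter values are illustrative.

```python
import numpy as np

def wootters_distance(p, q):
    """Wootters statistical distance between two discrete distributions:
    arccos of the Bhattacharyya coefficient sum_i sqrt(p_i * q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.arccos(np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0)))

def logistic_histogram(r, x0, n_iter=100_000, n_bins=100, burn_in=1_000):
    """Histogram approximation to the invariant density of the logistic map."""
    x = x0
    samples = np.empty(n_iter)
    for i in range(burn_in + n_iter):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            samples[i - burn_in] = x
    hist, _ = np.histogram(samples, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Distance between histogram densities generated from two initial conditions.
p = logistic_histogram(4.0, 0.2)
q = logistic_histogram(4.0, 0.7)
print(wootters_distance(p, q))
```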
Confined dense circumstellar material surrounding a regular type II supernova
NASA Astrophysics Data System (ADS)
Yaron, O.; Perley, D. A.; Gal-Yam, A.; Groh, J. H.; Horesh, A.; Ofek, E. O.; Kulkarni, S. R.; Sollerman, J.; Fransson, C.; Rubin, A.; Szabo, P.; Sapir, N.; Taddia, F.; Cenko, S. B.; Valenti, S.; Arcavi, I.; Howell, D. A.; Kasliwal, M. M.; Vreeswijk, P. M.; Khazov, D.; Fox, O. D.; Cao, Y.; Gnat, O.; Kelly, P. L.; Nugent, P. E.; Filippenko, A. V.; Laher, R. R.; Wozniak, P. R.; Lee, W. H.; Rebbapragada, U. D.; Maguire, K.; Sullivan, M.; Soumagnac, M. T.
2017-02-01
With the advent of new wide-field, high-cadence optical transient surveys, our understanding of the diversity of core-collapse supernovae has grown tremendously in the last decade. However, the pre-supernova evolution of massive stars, which sets the physical backdrop to these violent events, is theoretically not well understood and difficult to probe observationally. Here we report the discovery of the supernova iPTF 13dqy = SN 2013fs a mere ~3 h after explosion. Our rapid follow-up observations, which include multiwavelength photometry and extremely early (beginning at ~6 h post-explosion) spectra, map the distribution of material in the immediate environment (<~1015 cm) of the exploding star and establish that it was surrounded by circumstellar material (CSM) that was ejected during the final ~1 yr prior to explosion at a high rate, around 10-3 solar masses per year. The complete disappearance of flash-ionized emission lines within the first several days requires that the dense CSM be confined to within <~1015 cm, consistent with radio non-detections at 70-100 days. The observations indicate that iPTF 13dqy was a regular type II supernova; thus, the finding that the probable red supergiant progenitor of this common explosion ejected material at a highly elevated rate just prior to its demise suggests that pre-supernova instabilities may be common among exploding massive stars.
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma-Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and of the evaluated probability of each type of event, matrices of input parameters for numerical simulation were constructed. The multi-dimensional input matrices include the main parameters controlling pyroclast transport, deposition and dispersion, as well as the set of possible eruptive vents used in the simulation program. The probabilistic hazard maps provide, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events per thousand years are typical of most areas within about 10 km of the volcanoes, including Naples. The results provide constraints for emergency plans in the Neapolitan area.
NASA Astrophysics Data System (ADS)
Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.
2016-02-01
Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general, a single possible path is not of interest, but only the probabilities that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the ability to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real-time, view specific parameter settings or simulation models and move between different spatial or temporal regions without delay. In addition our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.
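The bin-and-merge idea behind the piecewise tracing can be sketched as follows. The velocity-field interface, member weights, and bin indexing are placeholders, and the real system runs on the GPU with more elaborate seeding; this is only a schematic of why the particle count stays bounded per cycle.

```python
from collections import defaultdict

def trace_one_cycle(particles, ensemble_velocities, dt, bin_size):
    """Advance probability-weighted particles through every ensemble member
    for one assimilation cycle, then merge particles by spatial bin.

    particles: dict mapping (i, j) bin index -> accumulated probability.
    ensemble_velocities: list of (vel_field, member_weight) pairs, where
    vel_field(x, y) returns a (u, v) displacement rate (placeholder)."""
    merged = defaultdict(float)
    for (i, j), prob in particles.items():
        # seed one representative particle at the bin centre
        x, y = (i + 0.5) * bin_size, (j + 0.5) * bin_size
        for vel_field, weight in ensemble_velocities:
            u, v = vel_field(x, y)
            xn, yn = x + u * dt, y + v * dt
            # particles ending in the same bin are treated as a single particle
            merged[(int(xn // bin_size), int(yn // bin_size))] += prob * weight
    return dict(merged)

# Starting from a single seed bin with probability 1 and calling
# trace_one_cycle once per assimilation cycle yields a probability map over
# bins without the exponential blow-up in the number of individual paths.
```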
Predicting the spatial extent of liquefaction from geospatial and earthquake specific parameters
Zhu, Jing; Baise, Laurie G.; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Deodatis, George; Ellingwood, Bruce R.; Frangopol, Dan M.
2014-01-01
The spatially extensive damage from the 2010-2011 Christchurch, New Zealand earthquake events is a reminder of the need for liquefaction hazard maps for anticipating damage from future earthquakes. Liquefaction hazard mapping has traditionally relied on detailed geologic mapping and expensive site studies. These traditional techniques are difficult to apply globally for rapid response or loss estimation. We have developed a logistic regression model to predict the probability of liquefaction occurrence in coastal sedimentary areas as a function of simple and globally available geospatial features (e.g., derived from digital elevation models) and standard earthquake-specific intensity data (e.g., peak ground acceleration). Some of the geospatial explanatory variables that we consider are taken from the hydrology community, which has a long tradition of using remotely sensed data as proxies for subsurface parameters. As a result of using high resolution, remotely-sensed, and spatially continuous data as a proxy for important subsurface parameters such as soil density and soil saturation, and by using a probabilistic modeling framework, our liquefaction model inherently includes the natural spatial variability of liquefaction occurrence and provides an estimate of spatial extent of liquefaction for a given earthquake. To provide a quantitative check on how the predicted probabilities relate to spatial extent of liquefaction, we report the frequency of observed liquefaction features within a range of predicted probabilities. The percentage of liquefaction is the areal extent of observed liquefaction within a given probability contour. The regional model and the results show that there is a strong relationship between the predicted probability and the observed percentage of liquefaction. Visual inspection of the probability contours for each event also indicates that the pattern of liquefaction is well represented by the model.
The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.
Dias, José M B; Leitao, José M N
2002-01-01
This paper presents an effective algorithm for absolute phase (not simply modulo-2-pi) estimation from incomplete, noisy and modulo-2pi observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2-pi-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques and an iterative conditional modes (ICM) step (pi-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2pi-multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data is low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.
Saad, Wilson A; Guarda, I F M S; Camargo, L A A; Santos, T A F B; Guarda, R S; Saad, Willian A; Simões, S; Rodrigues, J Antunes
2003-07-01
We investigated the effect of L-NAME, a nitric oxide (NO) inhibitor, and sodium nitroprusside (SNP), an NO-donating agent, on pilocarpine-induced alterations in salivary flow, mean arterial blood pressure (MAP) and heart rate (HR) in rats. Male Holtzman rats (250-300 g) were implanted with a stainless steel cannula directly into the median preoptic nucleus (MnPO). Pilocarpine (10, 20, 40, 80, 160 µg) injected into the MnPO induced an increase in salivary secretion (P<0.01). Pilocarpine (1, 2, 4, 8, 16 mg/kg) ip also increased salivary secretion (P<0.01). Injection of L-NAME (40 µg) into the MnPO prior to pilocarpine (10, 20, 40, 80, 160 µg) injected into the MnPO or ip (1, 2, 4, 8, 16 mg/kg) increased salivary secretion (P<0.01). SNP (30 µg) injected into the MnPO or ip prior to pilocarpine attenuated salivary secretion (P<0.01). Pilocarpine (40 µg) injection into the MnPO increased MAP and decreased HR (P<0.01). Pilocarpine (4 mg/kg body weight) ip produced a decrease in MAP and an increase in HR (P<0.01). Injection of L-NAME (40 µg) into the MnPO prior to pilocarpine potentiated the increase in MAP and reduced HR (P<0.01). SNP (30 µg) injected into the MnPO prior to pilocarpine attenuated (100%) the effect of pilocarpine on MAP, with no effect on HR. Administration of L-NAME (40 µg) into the MnPO potentiated the effect of pilocarpine injected ip. SNP (30 µg) injected into the MnPO attenuated the effect of ip pilocarpine on MAP and HR. The present study suggests that in the rat MnPO 1) NO is important for the effects of pilocarpine on salivary flow, and 2) pilocarpine interferes with blood pressure and HR (side effects of pilocarpine) that are attenuated by NO.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
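A generic Wald sequential probability ratio test, with the standard decision boundaries derived from the desired error rates, is sketched below. This is a textbook-style illustration; the paper's specific reduction of the likelihood ratio to collision-probability estimates is not reproduced here, and the per-update log likelihood ratios in the example are made up.

```python
import math

def wald_sprt(log_likelihood_ratios, alpha=0.01, beta=0.01):
    """Generic Wald sequential probability ratio test.

    log_likelihood_ratios: per-observation log likelihood ratio of H1
    (e.g. 'collision risk exceeds a tolerable threshold') versus H0.
    alpha, beta: tolerated false-alarm and missed-detection probabilities.
    Returns 'accept H1', 'accept H0', or 'continue' if no boundary is crossed."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this sum
    lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this sum
    total = 0.0
    for llr in log_likelihood_ratios:
        total += llr
        if total >= upper:
            return "accept H1"
        if total <= lower:
            return "accept H0"
    return "continue"

# Example with illustrative per-update log likelihood ratios:
print(wald_sprt([0.8, 1.1, 0.9, 1.4, 0.7]))
```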
Yokokawa, Miki; Desjardins, Benoit; Crawford, Thomas; Good, Eric; Morady, Fred; Bogun, Frank
2013-01-08
The purpose of this study was to assess the determinants of ventricular tachycardia (VT) recurrence in patients who underwent VT ablation for post-infarction VT. The factors that predict recurrence of VT after catheter ablation in patients with prior infarctions are not well described. Catheter ablation was performed in 98 consecutive patients (88 males [90%]; mean age 67 ± 10 years; ejection fraction 27 ± 13%) with post-infarction VT. Electrograms from the implantable cardioverter-defibrillator were analyzed, and VTs were classified as clinical, nonclinical, or new clinical. A total of 725 VTs were induced during the ablation procedure. All VTs were targeted. In 76 patients, 105 clinical VTs were inducible. Critical sites were identified with entrainment mapping and pace-mapping (≥10 of 12 matching leads) for 75 of 105 clinical VTs (71%) and for 278 of 620 nonclinical VTs (45%). Post-ablation, the clinical VT was not inducible in any patient, and all VTs were rendered noninducible in 63% of the patients. Over a mean follow-up period of 35 ± 23 months, 65 of 98 patients (66%) had no recurrent VTs and 33 (34%) had VT recurrence. A new VT occurred in 26 of 33 patients (79%), and a prior clinical VT recurred in 7 patients (21%). Patients with recurrent VT had a larger scar area as assessed by electroanatomic mapping compared with patients without recurrent VTs (93 ± 40 cm(2) vs. 69 ± 30 cm(2); p = 0.002). In patients with repeat procedures, the majority of inducible VTs for which a critical area could be identified were at a distance of 6 ± 3 mm to the prior ablation lesions. Patients with recurrent VTs have a larger scar as assessed by electroanatomic mapping. Most recurrent VTs were new, and the majority of these VTs were mapped to the vicinity of prior ablation lesions in patients with repeat procedures. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Seismic-hazard maps and time histories for the commonwealth of Kentucky.
DOT National Transportation Integrated Search
2008-06-01
The ground-motion hazard maps and time histories for three earthquake scenarios, expected earthquakes, probable earthquakes, and maximum credible earthquakes on the free surface in hard rock (shear-wave velocity >1,500 m/s), were derived using the de...
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
Mean phase predictor for maximum a posteriori demodulator
NASA Technical Reports Server (NTRS)
Altes, Richard A. (Inventor)
1996-01-01
A system and method for optimal maximum a posteriori (MAP) demodulation using a novel mean phase predictor. The mean phase predictor conducts cumulative averaging over multiple blocks of phase samples to provide accurate prior mean phases, to be input into a MAP phase estimator.
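One way to picture "cumulative averaging over multiple blocks of phase samples" is a running circular mean of all previously seen blocks, used as the prior mean phase for the next block. The sketch below is an assumed, minimal reading of that idea (unit-phasor accumulation), not the patented predictor's exact algorithm, and the von Mises test data are made up.

```python
import numpy as np

def prior_mean_phases(phase_blocks):
    """For each block of noisy phase samples, return a prior mean phase
    obtained by cumulatively averaging all preceding blocks (circular mean)."""
    priors = []
    acc = 0j                       # running sum of unit phasors
    for block in phase_blocks:
        priors.append(float(np.angle(acc)) if acc != 0 else 0.0)
        acc += np.exp(1j * np.asarray(block, float)).sum()
    return priors

# Five blocks of 64 noisy phase samples centred near 0.5 rad (illustrative).
blocks = [np.random.vonmises(0.5, 4.0, size=64) for _ in range(5)]
print(prior_mean_phases(blocks))
```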
Code of Federal Regulations, 2012 CFR
2012-01-01
... AGRICULTURE EXPORT PROGRAMS COOPERATIVE AGREEMENTS FOR THE DEVELOPMENT OF FOREIGN MARKETS FOR AGRICULTURAL COMMODITIES Market Access Program § 1485.18 Advances. (a) Policy. In general, CCC operates MAP and EIP/MAP on... participant for generic promotion activities. Prior to making an advance, CCC may require the participant to...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-10
... explained in the legislative history of the Omnibus Trade and Competitiveness Act of 1988, the Department... Google Maps: https://maps.google.com . The rates were in effect prior to the POR, so we adjusted them to...
The Probabilities of Unique Events
2012-08-30
social justice and also participated in antinuclear demonstrations. The participants ranked the probability that Linda is a feminist bank teller as...investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only...of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of
Weighted image de-fogging using luminance dark prior
NASA Astrophysics Data System (ADS)
Kansal, Isha; Kasana, Singara Singh
2017-10-01
In this work, the weighted image de-fogging process based upon the dark channel prior is modified by using a luminance dark prior. The dark channel prior estimates the transmission using all three colour channels, whereas the luminance dark prior does the same using only the Y component of the YUV colour space. For each pixel in a local patch, the luminance dark prior therefore scans one value per pixel rather than the three colour-channel values per pixel used in the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based upon a difference prior is used, which mitigates halo artefacts during transmission estimation. The major drawback of the weighted technique is that it does not keep the transmission constant within a local patch even when there are no significant depth discontinuities, so the de-fogged image looks over-smoothed and has low contrast. In addition, in some images the weighted transmission still retains faint halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of the de-fogged images. Furthermore, a novel approach is proposed to remove the pixels belonging to bright light source(s) during the atmospheric light estimation process, based upon the histogram of the YUV colour space. To show its effectiveness, the proposed technique is compared with existing techniques, and the comparison shows that it performs better.
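The luminance-only transmission estimate can be sketched in a few lines in the style of the standard dark-channel-prior formulation. The omega value, patch size, choice of filters, and the simple Gaussian smoothing step below are generic assumptions; the paper's weighted difference-prior formulation and its bright-source handling are not reproduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter, gaussian_filter

def transmission_from_luminance(y_channel, atmospheric_light, patch=15, omega=0.95):
    """Estimate a transmission map from the luminance (Y) channel only:
    t(x) = 1 - omega * min over a patch of (Y / A_y). Only one value per
    pixel is scanned, instead of three colour channels as in the DCP."""
    normalized = np.asarray(y_channel, float) / float(atmospheric_light)
    dark = minimum_filter(normalized, size=patch)      # luminance dark prior
    transmission = 1.0 - omega * dark
    # Blurring the map (cf. the Gaussian filtering step described above)
    # reduces blockiness and residual halo artefacts.
    return gaussian_filter(transmission, sigma=patch / 3.0)
```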
Understanding map projections: Chapter 15
Usery, E. Lynn; Kent, Alexander J.; Vujakovic, Peter
2018-01-01
It has probably never been more important in the history of cartography than now that people understand how maps work. With increasing globalization, for example, world maps provide a key format for the transmission of information, but are often poorly used. Examples of poor understanding and use of projections and the resultant maps are many; for instance, the use of rectangular world maps in the United Kingdom press to show Chinese and Korean missile ranges as circles, something which can only be achieved on equidistant projections and then only from one launch point (Vujakovic, 2014).
NASA Astrophysics Data System (ADS)
Morris, Phillip A.
The prevalence of low-cost side scanning sonar systems mounted on small recreational vessels has created improved opportunities to identify and map submerged navigational hazards in freshwater impoundments. However, these economical sensors also present unique challenges for automated techniques. This research explores related literature in automated sonar imagery processing and mapping technology, proposes and implements a framework derived from these sources, and evaluates the approach with video collected from a recreational grade sonar system. Image analysis techniques including optical character recognition and an unsupervised computer automated detection (CAD) algorithm are employed to extract the transducer GPS coordinates and slant range distance of objects protruding from the lake bottom. The retrieved information is formatted for inclusion into a spatial mapping model. Specific attributes of the sonar sensors are modeled such that probability profiles may be projected onto a three dimensional gridded map. These profiles are computed from multiple points of view as sonar traces crisscross or come near each other. As lake levels fluctuate over time so do the elevation points of view. With each sonar record, the probability of a hazard existing at certain elevations at the respective grid points is updated with Bayesian mechanics. As reinforcing data is collected, the confidence of the map improves. Given a lake's current elevation and a vessel draft, a final generated map can identify areas of the lake that have a high probability of containing hazards that threaten navigation. The approach is implemented in C/C++ utilizing OpenCV, Tesseract OCR, and QGIS open source software and evaluated in a designated test area at Lake Lavon, Collin County, Texas.
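The per-cell Bayesian reinforcement described above can be illustrated with a minimal update rule: each grid cell holds the probability that a hazard tops a given elevation, and each sonar pass updates it using assumed detection and false-alarm rates. The rates and starting probability below are illustrative placeholders, not values from the thesis.

```python
def update_hazard_probability(prior, detected, p_detect=0.8, p_false_alarm=0.1):
    """Bayesian update of the probability that a grid cell contains a hazard
    at (or above) a given elevation, after one sonar observation.

    p_detect:      P(return | hazard present)  -- assumed sensor property
    p_false_alarm: P(return | no hazard)       -- assumed sensor property"""
    if detected:
        like_hazard, like_clear = p_detect, p_false_alarm
    else:
        like_hazard, like_clear = 1.0 - p_detect, 1.0 - p_false_alarm
    numerator = like_hazard * prior
    return numerator / (numerator + like_clear * (1.0 - prior))

# Repeated crisscrossing passes over the same cell reinforce (or suppress)
# the estimate, so map confidence improves as data accumulate.
p = 0.05
for observation in [True, True, False, True]:
    p = update_hazard_probability(p, observation)
    print(round(p, 3))
```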
Determining Optimal Evacuation Decision Policies for Disasters
2012-03-01
[Only table-of-contents and figure-list fragments are recoverable for this record: "Calculating the Hit Probability (Phit)", "Phit versus Vertical Volatility", "Large Probability Matrix (Map)", "Particle Trajectory with Phit data", and "Cost-To...".]
Some Simple Formulas for Posterior Convergence Rates
2014-01-01
We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit and involve no essential assumptions and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278
Hippocampus segmentation using locally weighted prior based level set
NASA Astrophysics Data System (ADS)
Achuthan, Anusha; Rajeswari, Mandava
2015-12-01
Segmentation of the hippocampus in the brain is one of the major challenges in medical image segmentation because of its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures such as the amygdala. This intensity similarity causes the hippocampus to have weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate results. Prior information, such as shape and spatial information, therefore needs to be assimilated into existing segmentation methods to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods. However, the prior information has typically been integrated in a global manner, which does not reflect the real scenario during clinical delineation. Therefore, this paper presents a level set model with locally integrated prior information. This work utilizes a mean shape model to provide automatic initialization for the level set evolution, and the mean shape is integrated as prior information into the level set model. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during the level set evolution. The edge weighting map indicates which voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, the geodesic active contour, yields an improvement of 9% in the average Dice coefficient.
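One simple reading of such an edge weighting map is a per-voxel weight that is high where the image gradient is strong (let edges drive the evolution) and low where it is weak (let the prior shape drive it). The combination rule, constants, and function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def edge_weighting_map(image, sigma=1.0, k=1.0):
    """Per-voxel weight in [0, 1]: near 1 where the image has strong edges
    (image term should dominate), near 0 where edges are weak (the prior
    shape term should dominate)."""
    grad = gaussian_gradient_magnitude(np.asarray(image, float), sigma=sigma)
    return grad**2 / (grad**2 + k**2)

def evolution_speed(image, prior_speed, sigma=1.0, k=1.0):
    """Locally blend an edge-based speed term with a prior-driven speed term."""
    w = edge_weighting_map(image, sigma, k)
    grad = gaussian_gradient_magnitude(np.asarray(image, float), sigma=sigma)
    edge_speed = 1.0 / (1.0 + grad**2)       # geodesic-active-contour style term
    return w * edge_speed + (1.0 - w) * prior_speed
```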
Tack, Jason D.; Fedy, Bradley C.
2015-01-01
Proactive conservation planning for species requires the identification of important spatial attributes across ecologically relevant scales in a model-based framework. However, it is often difficult to develop predictive models, as the explanatory data required for model development across regional management scales is rarely available. Golden eagles are a large-ranging predator of conservation concern in the United States that may be negatively affected by wind energy development. Thus, identifying landscapes least likely to pose conflict between eagles and wind development via shared space prior to development will be critical for conserving populations in the face of imposing development. We used publically available data on golden eagle nests to generate predictive models of golden eagle nesting sites in Wyoming, USA, using a suite of environmental and anthropogenic variables. By overlaying predictive models of golden eagle nesting habitat with wind energy resource maps, we highlight areas of potential conflict among eagle nesting habitat and wind development. However, our results suggest that wind potential and the relative probability of golden eagle nesting are not necessarily spatially correlated. Indeed, the majority of our sample frame includes areas with disparate predictions between suitable nesting habitat and potential for developing wind energy resources. Map predictions cannot replace on-the-ground monitoring for potential risk of wind turbines on wildlife populations, though they provide industry and managers a useful framework to first assess potential development. PMID:26262876
Brenner, Darren R.; Amos, Christopher I.; Brhane, Yonathan; Timofeeva, Maria N.; Caporaso, Neil; Wang, Yufei; Christiani, David C.; Bickeböller, Heike; Yang, Ping; Albanes, Demetrius; Stevens, Victoria L.; Gapstur, Susan; McKay, James; Boffetta, Paolo; Zaridze, David; Szeszenia-Dabrowska, Neonilia; Lissowska, Jolanta; Rudnai, Peter; Fabianova, Eleonora; Mates, Dana; Bencko, Vladimir; Foretova, Lenka; Janout, Vladimir; Krokan, Hans E.; Skorpen, Frank; Gabrielsen, Maiken E.; Vatten, Lars; Njølstad, Inger; Chen, Chu; Goodman, Gary; Lathrop, Mark; Vooder, Tõnu; Välk, Kristjan; Nelis, Mari; Metspalu, Andres; Broderick, Peter; Eisen, Timothy; Wu, Xifeng; Zhang, Di; Chen, Wei; Spitz, Margaret R.; Wei, Yongyue; Su, Li; Xie, Dong; She, Jun; Matsuo, Keitaro; Matsuda, Fumihiko; Ito, Hidemi; Risch, Angela; Heinrich, Joachim; Rosenberger, Albert; Muley, Thomas; Dienemann, Hendrik; Field, John K.; Raji, Olaide; Chen, Ying; Gosney, John; Liloglou, Triantafillos; Davies, Michael P.A.; Marcus, Michael; McLaughlin, John; Orlow, Irene; Han, Younghun; Li, Yafang; Zong, Xuchen; Johansson, Mattias; Liu, Geoffrey; Tworoger, Shelley S.; Le Marchand, Loic; Henderson, Brian E.; Wilkens, Lynne R.; Dai, Juncheng; Shen, Hongbing; Houlston, Richard S.; Landi, Maria T.; Brennan, Paul; Hung, Rayjean J.
2015-01-01
Large-scale genome-wide association studies (GWAS) have likely uncovered all common variants at the GWAS significance level. Additional variants within the suggestive range (0.0001> P > 5×10−8) are, however, still of interest for identifying causal associations. This analysis aimed to apply novel variant prioritization approaches to identify additional lung cancer variants that may not reach the GWAS level. Effects were combined across studies with a total of 33456 controls and 6756 adenocarcinoma (AC; 13 studies), 5061 squamous cell carcinoma (SCC; 12 studies) and 2216 small cell lung cancer cases (9 studies). Based on prior information such as variant physical properties and functional significance, we applied stratified false discovery rates, hierarchical modeling and Bayesian false discovery probabilities for variant prioritization. We conducted a fine mapping analysis as validation of our methods by examining top-ranking novel variants in six independent populations with a total of 3128 cases and 2966 controls. Three novel loci in the suggestive range were identified based on our Bayesian framework analyses: KCNIP4 at 4p15.2 (rs6448050, P = 4.6×10−7) and MTMR2 at 11q21 (rs10501831, P = 3.1×10−6) with SCC, as well as GAREM at 18q12.1 (rs11662168, P = 3.4×10−7) with AC. Use of our prioritization methods validated two of the top three loci associated with SCC (P = 1.05×10−4 for KCNIP4, represented by rs9799795) and AC (P = 2.16×10−4 for GAREM, represented by rs3786309) in the independent fine mapping populations. This study highlights the utility of using prior functional data for sequence variants in prioritization analyses to search for robust signals in the suggestive range. PMID:26363033
Brenner, Darren R; Amos, Christopher I; Brhane, Yonathan; Timofeeva, Maria N; Caporaso, Neil; Wang, Yufei; Christiani, David C; Bickeböller, Heike; Yang, Ping; Albanes, Demetrius; Stevens, Victoria L; Gapstur, Susan; McKay, James; Boffetta, Paolo; Zaridze, David; Szeszenia-Dabrowska, Neonilia; Lissowska, Jolanta; Rudnai, Peter; Fabianova, Eleonora; Mates, Dana; Bencko, Vladimir; Foretova, Lenka; Janout, Vladimir; Krokan, Hans E; Skorpen, Frank; Gabrielsen, Maiken E; Vatten, Lars; Njølstad, Inger; Chen, Chu; Goodman, Gary; Lathrop, Mark; Vooder, Tõnu; Välk, Kristjan; Nelis, Mari; Metspalu, Andres; Broderick, Peter; Eisen, Timothy; Wu, Xifeng; Zhang, Di; Chen, Wei; Spitz, Margaret R; Wei, Yongyue; Su, Li; Xie, Dong; She, Jun; Matsuo, Keitaro; Matsuda, Fumihiko; Ito, Hidemi; Risch, Angela; Heinrich, Joachim; Rosenberger, Albert; Muley, Thomas; Dienemann, Hendrik; Field, John K; Raji, Olaide; Chen, Ying; Gosney, John; Liloglou, Triantafillos; Davies, Michael P A; Marcus, Michael; McLaughlin, John; Orlow, Irene; Han, Younghun; Li, Yafang; Zong, Xuchen; Johansson, Mattias; Liu, Geoffrey; Tworoger, Shelley S; Le Marchand, Loic; Henderson, Brian E; Wilkens, Lynne R; Dai, Juncheng; Shen, Hongbing; Houlston, Richard S; Landi, Maria T; Brennan, Paul; Hung, Rayjean J
2015-11-01
Large-scale genome-wide association studies (GWAS) have likely uncovered all common variants at the GWAS significance level. Additional variants within the suggestive range (0.0001> P > 5×10(-8)) are, however, still of interest for identifying causal associations. This analysis aimed to apply novel variant prioritization approaches to identify additional lung cancer variants that may not reach the GWAS level. Effects were combined across studies with a total of 33456 controls and 6756 adenocarcinoma (AC; 13 studies), 5061 squamous cell carcinoma (SCC; 12 studies) and 2216 small cell lung cancer cases (9 studies). Based on prior information such as variant physical properties and functional significance, we applied stratified false discovery rates, hierarchical modeling and Bayesian false discovery probabilities for variant prioritization. We conducted a fine mapping analysis as validation of our methods by examining top-ranking novel variants in six independent populations with a total of 3128 cases and 2966 controls. Three novel loci in the suggestive range were identified based on our Bayesian framework analyses: KCNIP4 at 4p15.2 (rs6448050, P = 4.6×10(-7)) and MTMR2 at 11q21 (rs10501831, P = 3.1×10(-6)) with SCC, as well as GAREM at 18q12.1 (rs11662168, P = 3.4×10(-7)) with AC. Use of our prioritization methods validated two of the top three loci associated with SCC (P = 1.05×10(-4) for KCNIP4, represented by rs9799795) and AC (P = 2.16×10(-4) for GAREM, represented by rs3786309) in the independent fine mapping populations. This study highlights the utility of using prior functional data for sequence variants in prioritization analyses to search for robust signals in the suggestive range. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
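One of the prioritization tools named above, the Bayesian false discovery probability, can be illustrated with a short sketch following Wakefield's approximate Bayes factor. The effect estimate, standard error, prior variance and prior probability below are illustrative placeholders, not values taken from this study.

```python
import numpy as np

def bfdp(beta_hat, se, prior_var=0.21**2, prior_prob=1e-4):
    """Bayesian false discovery probability (Wakefield-style approximation).

    beta_hat   : estimated log odds ratio for the variant
    se         : its standard error
    prior_var  : prior variance W of the log odds ratio under association
    prior_prob : prior probability that the variant is truly associated
    """
    V = se**2
    z2 = (beta_hat / se)**2
    # Approximate Bayes factor in favour of the null hypothesis
    abf_null = np.sqrt((V + prior_var) / V) * np.exp(-z2 * prior_var / (2 * (V + prior_var)))
    prior_odds_null = (1 - prior_prob) / prior_prob
    post_odds_null = abf_null * prior_odds_null
    return post_odds_null / (1 + post_odds_null)

# Hypothetical suggestive-range variant: OR ~ 1.15 with SE 0.03 on the log scale
print(round(bfdp(np.log(1.15), 0.03), 4))
```

A small BFDP flags the variant as worth carrying into fine mapping even though its p-value sits in the suggestive range.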
Chesnais, Cédric B.; Awaca-Uvon, Naomi-Pitchouna; Bolay, Fatoma K.; Boussinesq, Michel; Fischer, Peter U.; Gankpala, Lincoln; Meite, Aboulaye; Missamou, François; Pion, Sébastien D.
2017-01-01
Background The Global Programme to Eliminate Lymphatic Filariasis uses point-of-care tests for circulating filarial antigenemia (CFA) to map endemic areas and for monitoring and evaluating the success of mass drug administration (MDA) programs. We compared the performance of the reference BinaxNOW Filariasis card test (ICT, introduced in 1997) with the Alere Filariasis Test Strip (FTS, introduced in 2013) in 5 endemic study sites in Africa. Methodology The tests were compared prior to MDA in two study sites (Congo and Côte d'Ivoire) and in three sites that had received MDA (DRC and 2 sites in Liberia). Data were analyzed with regard to % positivity, % agreement, and heterogeneity. Models evaluated potential effects of age, gender, and blood microfilaria (Mf) counts in individuals and effects of endemicity and history of MDA at the village level as potential factors linked to higher sensitivity of the FTS. Lastly, we assessed relationships between CFA scores and Mf in pre- and post-MDA settings. Principal findings Paired test results were available for 3,682 individuals. Antigenemia rates were 8% and 22% higher by FTS than by ICT in pre-MDA and in post-MDA sites, respectively. FTS/ICT ratios were higher in areas with low infection rates. The probability of having microfilaremia was much higher in persons with CFA scores >1 in untreated areas. However, this was not true in post-MDA settings. Conclusions/Significance This study has provided extensive new information on the performance of the FTS compared to ICT in Africa and it has confirmed the increased sensitivity of FTS reported in prior studies. Variability in FTS/ICT was related in part to endemicity level, history of MDA, and perhaps to the medications used for MDA. These results suggest that FTS should be superior to ICT for mapping, for transmission assessment surveys, and for post-MDA surveillance. PMID:28892473
Chesnais, Cédric B; Awaca-Uvon, Naomi-Pitchouna; Bolay, Fatoma K; Boussinesq, Michel; Fischer, Peter U; Gankpala, Lincoln; Meite, Aboulaye; Missamou, François; Pion, Sébastien D; Weil, Gary J
2017-09-01
The Global Programme to Eliminate Lymphatic Filariasis uses point-of-care tests for circulating filarial antigenemia (CFA) to map endemic areas and for monitoring and evaluating the success of mass drug administration (MDA) programs. We compared the performance of the reference BinaxNOW Filariasis card test (ICT, introduced in 1997) with the Alere Filariasis Test Strip (FTS, introduced in 2013) in 5 endemic study sites in Africa. The tests were compared prior to MDA in two study sites (Congo and Côte d'Ivoire) and in three sites that had received MDA (DRC and 2 sites in Liberia). Data were analyzed with regard to % positivity, % agreement, and heterogeneity. Models evaluated potential effects of age, gender, and blood microfilaria (Mf) counts in individuals and effects of endemicity and history of MDA at the village level as potential factors linked to higher sensitivity of the FTS. Lastly, we assessed relationships between CFA scores and Mf in pre- and post-MDA settings. Paired test results were available for 3,682 individuals. Antigenemia rates were 8% and 22% higher by FTS than by ICT in pre-MDA and in post-MDA sites, respectively. FTS/ICT ratios were higher in areas with low infection rates. The probability of having microfilaremia was much higher in persons with CFA scores >1 in untreated areas. However, this was not true in post-MDA settings. This study has provided extensive new information on the performance of the FTS compared to ICT in Africa and it has confirmed the increased sensitivity of FTS reported in prior studies. Variability in FTS/ICT was related in part to endemicity level, history of MDA, and perhaps to the medications used for MDA. These results suggest that FTS should be superior to ICT for mapping, for transmission assessment surveys, and for post-MDA surveillance.
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translate to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation mainly during download of requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors-sparsity prior and graph-signal smoothness prior-for reverse mapping to recover original fine quantization bin indices, with either deterministic guarantee (lossless mode) or statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
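The reverse-mapping step can be pictured with a toy MAP rule: given the coarse bin a DCT coefficient was re-quantized into, pick the fine bin with the largest prior mass among the fine bins consistent with that coarse bin. The Laplacian prior and the step sizes below are illustrative stand-ins for the sparsity and graph-smoothness priors actually used in the paper.

```python
import numpy as np

def map_fine_bin(coarse_idx, q_fine, q_coarse, laplace_b=8.0):
    """MAP recovery of the original (fine) quantization bin of a DCT coefficient.

    coarse_idx : index of the coarse bin the coefficient was re-quantized into
    q_fine     : original fine quantization step
    q_coarse   : coarser step used for storage
    laplace_b  : scale of a zero-mean Laplacian prior on the coefficient
    """
    lo, hi = (coarse_idx - 0.5) * q_coarse, (coarse_idx + 0.5) * q_coarse
    # Candidate fine bins whose reconstruction level k * q_fine lies inside the coarse bin
    candidates = np.arange(int(np.ceil(lo / q_fine)), int(np.floor(hi / q_fine)) + 1)
    def laplace_cdf(x):
        return np.where(x < 0, 0.5 * np.exp(x / laplace_b), 1.0 - 0.5 * np.exp(-x / laplace_b))
    # Prior probability mass of each candidate fine bin
    mass = laplace_cdf((candidates + 0.5) * q_fine) - laplace_cdf((candidates - 0.5) * q_fine)
    return int(candidates[np.argmax(mass)])

# A coefficient stored in coarse bin 2 (step 10) most probably came from fine bin 4 (step 4)
print(map_fine_bin(coarse_idx=2, q_fine=4, q_coarse=10))
```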
ERIC Educational Resources Information Center
Murphy, Amanda; Terrizzi, Marissa; Cormas, Peter
2012-01-01
"Probability is a difficult concept to teach, because children and adults find it counterintuitive." This is impetus to consider the detailed planning of a set of lessons with a "mixed", in many senses, group of fourth graders. Can the use of prior experience, and the knowledge associated with that experience, make probability a concept that is…
Robust Connectivity in Sensory and Ad Hoc Network
2011-02-01
as the prior probability is π0 = 0.8, the error probability should be capped at 0.2. This seemingly pathological result is due to the fact that the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
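A rough sense of the diagnostic can be had from a generic sketch: fit a beta prior to binomial demand-failure data through the beta-binomial marginal likelihood, then compare observed and expected cell counts with a chi-square statistic. The data, binning and equal sample sizes below are illustrative and simpler than the tests developed in the paper, which handle varying sample sizes and exposure times.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical failure-on-demand data: x failures out of n demands at each of 12 plants
x = np.array([0, 1, 0, 2, 1, 0, 3, 1, 0, 2, 1, 0])
n = 50                                         # equal number of demands per plant, for simplicity

def neg_marginal_loglik(params):
    a, b = np.exp(params)                      # keep the beta parameters positive
    return -np.sum(stats.betabinom.logpmf(x, n, a, b))

a, b = np.exp(optimize.minimize(neg_marginal_loglik, x0=[0.0, 3.0]).x)

# Chi-square goodness of fit of the fitted beta-binomial: cells 0, 1, 2 and >=3 failures
obs = np.array([np.sum(x == 0), np.sum(x == 1), np.sum(x == 2), np.sum(x >= 3)])
p = stats.betabinom.pmf([0, 1, 2], n, a, b)
expected = len(x) * np.append(p, 1 - p.sum())
chi2 = np.sum((obs - expected) ** 2 / expected)
dof = len(obs) - 1 - 2                         # cells minus 1 minus number of fitted parameters
print(a, b, chi2, dof)
```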
Sargeant, Glen A.; Sovada, Marsha A.; Slivinski, Christiane C.; Johnson, Douglas H.
2005-01-01
Accurate maps of species distributions are essential tools for wildlife research and conservation. Unfortunately, biologists often are forced to rely on maps derived from observed occurrences recorded opportunistically during observation periods of variable length. Spurious inferences are likely to result because such maps are profoundly affected by the duration and intensity of observation and by methods used to delineate distributions, especially when detection is uncertain. We conducted a systematic survey of swift fox (Vulpes velox) distribution in western Kansas, USA, and used Markov chain Monte Carlo (MCMC) image restoration to rectify these problems. During 1997–1999, we searched 355 townships (ca. 93 km²) 1–3 times each for an average cost of $7,315 per year and achieved a detection rate (probability of detecting swift foxes, if present, during a single search) of 0.69 (95% Bayesian confidence interval [BCI] = [0.60, 0.77]). Our analysis produced an estimate of the underlying distribution, rather than a map of observed occurrences, that reflected the uncertainty associated with estimates of model parameters. To evaluate our results, we analyzed simulated data with similar properties. Results of our simulations suggest negligible bias and good precision when probabilities of detection on ≥1 survey occasions (cumulative probabilities of detection) exceed 0.65. Although the use of MCMC image restoration has been limited by theoretical and computational complexities, alternatives do not possess the same advantages. Image models accommodate uncertain detection, do not require spatially independent data or a census of map units, and can be used to estimate species distributions directly from observations without relying on habitat covariates or parameters that must be estimated subjectively. These features facilitate economical surveys of large regions, the detection of temporal trends in distribution, and assessments of landscape-level relations between species and habitats. Requirements for the use of MCMC image restoration include study areas that can be partitioned into regular grids of mapping units, spatially contagious species distributions, reliable methods for identifying target species, and cumulative probabilities of detection ≥0.65.
Sargeant, G.A.; Sovada, M.A.; Slivinski, C.C.; Johnson, D.H.
2005-01-01
Accurate maps of species distributions are essential tools for wildlife research and conservation. Unfortunately, biologists often are forced to rely on maps derived from observed occurrences recorded opportunistically during observation periods of variable length. Spurious inferences are likely to result because such maps are profoundly affected by the duration and intensity of observation and by methods used to delineate distributions, especially when detection is uncertain. We conducted a systematic survey of swift fox (Vulpes velox) distribution in western Kansas, USA, and used Markov chain Monte Carlo (MCMC) image restoration to rectify these problems. During 1997-1999, we searched 355 townships (ca. 93 km²) 1-3 times each for an average cost of $7,315 per year and achieved a detection rate (probability of detecting swift foxes, if present, during a single search) of 0.69 (95% Bayesian confidence interval [BCI] = [0.60, 0.77]). Our analysis produced an estimate of the underlying distribution, rather than a map of observed occurrences, that reflected the uncertainty associated with estimates of model parameters. To evaluate our results, we analyzed simulated data with similar properties. Results of our simulations suggest negligible bias and good precision when probabilities of detection on ≥1 survey occasions (cumulative probabilities of detection) exceed 0.65. Although the use of MCMC image restoration has been limited by theoretical and computational complexities, alternatives do not possess the same advantages. Image models accommodate uncertain detection, do not require spatially independent data or a census of map units, and can be used to estimate species distributions directly from observations without relying on habitat covariates or parameters that must be estimated subjectively. These features facilitate economical surveys of large regions, the detection of temporal trends in distribution, and assessments of landscape-level relations between species and habitats. Requirements for the use of MCMC image restoration include study areas that can be partitioned into regular grids of mapping units, spatially contagious species distributions, reliable methods for identifying target species, and cumulative probabilities of detection ≥0.65.
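The image-restoration idea can be sketched with a toy Gibbs sampler on a grid: each cell has a latent occupancy state under an autologistic (Ising-style) spatial prior, and repeated searches with imperfect detection supply the likelihood. Grid size, interaction strength and detection probability below are made up; the authors' actual model and priors differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
R, C, visits, p_det, beta = 20, 20, 3, 0.69, 1.0    # grid, searches per cell, detection prob, spatial coupling

# Simulate a "true" occupancy map and the resulting detection counts
z_true = rng.random((R, C)) < 0.4
y = rng.binomial(visits, p_det * z_true)            # detections per cell (always 0 if unoccupied)

z = (y > 0).astype(int)                             # initialise at observed occurrences
def neighbours_sum(z, i, j):
    return sum(z[i2, j2] for i2, j2 in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
               if 0 <= i2 < R and 0 <= j2 < C)

post = np.zeros((R, C))
n_iter, burn = 500, 100
for it in range(n_iter):
    for i in range(R):
        for j in range(C):
            if y[i, j] > 0:                         # a detection implies occupancy
                z[i, j] = 1
                continue
            s = neighbours_sum(z, i, j)
            # Full conditional: autologistic prior times probability of 0 detections in `visits` searches
            w1 = np.exp(beta * s) * (1 - p_det) ** visits
            z[i, j] = rng.random() < w1 / (1.0 + w1)
    if it >= burn:
        post += z
post /= (n_iter - burn)                             # posterior occupancy probability per cell
print(post.round(2))
```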
Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin
Massada, Avi Bar; Radeloff, Volker C.; Stewart, Susan I.; Hawbaker, Todd J.
2009-01-01
The rapid growth of housing in and near the wildland–urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT) in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown. Our approach to modeling wildfire risk to structures can aid fire risk reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions.
Low Resolution Refinement of Atomic Models Against Crystallographic Data.
Nicholls, Robert A; Kovalevskiy, Oleg; Murshudov, Garib N
2017-01-01
This review describes some of the problems encountered during low-resolution refinement and map calculation. Refinement is considered as an application of Bayes' theorem, allowing combination of information from various sources including crystallographic experimental data and prior chemical and structural knowledge. The sources of prior knowledge relevant to macromolecules include basic chemical information such as bonds and angles, structural information from reference models of known homologs, knowledge about secondary structures, hydrogen bonding patterns, and similarity of non-crystallographically related copies of a molecule. Additionally, prior information encapsulating local conformational conservation is exploited, keeping local interatomic distances similar to those in the starting atomic model. The importance of designing an accurate likelihood function-the only link between model parameters and observed data-is emphasized. The review also reemphasizes the importance of phases, and describes how the use of raw observed amplitudes could give a better correlation between the calculated and "true" maps. It is shown that very noisy or absent observations can be replaced by calculated structure factors, weighted according to the accuracy of the atomic model. This approach helps to smoothen the map. However, such replacement should be used sparingly, as the bias toward errors in the model could be too much to avoid. It is in general recommended that, whenever a new map is calculated, map quality should be judged by inspection of the parts of the map where there is no atomic model. It is also noted that it is advisable to work with multiple blurred and sharpened maps, as different parts of a crystal may exhibit different degrees of mobility. Doing so can allow accurate building of atomic models, accounting for overall shape as well as finer structural details. Some of the results described in this review have been implemented in the programs REFMAC5, ProSMART and LORESTR, which are available as part of the CCP4 software suite.
Structure-aware depth super-resolution using Gaussian mixture model
NASA Astrophysics Data System (ADS)
Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon
2015-03-01
This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that the pixels with similar color likely belong to similar depth. This assumption might induce texture transfer from the color image into the depth map and edge blurring artifacts at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model in which an estimated depth map is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
VizieR Online Data Catalog: X-ray sources in the AKARI NEP deep field (Krumpe+, 2015)
NASA Astrophysics Data System (ADS)
Krumpe, M.; Miyaji, T.; Brunner, H.; Hanami, H.; Ishigaki, T.; Takagi, T.; Markowitz, A. G.; Goto, T.; Malkan, M. A.; Matsuhara, H.; Pearson, C.; Ueda, Y.; Wada, T.
2015-06-01
The fits images labelled SeMap* are the sensitivity maps in which we give the minimum flux that would have caused a detection at each position. This flux depends on the maximum likelihood threshold chosen in the source detection run, the point spread function, and the background level at the chosen position. We create sensitivity maps in different energy bands (0.5-2, 0.5-7, 2-4, 2-7, and 4-7keV) by searching for the flux to reject the null-hypothesis that the flux at a given position is only caused by a background fluctuation. In a chosen energy band, we determine for each position in the survey the flux required to obtain a certain Poisson probability above the background counts. Since ML=-ln(P), we know from our ML=12 threshold the probability we are aiming for. In practice, we search for a value of -ln P_total that falls within Delta ML=+/-0.2 of our targeted ML threshold. This tolerance range corresponds to having one spurious source more or less in the whole survey. Note, that outside the deep Subaru/Suprime-Cam imaging the sensitivity maps should be used with caution since we assume for their generation a ML=12 over the whole area covered by Chandra. More details on the procedure of producing the sensitivity maps, including the PSF-summed background map and PSF-weighted averaged exposure maps are given in the paper, section 5.3. The fits images labelled u90* are the upper limit maps, where the upper 90 per cent confidence flux limit is given at each position. We take a Bayesian approach following Kraft, Burrows & Nousek, 1991ApJ...374..344K. Consequently, we obtain the upper 90 per cent confidence flux limit by searching for the flux such that given the observed counts the Bayesian probability of having this flux or larger is 10 per cent. More details on the procedure of producing the upper 90 per cent flux limit maps are given in the paper, section 5.4. (6 data files).
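The ML=-ln(P) threshold described above can be turned into a minimum detectable flux with a short calculation: find the smallest number of counts whose Poisson probability of arising from the background alone satisfies -ln P >= 12, then convert the excess counts to a flux. The background level, exposure and count-to-flux factor here are placeholders, not values from the catalogue.

```python
from scipy import stats

def min_detectable_flux(bkg_counts, exposure_s, counts_to_flux, ml_threshold=12.0):
    """Minimum flux giving -ln P(N >= n | background) >= ml_threshold at one position."""
    n = 1
    # stats.poisson.sf(n - 1, mu) = P(N >= n) under the background-only hypothesis
    while -stats.poisson.logsf(n - 1, bkg_counts) < ml_threshold:
        n += 1
    source_counts = n - bkg_counts              # excess counts attributable to a source
    return source_counts / exposure_s * counts_to_flux

# Placeholder values: 2.5 expected background counts in the aperture, 80 ks exposure,
# and a nominal count-rate-to-flux conversion factor
print(min_detectable_flux(2.5, 8.0e4, 1e-11))
```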
Matranga, Domenica; Firenze, Alberto; Vullo, Angela
2013-10-01
The aim of this study was to show the potential of Bayesian analysis in statistical modelling of dental caries data. Because of the bounded nature of the dmft (DMFT) index, zero-inflated binomial (ZIB) and beta-binomial (ZIBB) models were considered. The effects of incorporating prior information available about the parameters of models were also shown. The data set used in this study was the Belo Horizonte Caries Prevention (BELCAP) study (Böhning et al. (1999)), consisting of five variables collected among 797 Brazilian school children designed to evaluate four programmes for reducing caries. Only the eight primary molar teeth were considered in the data set. A data augmentation algorithm was used for estimation. Firstly, noninformative priors were used to express our lack of knowledge about the regression parameters. Secondly, prior information about the probability of being a structural zero dmft and the probability of being caries affected in the subpopulation of susceptible children was incorporated. With noninformative priors, the best fitting model was the ZIBB. Education (OR = 0.76, 95% CrI: 0.59, 0.99), all interventions (OR = 0.46, 95% CrI: 0.35, 0.62), rinsing (OR = 0.61, 95% CrI: 0.47, 0.80) and hygiene (OR = 0.65, 95% CrI: 0.49, 0.86) were demonstrated to be factors protecting children from being caries affected. Being male increased the probability of being caries diseased (OR = 1.19, 95% CrI: 1.01, 1.42). However, after incorporating informative priors, ZIB models' estimates were not influenced, while ZIBB models reduced deviance and confirmed the association with all interventions and rinsing only. In our application, Bayesian estimates showed accuracy and precision similar to likelihood-based estimates, while offering many computational advantages and the possibility of expressing all forms of uncertainty in terms of probability. The overdispersion parameter could explain why the introduction of prior information had significant effects on the parameters of the ZIBB model, while ZIB estimates remained unchanged. Finally, the better performance of the ZIBB model compared to the ZIB model was attributed to its ability to capture overdispersion in the data. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
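A minimal version of the zero-inflated binomial model used here can be fit by maximum likelihood in a few lines; the Bayesian data-augmentation fitting and the covariates (education, interventions, rinsing, hygiene, gender) are omitted, and the counts below are synthetic.

```python
import numpy as np
from scipy import optimize, stats

n_teeth = 8                                     # the eight primary molars
rng = np.random.default_rng(0)
# Synthetic dmft counts: ~30% structural zeros, otherwise Binomial(8, 0.25)
susceptible = rng.random(797) > 0.30
y = np.where(susceptible, rng.binomial(n_teeth, 0.25, 797), 0)

def zib_negloglik(params):
    pi, p = 1 / (1 + np.exp(-params))           # inflation and caries probabilities on (0, 1)
    binom0 = stats.binom.pmf(0, n_teeth, p)
    ll_zero = np.log(pi + (1 - pi) * binom0)    # a zero is either structural or a binomial zero
    ll_pos = np.log(1 - pi) + stats.binom.logpmf(y, n_teeth, p)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = optimize.minimize(zib_negloglik, x0=[0.0, 0.0])
pi_hat, p_hat = 1 / (1 + np.exp(-res.x))
print(round(pi_hat, 3), round(p_hat, 3))        # should recover roughly 0.30 and 0.25
```

The ZIBB variant replaces the binomial terms with beta-binomial ones, which is what allows it to absorb overdispersion.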
2010-01-01
Background The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However a full Bayesian analysis is often not feasible due to the large computational time involved. Results This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. Conclusions emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time. PMID:20969788
Wasser, Samuel K.; Hayward, Lisa S.; Hartman, Jennifer; Booth, Rebecca K.; Broms, Kristin; Berg, Jodi; Seely, Elizabeth; Lewis, Lyle; Smith, Heath
2012-01-01
State and federal actions to conserve northern spotted owl (Strix occidentalis caurina) habitat are largely initiated by establishing habitat occupancy. Northern spotted owl occupancy is typically assessed by eliciting their response to simulated conspecific vocalizations. However, proximity of barred owls (Strix varia)–a significant threat to northern spotted owls–can suppress northern spotted owl responsiveness to vocalization surveys and hence their probability of detection. We developed a survey method to simultaneously detect both species that does not require vocalization. Detection dogs (Canis familiaris) located owl pellets accumulated under roost sites, within search areas selected using habitat association maps. We compared success of detection dog surveys to vocalization surveys slightly modified from the U.S. Fish and Wildlife Service’s Draft 2010 Survey Protocol. Seventeen 2 km ×2 km polygons were each surveyed multiple times in an area where northern spotted owls were known to nest prior to 1997 and barred owl density was thought to be low. Mitochondrial DNA was used to confirm species from pellets detected by dogs. Spotted owl and barred owl detection probabilities were significantly higher for dog than vocalization surveys. For spotted owls, this difference increased with number of site visits. Cumulative detection probabilities of northern spotted owls were 29% after session 1, 62% after session 2, and 87% after session 3 for dog surveys, compared to 25% after session 1, increasing to 59% by session 6 for vocalization surveys. Mean detection probability for barred owls was 20.1% for dog surveys and 7.3% for vocal surveys. Results suggest that detection dog surveys can complement vocalization surveys by providing a reliable method for establishing occupancy of both northern spotted and barred owl without requiring owl vocalization. This helps meet objectives of Recovery Actions 24 and 25 of the Revised Recovery Plan for the Northern Spotted Owl. PMID:22916175
Wasser, Samuel K; Hayward, Lisa S; Hartman, Jennifer; Booth, Rebecca K; Broms, Kristin; Berg, Jodi; Seely, Elizabeth; Lewis, Lyle; Smith, Heath
2012-01-01
State and federal actions to conserve northern spotted owl (Strix occidentalis caurina) habitat are largely initiated by establishing habitat occupancy. Northern spotted owl occupancy is typically assessed by eliciting their response to simulated conspecific vocalizations. However, proximity of barred owls (Strix varia)-a significant threat to northern spotted owls-can suppress northern spotted owl responsiveness to vocalization surveys and hence their probability of detection. We developed a survey method to simultaneously detect both species that does not require vocalization. Detection dogs (Canis familiaris) located owl pellets accumulated under roost sites, within search areas selected using habitat association maps. We compared success of detection dog surveys to vocalization surveys slightly modified from the U.S. Fish and Wildlife Service's Draft 2010 Survey Protocol. Seventeen 2 km × 2 km polygons were each surveyed multiple times in an area where northern spotted owls were known to nest prior to 1997 and barred owl density was thought to be low. Mitochondrial DNA was used to confirm species from pellets detected by dogs. Spotted owl and barred owl detection probabilities were significantly higher for dog than vocalization surveys. For spotted owls, this difference increased with number of site visits. Cumulative detection probabilities of northern spotted owls were 29% after session 1, 62% after session 2, and 87% after session 3 for dog surveys, compared to 25% after session 1, increasing to 59% by session 6 for vocalization surveys. Mean detection probability for barred owls was 20.1% for dog surveys and 7.3% for vocal surveys. Results suggest that detection dog surveys can complement vocalization surveys by providing a reliable method for establishing occupancy of both northern spotted and barred owl without requiring owl vocalization. This helps meet objectives of Recovery Actions 24 and 25 of the Revised Recovery Plan for the Northern Spotted Owl.
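The cumulative detection probabilities quoted above follow the usual complement rule: with a constant per-session detection probability p, the cumulative probability after k sessions is 1-(1-p)^k. The values below are illustrative; note that the study's reported cumulative values for dog surveys rise faster than this formula would give, which suggests the per-session probability was not constant across sessions.

```python
# Cumulative probability of at least one detection after k independent survey sessions
def cumulative_detection(p_per_session, k):
    return 1 - (1 - p_per_session) ** k

for p in (0.29, 0.25):    # hypothetical constant per-session rates near the first-session values reported
    print([round(cumulative_detection(p, k), 2) for k in range(1, 7)])
```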
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density (PDF) and cumulative distribution functions (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach of the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational costs (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
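The marginal likelihood behind the N-mixture model can be written out directly for small counts: sum the product of binomial visit likelihoods over candidate site abundances N, weighted by a Poisson mixing distribution. The simulated counts and the truncation point of the sum over N below are illustrative.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
n_sites, n_visits, lam_true, p_true = 60, 4, 3.0, 0.4
N_true = rng.poisson(lam_true, n_sites)
y = rng.binomial(N_true[:, None], p_true, (n_sites, n_visits))   # counts per site and visit

N_max = 60                                                       # truncation of the infinite sum over N
N_grid = np.arange(N_max + 1)

def negloglik(params):
    lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    prior = stats.poisson.pmf(N_grid, lam)                       # mixing distribution over abundance
    ll = 0.0
    for i in range(n_sites):
        # P(y_i | N) for every candidate N, multiplied across the repeated visits
        lik_N = np.prod(stats.binom.pmf(y[i][:, None], N_grid, p), axis=0)
        ll += np.log(np.sum(lik_N * prior))
    return -ll

res = optimize.minimize(negloglik, x0=[np.log(2.0), 0.0])
print(np.exp(res.x[0]), 1 / (1 + np.exp(-res.x[1])))             # estimates of lambda and p
```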
Kanis, John A; Harvey, Nicholas C; Cooper, Cyrus; Johansson, Helena; Odén, Anders; McCloskey, Eugene V
2016-01-01
In most assessment guidelines, treatment for osteoporosis is recommended in individuals with prior fragility fractures, especially fractures at spine and hip. However, for those without prior fractures, the intervention thresholds can be derived using different methods. The aim of this report was to undertake a systematic review of the available information on the use of FRAX® in assessment guidelines, in particular the setting of thresholds and their validation. We identified 120 guidelines or academic papers that incorporated FRAX of which 38 provided no clear statement on how the fracture probabilities derived are to be used in decision-making in clinical practice. The remainder recommended a fixed intervention threshold (n=58), most commonly as a component of more complex guidance (e.g. bone mineral density (BMD) thresholds) or an age-dependent threshold (n=22). Two guidelines have adopted both age-dependent and fixed thresholds. Fixed probability thresholds have ranged from 4 to 20 % for a major fracture and 1.3-5 % for hip fracture. More than one half (39) of the 58 publications identified utilized a threshold probability of 20 % for a major osteoporotic fracture, many of which also mention a hip fracture probability of 3 % as an alternative intervention threshold. In nearly all instances, no rationale is provided other than that this was the threshold used by the National Osteoporosis Foundation of the US. Where undertaken, fixed probability thresholds have been determined from tests of discrimination (Hong Kong), health economic assessment (US, Switzerland), to match the prevalence of osteoporosis (China) or to align with pre-existing guidelines or reimbursement criteria (Japan, Poland). Age-dependent intervention thresholds, first developed by the National Osteoporosis Guideline Group (NOGG), are based on the rationale that if a woman with a prior fragility fracture is eligible for treatment, then, at any given age, a man or woman with the same fracture probability but in the absence of a previous fracture (i.e. at the ‘fracture threshold’) should also be eligible. Under current NOGG guidelines, based on age-dependent probability thresholds, inequalities in access to therapy arise especially at older ages (≥ 70 years) depending on the presence or absence of a prior fracture. An alternative threshold using a hybrid model reduces this disparity. The use of FRAX (fixed or age-dependent thresholds) as the gateway to assessment identifies individuals at high risk more effectively than the use of BMD. However, the setting of intervention thresholds needs to be country-specific. PMID:27465509
Imprecise Probability Methods for Weapons UQ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Vander Wiel, Scott Alan
Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.
Bayesian analysis of the astrobiological implications of life’s early emergence on Earth
Spiegel, David S.; Turner, Edwin L.
2012-01-01
Life arose on Earth sometime in the first few hundred million years after the young planet had cooled to the point that it could support water-based organisms on its surface. The early emergence of life on Earth has been taken as evidence that the probability of abiogenesis is high, if starting from young Earth-like conditions. We revisit this argument quantitatively in a Bayesian statistical framework. By constructing a simple model of the probability of abiogenesis, we calculate a Bayesian estimate of its posterior probability, given the data that life emerged fairly early in Earth’s history and that, billions of years later, curious creatures noted this fact and considered its implications. We find that, given only this very limited empirical information, the choice of Bayesian prior for the abiogenesis probability parameter has a dominant influence on the computed posterior probability. Although terrestrial life's early emergence provides evidence that life might be abundant in the universe if early-Earth-like conditions are common, the evidence is inconclusive and indeed is consistent with an arbitrarily low intrinsic probability of abiogenesis for plausible uninformative priors. Finding a single case of life arising independently of our lineage (on Earth, elsewhere in the solar system, or on an extrasolar planet) would provide much stronger evidence that abiogenesis is not extremely rare in the universe. PMID:22198766
Bayesian analysis of the astrobiological implications of life's early emergence on Earth.
Spiegel, David S; Turner, Edwin L
2012-01-10
Life arose on Earth sometime in the first few hundred million years after the young planet had cooled to the point that it could support water-based organisms on its surface. The early emergence of life on Earth has been taken as evidence that the probability of abiogenesis is high, if starting from young Earth-like conditions. We revisit this argument quantitatively in a Bayesian statistical framework. By constructing a simple model of the probability of abiogenesis, we calculate a Bayesian estimate of its posterior probability, given the data that life emerged fairly early in Earth's history and that, billions of years later, curious creatures noted this fact and considered its implications. We find that, given only this very limited empirical information, the choice of Bayesian prior for the abiogenesis probability parameter has a dominant influence on the computed posterior probability. Although terrestrial life's early emergence provides evidence that life might be abundant in the universe if early-Earth-like conditions are common, the evidence is inconclusive and indeed is consistent with an arbitrarily low intrinsic probability of abiogenesis for plausible uninformative priors. Finding a single case of life arising independently of our lineage (on Earth, elsewhere in the solar system, or on an extrasolar planet) would provide much stronger evidence that abiogenesis is not extremely rare in the universe.
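The prior-dominance point can be reproduced with a toy calculation: model abiogenesis as a Poisson process with unknown rate lambda, condition on at least one event within the first few hundred Myr of habitability, and compare posteriors under different "uninformative" priors. The numbers and the simplified likelihood below are illustrative only; the paper's model additionally conditions on the evolution of observers.

```python
import numpy as np

# Grid over the abiogenesis rate lambda (events per Gyr); the grid range is itself a prior choice
lam = np.logspace(-12, 3, 20000)
dlam = np.gradient(lam)                          # integration weights on the non-uniform grid
dt_emerge = 0.5                                  # life appeared within ~0.5 Gyr of habitability (illustrative)
likelihood = 1 - np.exp(-lam * dt_emerge)        # P(at least one abiogenesis event by dt_emerge | lambda)

for name, prior in [("uniform in lambda", np.ones_like(lam)),
                    ("uniform in log(lambda)", 1.0 / lam)]:
    post = prior * likelihood
    post /= np.sum(post * dlam)
    p_low = np.sum((post * dlam)[lam < 1.0])     # posterior P(lambda < 1 event per Gyr)
    print(f"{name:>22s}: P(lambda < 1/Gyr | early emergence) = {p_low:.2e}")
```

The two priors give answers differing by orders of magnitude from the same data, which is the qualitative behaviour the abstract describes.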
A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area
NASA Astrophysics Data System (ADS)
Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto
2015-04-01
In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. Usual PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions in a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA then results from combining simulations considering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of such a natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological input values are chosen by using a stratified sampling method. This procedure allows for quantifying hazard without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of the tephra fall PHA and its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei, in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained show that PHA accounting for the whole natural variability is consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but it is of a smaller extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study, but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
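The central claim, that Hebbian updates with presynaptic competition drive weights toward forward transition probabilities, can be checked numerically: generate a state sequence from a known Markov chain, apply a correlation-style update between the neuron coding the previous state and the neuron coding the current state, and normalize each presynaptic neuron's outgoing weights. The chain, learning rate and normalization rule below are simplifications of the models analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.1, 0.6, 0.3],          # true forward transition probabilities P(next | current)
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])
n_states, eta, T = 3, 0.01, 100_000

# Simulate the state sequence
states = [0]
for _ in range(T):
    states.append(rng.choice(n_states, p=P[states[-1]]))

W = np.full((n_states, n_states), 1.0 / n_states)   # W[i, j]: synapse from neuron i (previous state) to j (next)
for prev, nxt in zip(states[:-1], states[1:]):
    pre = np.zeros(n_states); pre[prev] = 1.0
    post = np.zeros(n_states); post[nxt] = 1.0
    W += eta * np.outer(pre, post)                  # Hebbian correlation term
    W[prev] /= W[prev].sum()                        # presynaptic competition: outgoing weights sum to 1

print(np.round(W, 2))                               # rows approach the rows of P
```

Replacing the presynaptic normalization with a postsynaptic one (normalizing columns) is the analogous route to backward probabilities described in the abstract.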
A Brownian Bridge Movement Model to Track Mobile Targets
2016-09-01
breakout of Chinese forces in the South China Sea. Probability heat maps, depicting the probability of a target location at discrete times, are... achieve a higher probability of detection, it is more effective to have sensors cover a wider area at fewer discrete points in time than to have a... greater number of discrete looks using sensors covering smaller areas. Subject terms: Brownian bridge movement models, unmanned sensors
What is the correct cost functional for variational data assimilation?
NASA Astrophysics Data System (ADS)
Bröcker, Jochen
2018-03-01
Variational approaches to data assimilation, and weakly constrained four-dimensional variational assimilation (WC-4DVar) in particular, are important in the geosciences but also in other communities (often under different names). The cost functions and the resulting optimal trajectories may have a probabilistic interpretation, for instance by linking data assimilation with maximum a posteriori (MAP) estimation. This is possible in particular if the unknown trajectory is modelled as the solution of a stochastic differential equation (SDE), as is increasingly the case in weather forecasting and climate modelling. In this situation, the MAP estimator (or "most probable path" of the SDE) is obtained by minimising the Onsager-Machlup functional. Although this fact is well known, there seems to be some confusion in the literature, with the energy (or "least squares") functional sometimes claimed to yield the most probable path. The first aim of this paper is to address this confusion and show that the energy functional does not, in general, provide the most probable path. The second aim is to discuss the implications in practice. Although the mentioned results pertain to stochastic models in continuous time, they do have consequences in practice where SDE's are approximated by discrete time schemes. It turns out that using an approximation to the SDE and calculating its most probable path does not necessarily yield a good approximation to the most probable path of the SDE proper. This suggests that even in discrete time, a version of the Onsager-Machlup functional should be used, rather than the energy functional, at least if the solution is to be interpreted as a MAP estimator.
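For a diffusion with additive noise the two functionals discussed here differ by a divergence term; one standard statement (assuming constant scalar noise amplitude sigma and drift f, and leaving the observation terms aside) is:

```latex
% Model: dX_t = f(X_t)\,dt + \sigma\,dW_t on [0,T].
% Energy ("least squares") functional:
E[x] \;=\; \frac{1}{2\sigma^2}\int_0^T \bigl\|\dot{x}(t) - f\bigl(x(t)\bigr)\bigr\|^2 \,dt
% Onsager-Machlup functional, whose minimiser is the most probable path:
\mathrm{OM}[x] \;=\; \frac{1}{2\sigma^2}\int_0^T \bigl\|\dot{x}(t) - f\bigl(x(t)\bigr)\bigr\|^2 \,dt
\;+\; \frac{1}{2}\int_0^T \nabla\!\cdot f\bigl(x(t)\bigr)\,dt
```

The extra divergence term vanishes only for divergence-free drifts, which is why minimising the energy functional alone does not in general return the most probable path.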
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
... increase in the CPI-U for the prior FY (0.0 percent). Column F FY 2010 TC MAP Exp. Incl. DSH. This column... including DSH expenditures. Column G FY 2010 TC MAP Exp. Net of DSH. This column contains the amount of the States' actual FY 2010 total computable DSH expenditures. Column H FY 2010 TC MAP Exp. Net of DSH. This...
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems where sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in an easy format for the decision makers to interpret and is well suited to fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The approach transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip based algorithm and present simulation results.
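A stripped-down likelihood map can be built by evaluating, for every grid cell, the probability of each sensor's reading given a target at that cell, and adding log-likelihoods across sensors. The Gaussian range-sensor model and the sensor placement below are invented for illustration and are not the models used in the paper.

```python
import numpy as np

grid = np.stack(np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 100, 101)), axis=-1)  # (101, 101, 2) cell centres

# Hypothetical sensors: position, measured range to the target, and range noise (different modalities, different noise)
sensors = [((20.0, 30.0), 45.0, 3.0),
           ((80.0, 25.0), 40.0, 5.0),
           ((50.0, 90.0), 55.0, 8.0)]

log_lik = np.zeros(grid.shape[:2])
for (sx, sy), z, sigma in sensors:
    dist = np.linalg.norm(grid - np.array([sx, sy]), axis=-1)
    log_lik += -0.5 * ((z - dist) / sigma) ** 2 - np.log(sigma)   # Gaussian range likelihood, up to a constant

log_lik -= log_lik.max()
likelihood_map = np.exp(log_lik)
likelihood_map /= likelihood_map.sum()            # normalised map, ready to multiply with spatial priors (e.g. road masks)
print(np.unravel_index(likelihood_map.argmax(), likelihood_map.shape))
```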
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barrier, average width, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as x) knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function or Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (existing in traditional adjustment procedures based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper will propose the use of what we are calling Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) in the whole energy range from the thermal and resonance regions to the continuum, for all nuclear reaction models at these energies. Algorithms will be presented based on Monte-Carlo sampling and Markov chains. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions and to provide a framework for finding the global minimum if several local minima exist. Application to resolved resonance, unresolved resonance and continuum evaluation as well as multigroup cross section data assimilation will be presented.
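A bare-bones version of the sampling scheme sketched above: draw model parameters from the prior, weight each draw by its likelihood against pseudo-experimental data, and form posterior summaries from the weighted sample. The toy "model" and uncertainties below are placeholders for actual nuclear reaction models and experimental covariance matrices.

```python
import numpy as np

rng = np.random.default_rng(4)
energies = np.linspace(0.1, 5.0, 20)

def model(a, b):
    # Stand-in for a nuclear reaction model: cross section = a * exp(-b * E)
    return a * np.exp(-b * energies)

data = model(10.0, 0.6) * (1 + 0.05 * rng.standard_normal(energies.size))   # pseudo-experiment
sigma = 0.05 * np.abs(data)                                                 # assumed experimental uncertainties

# Prior: independent Gaussians around a first-guess parameter set
n_draws = 20_000
a = rng.normal(8.0, 2.0, n_draws)
b = rng.normal(0.5, 0.2, n_draws)

# Likelihood weight of each prior draw: exp(-chi^2 / 2)
chi2 = np.array([np.sum(((model(ai, bi) - data) / sigma) ** 2) for ai, bi in zip(a, b)])
w = np.exp(-(chi2 - chi2.min()) / 2)
w /= w.sum()

post_mean_a, post_mean_b = np.sum(w * a), np.sum(w * b)
print(post_mean_a, post_mean_b)        # posterior means; weighted covariances follow the same pattern
```

A GLS fit of the same toy problem would provide a cross-check of this weighted-sampling estimate, which mirrors the validation role the abstract assigns to BMC.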
Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies
NASA Astrophysics Data System (ADS)
Perez Hoyos, Isabel Cristina
The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to estimate the probability that an ecosystem is groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is a great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrates that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability that an ecosystem is groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA, to develop a systematic approach for the identification of GDEs and it is then applied in the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold-independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures.
The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
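As a rough illustration of the model comparison described above (not the study's code or data), the sketch below fits a random forest and a single regression tree to synthetic stand-ins for water-table-depth predictors and contrasts their held-out errors and variable importances; all variable names and values are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 2000
# Hypothetical 1-km grid-cell predictors: aridity index, elevation, NDVI-like signal.
X = rng.uniform(size=(n, 3))
wtd = 30 * X[:, 0] - 10 * X[:, 1] + 5 * np.sin(6 * X[:, 2]) + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, wtd, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
rt = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)

for name, model in [("random forest", rf), ("single tree", rt)]:
    err = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {err:.2f}")
print("RF variable importances:", rf.feature_importances_.round(2))
```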
Le, Quang A; Doctor, Jason N
2011-05-01
As quality-adjusted life years have become the standard metric in health economic evaluations, mapping health-profile or disease-specific measures onto preference-based measures to obtain quality-adjusted life years has become a solution when health utilities are not directly available. However, current mapping methods are limited in their predictive validity and reliability, and/or by other methodological issues. We employ probability theory together with a graphical model, called a Bayesian network, to convert health-profile measures into preference-based measures and to compare the results to those estimated with current mapping methods. A sample of 19,678 adults who completed both the 12-item Short Form Health Survey (SF-12v2) and EuroQoL 5D (EQ-5D) questionnaires from the 2003 Medical Expenditure Panel Survey was split into training and validation sets. Bayesian networks were constructed to explore the probabilistic relationships between each EQ-5D domain and the 12 items of the SF-12v2. The EQ-5D utility scores were estimated on the basis of the predicted probability of each response level of the 5 EQ-5D domains obtained from the Bayesian inference process, using the following methods: Monte Carlo simulation, expected utility, and most-likely probability. Results were then compared with current mapping methods including multinomial logistic regression, ordinary least squares, and censored least absolute deviations. The Bayesian networks consistently outperformed other mapping models in the overall sample (mean absolute error = 0.077, mean square error = 0.013, and overall R = 0.802), as well as across different age groups, numbers of chronic conditions, and ranges of the EQ-5D index. Bayesian networks provide a new, robust, and natural approach to mapping health status responses into health utility measures for health economic evaluations.
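A minimal sketch of the expected-utility and most-likely-probability steps described above, assuming hypothetical predicted level probabilities and illustrative (not official) utility decrements; it is not the authors' scoring code.

```python
import numpy as np

# Hypothetical predicted probabilities for the three response levels of each
# of the five EQ-5D-3L domains, as a Bayesian network might output for one
# respondent (rows: mobility, self-care, usual activities, pain, anxiety).
probs = np.array([
    [0.70, 0.25, 0.05],
    [0.85, 0.12, 0.03],
    [0.60, 0.30, 0.10],
    [0.50, 0.40, 0.10],
    [0.75, 0.20, 0.05],
])

# Illustrative (not official tariff) disutility decrements for levels 2 and 3.
decrements = np.array([
    [0.0, 0.07, 0.31],
    [0.0, 0.10, 0.21],
    [0.0, 0.04, 0.09],
    [0.0, 0.12, 0.39],
    [0.0, 0.07, 0.24],
])

# Expected-utility method: weight each level's decrement by its probability.
expected_utility = 1.0 - (probs * decrements).sum()
# Most-likely-probability method: take the modal level in each domain.
modal_utility = 1.0 - decrements[np.arange(5), probs.argmax(axis=1)].sum()
print(round(expected_utility, 3), round(modal_utility, 3))
```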
Mapping the biological condition of USA rivers and streams
We predicted the probable (pr) biological condition (BC) of ~5.4 million km of stream within the conterminous USA (CONUS). National maps of prBC could provide an important tool for prioritizing monitoring and restoration of streams. The USEPA uses a spatially balanced survey desi...
NASA Astrophysics Data System (ADS)
Bevilacqua, Andrea; Neri, Augusto; Bisson, Marina; Esposti Ongaro, Tomaso; Flandoli, Franco; Isaia, Roberto; Rosi, Mauro; Vitale, Stefano
2017-09-01
This study presents a new method for producing long-term hazard maps for pyroclastic density currents (PDC) originating at the Campi Flegrei caldera. The method is based on a doubly stochastic approach and is able to combine the uncertainty assessments on the spatial location of the volcanic vent, the size of the flow and the expected time of such an event. The results are obtained by using a Monte Carlo approach and adopting a simplified invasion model based on the box model integral approximation. Temporal assessments are modelled through a Cox-type process including self-excitement effects, based on the eruptive record of the last 15 kyr. Mean and percentile maps of PDC invasion probability are produced, exploring their sensitivity to some sources of uncertainty and to the effects of the dependence between PDC scales and the caldera sector where they originated. Conditional maps, representative of PDC originating inside limited zones of the caldera or of PDC with a limited range of scales, are also produced. Finally, the effect of assuming different time windows for the hazard estimates is explored, including the potential occurrence of a sequence of multiple events. Assuming that the last eruption of Monte Nuovo (A.D. 1538) marked the beginning of a new epoch of activity similar to the previous ones, results of the statistical analysis indicate a mean probability of PDC invasion above 5% in the next 50 years over almost the entire caldera (with a probability peak of 25% in the central part of the caldera). In contrast, probability values are reduced by a factor of about 3 if the entire eruptive record of the last 15 kyr is considered, i.e. including both eruptive epochs and quiescent periods.
Toward uniform probabilistic seismic hazard assessments for Southeast Asia
NASA Astrophysics Data System (ADS)
Chan, C. H.; Wang, Y.; Shi, X.; Ornthammarath, T.; Warnitchai, P.; Kosuwan, S.; Thant, M.; Nguyen, P. H.; Nguyen, L. M.; Solidum, R., Jr.; Irsyam, M.; Hidayati, S.; Sieh, K.
2017-12-01
Although most Southeast Asian countries have seismic hazard maps, differences in methodology and quality result in appreciable mismatches at national boundaries. We aim to conduct a uniform assessment across the region through standardized earthquake and fault databases, ground-shaking scenarios, and regional hazard maps. Our earthquake database contains earthquake parameters obtained from global and national seismic networks, harmonized by removal of duplicate events and the use of moment magnitude. Our active-fault database includes fault parameters from previous studies and from the databases implemented for national seismic hazard maps. Another crucial input for seismic hazard assessment is a proper evaluation of ground-shaking attenuation. Since few ground-motion prediction equations (GMPEs) have used local observations from this region, we evaluated attenuation by comparing instrumental observations and felt intensities for recent earthquakes with the ground shaking predicted by published GMPEs. We then incorporated the best-fitting GMPEs and site conditions into our seismic hazard assessments. Based on these databases and the selected GMPEs, we have constructed regional probabilistic seismic hazard maps. The assessment shows the highest seismic hazard levels near those faults with high slip rates, including the Sagaing Fault in central Myanmar, the Sumatran Fault in Sumatra, the Palu-Koro, Matano and Lawanopo Faults in Sulawesi, and the Philippine Fault across several islands of the Philippines. In addition, our assessment demonstrates the important fact that regions with low earthquake probability may well have a higher aggregate probability of future earthquakes, since they encompass much larger areas than the areas of high probability. The significant irony, then, is that in areas of low to moderate probability, where building codes usually provide less seismic resilience, seismic risk is likely to be greater. Infrastructural damage in East Malaysia during the 2015 Sabah earthquake offers a case in point.
Risk-targeted versus current seismic design maps for the conterminous United States
Luco, Nicolas; Ellingwood, Bruce R.; Hamburger, Ronald O.; Hooper, John D.; Kimball, Jeffrey K.; Kircher, Charles A.
2007-01-01
The probabilistic portions of the seismic design maps in the NEHRP Provisions (FEMA, 2003/2000/1997), and in the International Building Code (ICC, 2006/2003/2000) and ASCE Standard 7-05 (ASCE, 2005a), provide ground motion values from the USGS that have a 2% probability of being exceeded in 50 years. Under the assumption that the capacity against collapse of structures designed for these "uniform-hazard" ground motions is equal to, without uncertainty, the corresponding mapped value at the location of the structure, the probability of its collapse in 50 years is also uniform. This is not the case, however, when it is recognized that there is, in fact, uncertainty in the structural capacity. In that case, site-to-site variability in the shape of ground motion hazard curves results in a lack of uniformity. This paper explains the basis for proposed adjustments to the uniform-hazard portions of the seismic design maps currently in the NEHRP Provisions that result in uniform estimated collapse probability. For seismic design of nuclear facilities, analogous but specialized adjustments have recently been defined in ASCE Standard 43-05 (ASCE, 2005b). In support of the 2009 update of the NEHRP Provisions currently being conducted by the Building Seismic Safety Council (BSSC), herein we provide examples of the adjusted ground motions for a selected target collapse probability (or target risk). Relative to the probabilistic MCE ground motions currently in the NEHRP Provisions, the risk-targeted ground motions for design are smaller (by as much as about 30%) in the New Madrid Seismic Zone, near Charleston, South Carolina, and in the coastal region of Oregon, with relatively little (<15%) change almost everywhere else in the conterminous U.S.
Fire-probability maps for the Brazilian Amazonia
NASA Astrophysics Data System (ADS)
Cardoso, M.; Nobre, C.; Obregon, G.; Sampaio, G.
2009-04-01
Most fires in Amazonia result from the combination of climate and land-use factors. They occur mainly in the dry season and are used as an inexpensive tool for land clearing and management. However, their unintended consequences are an important concern. Fire emissions are the most important sources of greenhouse gases and aerosols in the region, accidental fires are a major threat to protected areas, and frequent fires may lead to permanent conversion of forest areas into savannas. Fire-activity models have thus become important tools for environmental analyses in Amazonia. They are used, for example, in warning systems for monitoring the risk of burnings in protected areas, to improve the description of biogeochemical cycles and vegetation composition in ecosystem models, and to help estimate the long-term potential for savannas in biome models. Previous modeling studies for the whole region were produced in units of satellite fire pixels, which complicates their direct use for environmental applications. By reinterpreting remote-sensing based data using a statistical approach, we were able to calibrate models for the whole region in units of probability, or chance of fires to occur. The application of these models for the years 2005 and 2006 provided maps of fire potential at 3-month and 0.25-deg resolution as a function of precipitation and distance from main roads. In both years, the performance of the resulting maps was better for the period July-September. During these months, most satellite-based fire observations were located in areas with a relatively high chance of fire, as determined by the modeled probability maps. In addition to reproducing reasonably well the areas with maximum fire activity as detected by remote sensing, the new results in units of probability are easier to apply than previous estimates from fire-pixel models.
Fire-probability maps for the Brazilian Amazonia
NASA Astrophysics Data System (ADS)
Cardoso, Manoel; Sampaio, Gilvan; Obregon, Guillermo; Nobre, Carlos
2010-05-01
Most fires in Amazonia result from the combination of climate and land-use factors. They occur mainly in the dry season and are used as an inexpensive tool for land clearing and management. However, their unintended consequences are an important concern. Fire emissions are the most important sources of greenhouse gases and aerosols in the region, accidental fires are a major threat to protected areas, and frequent fires may lead to permanent conversion of forest areas into savannas. Fire-activity models have thus become important tools for environmental analyses in Amazonia. They are used, for example, in warning systems for monitoring the risk of burnings in protected areas, to improve the description of biogeochemical cycles and vegetation composition in ecosystem models, and to help estimate the long-term potential for savannas in biome models. Previous modeling studies for the whole region were produced in units of satellite fire pixels, which complicates their direct use for environmental applications. By reinterpreting remote-sensing based data using a statistical approach, we were able to calibrate models for the whole region in units of probability, or chance of fires to occur. The application of these models for the years 2005 and 2006 provided maps of fire potential at 3-month and 0.25-deg resolution as a function of precipitation and distance from main roads. In both years, the performance of the resulting maps was better for the period July-September. During these months, most satellite-based fire observations were located in areas with a relatively high chance of fire, as determined by the modeled probability maps. In addition to reproducing reasonably well the areas with maximum fire activity as detected by remote sensing, the new results in units of probability are easier to apply than previous estimates from fire-pixel models.
Link Maps and Map Meetings: Scaffolding Student Learning
ERIC Educational Resources Information Center
Lindstrom, Christine; Sharma, Manjula D.
2009-01-01
With student numbers decreasing and traditional teaching methods having been found inefficient, it is widely accepted that alternative teaching methods need to be explored in tertiary physics education. In 2006 a different teaching environment was offered to 244 first year students with little or no prior formal instruction in physics. Students…
The Impact of Superintendent Support for Curriculum Mapping on Principals' Efficacious Use of Maps
ERIC Educational Resources Information Center
Danna, Stephen; Spatt, Spatt
2013-01-01
Pressures on school leaders to reform are pervasive within the United States. Prior studies show that superintendents who provide clear expectations and goals, collaborate, ensure quality professional development, and attend to curriculum alignment develop effective building leaders (Marzano & Waters, 2009; Wahlstrom, Louis, Leithwood, &…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Kurt Vedros; Robert Youngblood
This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be a false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and "false" would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = √10 × 10⁻⁶/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10⁻⁶/yr. If the parameters were at their baseline values and ΔCDF > 10⁻⁶/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10⁻⁶/yr but that time history's ΔCDF < 10⁻⁶/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values, and for three sensitivity cases in which the number of FOTP demands was reduced, along with the Birnbaum importance of the FOTP.
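The counting logic can be illustrated with a toy simulation (not the study's PRA model or priors): each simulated time history yields a ΔCDF estimate that is compared with the 10⁻⁶/yr threshold, and false positives and negatives are tallied; the lognormal spread and baseline values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
THRESHOLD = 1e-6          # ΔCDF indication threshold (per year)
N_HISTORIES = 100_000

def simulate_delta_cdf(true_mean, spread=0.5):
    """Toy stand-in for the estimated ΔCDF of each simulated time history."""
    return rng.lognormal(mean=np.log(true_mean), sigma=spread, size=N_HISTORIES)

# Case 1: parameters at baseline (true indication green, ΔCDF well below threshold).
baseline = simulate_delta_cdf(2e-7)
false_positive = np.mean(baseline > THRESHOLD)

# Case 2: a component degraded so the true indication is (mid) white.
degraded = simulate_delta_cdf(np.sqrt(10) * 1e-6)
false_negative = np.mean(degraded < THRESHOLD)

print(f"false positive probability ~ {false_positive:.4f}")
print(f"false negative probability ~ {false_negative:.4f}")
```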
Fang, Ruogu; Karlsson, Kolbeinn; Chen, Tsuhan; Sanelli, Pina C.
2014-01-01
Blood-brain-barrier permeability (BBBP) measurements extracted from perfusion computed tomography (PCT) using the Patlak model can be a valuable indicator to predict hemorrhagic transformation in patients with acute stroke. Unfortunately, the standard Patlak-model-based PCT requires excessive radiation exposure, which has raised attention to radiation safety. Minimizing radiation dose is of high value in clinical practice but can degrade image quality because of the severe noise introduced. The purpose of this work is to construct high quality BBBP maps from low-dose PCT data by using the brain structural similarity between different individuals and the relations between the high- and low-dose maps. The proposed sparse high-dose induced (shd-Patlak) model works by building a high-dose induced prior for the Patlak model with a set of location adaptive dictionaries, followed by an optimized estimation of the BBBP map with the prior-regularized Patlak model. Evaluation with simulated low-dose clinical brain PCT datasets clearly demonstrates that the shd-Patlak model can achieve more significant gains than the standard Patlak model, with improved visual quality, higher fidelity to the gold standard and more accurate details for clinical analysis. PMID:24200529
NASA Astrophysics Data System (ADS)
Fytilis, N.; Rizzo, D. M.
2012-12-01
Environmental managers are increasingly required to forecast the long-term effects and the resilience or vulnerability of biophysical systems to human-generated stresses. Mitigation strategies for hydrological and environmental systems need to be assessed in the presence of uncertainty. An important aspect of such complex systems is the assessment of variable uncertainty on the model response outputs. We develop a new classification tool that couples a Naïve Bayesian Classifier with a modified Kohonen Self-Organizing Map to tackle this challenge. For proof-of-concept, we use rapid geomorphic and reach-scale habitat assessment data from over 2500 Vermont stream reaches (~1371 stream miles) assessed by the Vermont Agency of Natural Resources (VTANR). In addition, the Vermont Department of Environmental Conservation (VTDEC) estimates stream habitat biodiversity indices (macro-invertebrates and fish) and a variety of water quality data. Our approach fully utilizes the existing VTANR and VTDEC data sets to improve classification of stream-reach habitat and biological integrity. The combined SOM-Naïve Bayesian architecture is sufficiently flexible to allow for continual updates and increased accuracy associated with acquiring new data. The Kohonen Self-Organizing Map (SOM) is an unsupervised artificial neural network that autonomously analyzes properties inherent in a given set of data. It is typically used to cluster data vectors into similar categories when a priori classes do not exist. The ability of the SOM to convert nonlinear, high-dimensional data to some user-defined lower dimension and to mine large amounts of data types (i.e., discrete or continuous, biological or geomorphic data) makes it ideal for characterizing the sensitivity of river networks in a variety of contexts. The procedure is data-driven, and therefore does not require the development of site-specific, process-based stream classification models, or sets of if-then-else rules associated with expert systems. This has the potential to save time and resources, while enabling a truly adaptive management approach using existing knowledge (expressed as prior probabilities) and new information (expressed as likelihood functions) to update estimates (i.e., in this case, improved stream classifications expressed as posterior probabilities). The distribution parameters of these posterior probabilities are used to quantify uncertainty associated with environmental data. Since classification plays a leading role in the future development of data-enabled science and engineering, such a computational tool is applicable to a variety of engineering applications. The ability of the new classification neural network to characterize streams with high environmental risk is essential for a proactive, adaptive watershed management approach.
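A minimal sketch, using assumed synthetic data, of the coupled architecture described above: a small numpy self-organizing map provides a discretized descriptor that a naive Bayes classifier turns into posterior class probabilities. It is not the authors' implementation, and all feature names are hypothetical.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Hypothetical reach-scale features: channel slope, incision ratio, % fine sediment.
X = rng.normal(size=(500, 3))
habitat_class = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)

# Minimal 1-D self-organizing map: each unit's weight vector is pulled toward
# the data, with a Gaussian neighbourhood that shrinks over time.
n_units, sigma, lr = 10, 2.0, 0.5
weights = rng.normal(size=(n_units, X.shape[1]))
for t in range(3000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))      # best-matching unit
    decay = np.exp(-t / 3000)
    h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * (sigma * decay) ** 2))
    weights += (lr * decay) * h[:, None] * (x - weights)

# Use the SOM unit index as a discretized descriptor, then let a naive Bayes
# classifier turn it (plus the raw features) into posterior class probabilities.
units = np.argmin(np.linalg.norm(X[:, None, :] - weights[None], axis=2), axis=1)
features = np.column_stack([X, units])
nb = GaussianNB().fit(features, habitat_class)
posterior = nb.predict_proba(features)
print("mean posterior of class 1:", posterior[:, 1].mean().round(3))
```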
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
2MASS wide-field extinction maps. V. Corona Australis
NASA Astrophysics Data System (ADS)
Alves, João; Lombardi, Marco; Lada, Charles J.
2014-05-01
We present a near-infrared extinction map of a large region (~870 deg²) covering the isolated Corona Australis complex of molecular clouds. We reach a 1-σ error of 0.02 mag in the K-band extinction with a resolution of 3 arcmin over the entire map. We find that the Corona Australis cloud is about three times as large as revealed by previous CO and dust emission surveys. The cloud consists of a 45 pc long complex of filamentary structure from the well known star-forming Western end (the head, N ≥ 10²³ cm⁻²) to the diffuse Eastern end (the tail, N ≤ 10²¹ cm⁻²). Remarkably, about two thirds of the complex, both in size and mass, lie beneath A_V ~ 1 mag. We find that the probability density function (PDF) of the cloud cannot be described by a single log-normal function. Similar to prior studies, we found a significant excess at high column densities, but a log-normal + power-law tail fit does not work well at low column densities. We show that at low column densities near the peak of the observed PDF, both the amplitude and shape of the PDF are dominated by noise in the extinction measurements, making it impractical to derive the intrinsic cloud PDF below A_K < 0.15 mag. Above A_K ~ 0.15 mag, essentially the molecular component of the cloud, the PDF appears to be best described by a power law with index -3, but it could also be described as the tail of a broad, relatively low-amplitude log-normal PDF that peaks at very low column densities. FITS files of the extinction maps are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A18
Planetary Geology and Geophysics Program
NASA Technical Reports Server (NTRS)
McGill, George E.
2004-01-01
Geological mapping and topical studies, primarily in the southern Acidalia Planitia/Cydonia Mensae region of Mars, are presented. The overall objective was to understand geologic processes and crustal history in the northern lowland in order to assess the probability that an ocean once existed in this region. The major deliverable is a block of 6 1:500,000-scale geologic maps that will be published in 2004 as a single map at 1:1,000,000 scale along with extensive descriptive and interpretive text. A major issue addressed by the mapping was the relative ages of the extensive plains of Acidalia Planitia and the knobs and mesas of Cydonia Mensae. The mapping results clearly favor a younger age for the plains. Topical studies included a preliminary analysis of the very abundant small domes and cones to assess the possibility that their origins could be determined by detailed mapping and remote-sensing analysis. We also tested the validity of putative shorelines by using GIS to co-register full-resolution MOLA altimetry data and Viking images with these shorelines plotted on them. Of the 3 proposed shorelines in this area, one is probably valid, one is definitely not valid, and the third is apparently 2 shorelines closely spaced in elevation. Publications supported entirely or in part by this grant are included.
Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.
2014-12-01
Data assimilation is one of the ubiquitous and computationally hard problems in the Earth Sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QAC) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offer a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to a spherical Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: (1) emulating the nonlinear model and/or observation function using radial basis functions (RBF) or Chebyshev polynomials; (2) truncating a Taylor series around each RBF kernel; (3) reducing the Taylor polynomial to a quadratic using ancilla gadgets; (4) mapping the real-valued quadratic to a fixed-precision binary quadratic; and (5) mapping the fully coupled binary quadratic to a partially coupled spherical Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qubits (with 1024- and 2048-qubit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QACs solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation as the size of these computers grows in the coming years.
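A toy illustration of the QUBO form that the compilation targets (steps 4-5 above), not the authors' D-Wave pipeline: a one-parameter least-squares objective is mapped to fixed-precision bits, collected into a QUBO matrix, and solved by brute force.

```python
import itertools
import numpy as np

# Toy illustration of the QUBO form min_b  b^T Q b  with b in {0,1}^n.
# A real-valued least-squares problem in one variable, theta ~ 0.6, is mapped
# to 3 fixed-precision bits: theta = 0.5*b0 + 0.25*b1 + 0.125*b2.
target = 0.6
weights = np.array([0.5, 0.25, 0.125])

# (w.b - target)^2 expands to a quadratic in the bits; collect it into Q
# (using b_i^2 = b_i for binary variables, and dropping the constant term).
Q = np.outer(weights, weights)
Q[np.diag_indices(3)] -= 2 * target * weights

best = min(itertools.product([0, 1], repeat=3),
           key=lambda b: np.array(b) @ Q @ np.array(b))
print("best bits:", best, "-> theta =", float(weights @ np.array(best)))
```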
Mentoring for NHS doctors: perceived benefits across the personal–professional interface
Steven, A; Oxley, J; Fleming, WG
2008-01-01
Summary. Objective: To investigate NHS doctors' perceived benefits of being involved in mentoring schemes and to explore the overlaps and relationships between areas of benefit. Design: Extended qualitative analysis of a multi-site interview study following an interpretivist approach. Setting: Six NHS mentoring schemes across England. Main outcome measures: Perceived benefits. Results: While primary analysis resulted in lists of perceived benefits, the extended analysis revealed three overarching areas: professional practice, personal well-being and development. Benefits appear to go beyond a doctor's professional role to cross the personal–professional interface. Problem solving and change management seem to be key processes underpinning the raft of personal and professional benefits reported. A conceptual map was developed to depict these areas and relationships. In addition, secondary analysis suggests that by benefiting one area mentoring may lead to consequential benefits in others. Conclusions: Prior research into mentoring has mainly taken place in a single health care sector. This multi-site study suggests that the perceived benefits of involvement in mentoring may cross the personal–professional interface and may override organizational differences. Furthermore, the map developed highlights the complex relationships which exist between the three areas of professional practice, personal well-being, and personal and professional development. Given the consistency of findings across several studies, it seems probable that organizations would be strengthened by doctors who feel more satisfied and confident in their professional roles as a result of participation in mentoring. Mentoring may have the potential to take us beyond individual limits to greater benefits, and the conceptual map may offer a starting point for the development of outcome criteria and evaluation tools for mentoring schemes. PMID:19029356
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
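As a small companion example (not part of the original lecture materials), the sketch below simulates memoryless random arrivals and checks that waiting times are exponential while counts per fixed window are approximately Poisson; the rate parameter is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
rate = 3.0                      # arbitrary arrival rate (events per unit time)
waits = rng.exponential(1.0 / rate, size=100_000)
arrival_times = np.cumsum(waits)

# Waiting times of a memoryless (Markovian) arrival process are exponential...
print("mean wait:", waits.mean().round(3), "expected:", round(1 / rate, 3))

# ...and the number of arrivals in a fixed window is Poisson distributed.
window_counts = np.histogram(arrival_times,
                             bins=np.arange(0, arrival_times[-1], 1.0))[0]
print("count mean:", window_counts.mean().round(2),
      "count variance:", window_counts.var().round(2),
      "(Poisson: mean and variance both close to the rate)")
```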
ERIC Educational Resources Information Center
Karbon, Jacqueline C.
Using a semantic mapping technique for vocabulary instruction, a study explored how children of diverse groups bring different cultural backgrounds and prior knowledge to tasks involved in learning new words. The study was conducted in three sixth-grade classrooms--one containing rural Native American (especially Menominee) children, another…
Anderson, Becci; Fuller, Tracy
2014-01-01
In July 2013, the USGS National Geospatial Program began producing new topographic maps for Alaska, providing a new map series for the state known as US Topo. Prior to the start of US Topo map production in Alaska, the most detailed statewide USGS topographic maps were 15-minute 1:63,360-scale maps, with their original production often dating back nearly fifty years. The new 7.5-minute digital maps are created at 1:25,000 map scale, and show greatly increased topographic detail when compared to the older maps. The map scale and data specifications were selected based on significant outreach to various map user groups in Alaska. This multi-year mapping initiative will vastly enhance the base topographic maps for Alaska and is possible because of improvements to key digital map datasets in the state. The new maps and data are beneficial in high priority applications such as safety, planning, research and resource management. New mapping will support science applications throughout the state and provide updated maps for parks, recreation lands and villages.
Cochlea segmentation using iterated random walks with shape prior
NASA Astrophysics Data System (ADS)
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Vera, Sergio; Ceresa, Mario; González Ballester, Miguel Ángel
2016-03-01
Cochlear implants can restore hearing to deaf or partially deaf patients. In order to plan the intervention, a model is built from accurate cochlea segmentations of high-resolution µCT images and then adapted to a patient-specific model. Thus, a precise segmentation is required to build such a model. We propose a new framework for the segmentation of µCT cochlear images using random walks, in which a region term is combined with a distance shape prior weighted by a confidence map to adjust its influence according to the strength of the image contour. The region term can then take advantage of the high contrast between background and foreground, while the distance prior guides the segmentation to the exterior of the cochlea as well as to less contrasted regions inside the cochlea. Finally, a refinement is performed that preserves the topology, using a topological method and an error control map to prevent boundary leakage. We tested the proposed approach on 10 datasets and compared it with the latest techniques combining random walks and priors. The experiments suggest that this method gives promising results for cochlea segmentation.
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold to divide the crude estimated transmission map into different areas and applies different guided filtering to the different areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while the average computation time of the new algorithm is around 40% of the latter and the detection ability for UAV images in fog and haze is improved effectively.
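For orientation, here is a minimal sketch of the dark-channel-prior transmission estimate that such algorithms start from; the patch size, omega and atmospheric light are assumed values, and the paper's edge-based area splitting and guided filtering are not reproduced.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel prior: per-pixel minimum over RGB, then a local minimum filter."""
    min_rgb = image.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def estimate_transmission(image, atmosphere, omega=0.95, patch=15):
    """Crude transmission estimate t = 1 - omega * dark_channel(I / A)."""
    normalized = image / atmosphere[None, None, :]
    return 1.0 - omega * dark_channel(normalized, patch)

# Toy usage on a random "hazy" image; a real pipeline would then refine this
# crude map (e.g., with guided filtering) before recovering the scene radiance.
rng = np.random.default_rng(0)
hazy = rng.uniform(0.4, 1.0, size=(64, 64, 3))
A = np.array([0.95, 0.96, 0.97])          # assumed atmospheric light
t = estimate_transmission(hazy, A)
print(t.shape, round(float(t.min()), 3), round(float(t.max()), 3))
```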
NASA Astrophysics Data System (ADS)
Choy, S.; Ahmed, H.; Wheatley, A.; McCormack, D. G.; Parraga, G.
2010-03-01
We developed image analysis tools to evaluate spatial and temporal 3He magnetic resonance imaging (MRI) ventilation in asthma and cystic fibrosis. We also developed temporal ventilation probability maps to provide a way to describe and quantify ventilation heterogeneity over time, as a way to test respiratory exacerbations or treatment predictions and to provide a discrete probability measurement of 3He ventilation defect persistence.
Gross, Eliza L.; Low, Dennis J.
2013-01-01
Logistic regression models were created to predict and map the probability of elevated arsenic concentrations in groundwater statewide in Pennsylvania and in three intrastate regions to further improve predictions for those three regions (glacial aquifer system, Gettysburg Basin, Newark Basin). Although the Pennsylvania and regional predictive models retained some different variables, they have common characteristics that can be grouped by (1) geologic and soils variables describing arsenic sources and mobilizers, (2) geochemical variables describing the geochemical environment of the groundwater, and (3) locally specific variables that are unique to each of the three regions studied and not applicable to statewide analysis. Maps of Pennsylvania and the three intrastate regions were produced that illustrate that areas most at risk are those with geology and soils capable of functioning as an arsenic source or mobilizer and geochemical groundwater conditions able to facilitate redox reactions. The models have limitations because they may not characterize areas that have localized controls on arsenic mobility. The probability maps associated with this report are intended for regional-scale use and may not be accurate for use at the field scale or when considering individual wells.
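A minimal sketch of the general approach, assuming entirely synthetic predictors (not the report's variables or data): fit a logistic regression on well-level observations, then map the predicted probability of elevated arsenic onto new locations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
# Hypothetical well-level predictors: a geologic source indicator, pH,
# and a redox-related geochemical score (all synthetic).
X = np.column_stack([
    rng.integers(0, 2, n),            # arsenic-bearing geologic unit (0/1)
    rng.normal(7.0, 0.8, n),          # groundwater pH
    rng.normal(0.0, 1.0, n),          # redox score
])
logit = -12 + 1.5 * X[:, 0] + 1.4 * X[:, 1] + 0.8 * X[:, 2]
elevated = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, elevated)
# "Mapping" step: predicted probability of elevated arsenic for new grid cells.
grid = np.array([[1, 7.5, 1.0], [0, 6.5, -0.5]])
print(model.predict_proba(grid)[:, 1].round(3))
```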
The role of photogeologic mapping in traverse planning: Lessons from DRATS 2010 activities
Skinner, James A.; Fortezzo, Corey M.
2013-01-01
We produced a 1:24,000 scale photogeologic map of the Desert Research and Technology Studies (DRATS) 2010 simulated lunar mission traverse area and surrounding environments located within the northeastern part of the San Francisco Volcanic Field (SFVF), north-central Arizona. To mimic an exploratory mission, we approached the region “blindly” by rejecting prior knowledge or preconceived notions of the regional geologic setting and focused instead only on image and topographic base maps that were intended to be equivalent to pre-cursor mission “orbital returns”. We used photogeologic mapping techniques equivalent to those employed during the construction of modern planetary geologic maps. Based on image and topographic base maps, we identified 4 surficial units (talus, channel, dissected, and plains units), 5 volcanic units (older cone, younger cone, older flow, younger flow, and block field units), and 5 basement units (grey-toned mottled, red-toned platy, red-toned layered, light-toned slabby, and light-toned layered units). Comparison of our remote-based map units with published field-based map units indicates that the two techniques yield pervasively similar results of contrasting detail, with higher accuracies linked to remote-based units that have high topographic relief and tonal contrast relative to adjacent units. We list key scientific questions that remained after photogeologic mapping and prior to DRATS activities and identify 13 specific observations that the crew and science team would need to make in order to address those questions and refine the interpreted geologic context. We translated potential observations into 62 recommended sites for visitation and observation during the mission traverse. The production and use of a mission-specific photogeologic map for DRATS 2010 activities resulted in strategic and tactical recommendations regarding observational context and hypothesis tracking over the course of an exploratory mission.
Measuring novices' field mapping abilities using an in-class exercise based on expert task analysis
NASA Astrophysics Data System (ADS)
Caulkins, J. L.
2010-12-01
We are interested in developing a model of expert-like behavior for improving the teaching methods of undergraduate field geology. Our aim is to assist students in mastering the process of field mapping more efficiently and effectively and to improve their ability to think creatively in the field. To examine expert-mapping behavior, a cognitive task analysis was conducted with expert geologic mappers in an attempt to define the process of geologic mapping (i.e. to understand how experts carry out geological mapping). The task analysis indicates that expert mappers have a wealth of geologic scenarios at their disposal that they compare against examples seen in the field, experiences that most undergraduate mappers will not have had. While presenting students with many geological examples in class may increase their understanding of geologic processes, novices still struggle when presented with a novel field situation. Based on the task analysis, a short (45-minute) paper-map-based exercise was designed and tested with 14 pairs of 3rd year geology students. The exercise asks students to generate probable geologic models based on a series of four (4) data sets. Each data set represents a day’s worth of data; after the first “day,” new sheets simply include current and previously collected data (e.g. “Day 2” data set includes data from “Day 1” plus the new “Day 2” data). As the geologic complexity increases, students must adapt, reject or generate new geologic models in order to fit the growing data set. Preliminary results of the exercise indicate that students who produced more probable geologic models, and produced higher ratios of probable to improbable models, tended to go on to do better on the mapping exercises at the 3rd year field school. These results suggest that those students with more cognitively available geologic models may be more able to use these models in field settings than those who are unable to draw on these models for whatever reason. Giving students practice at generating geologic models to explain data may be useful in preparing our students for field mapping exercises.
Pluskiewicz, W; Adamczyk, P; Czekajło, A; Grzeszczak, W; Drozdzowska, B
2015-12-01
In 770 postmenopausal women, the fracture incidence during a 4-year follow-up was analyzed in relation to the fracture probability (FRAX risk assessment tool) and risk (Garvan risk calculator) predicted at baseline. Incident fractures occurred in 62 subjects, with a higher prevalence in high-risk subgroups. Prior fracture, rheumatoid arthritis, femoral neck T-score, and falls were independently associated with fracture incidence. The aim of the study was to analyze the incidence of fractures during a 4-year follow-up in relation to the baseline fracture probability and risk. Enrolled in the study were 770 postmenopausal women with a mean age of 65.7 ± 7.3 years. Bone mineral density (BMD) at the proximal femur, clinical data, and fracture probability using the FRAX tool and risk using the Garvan calculator were determined. Each subject was asked yearly by phone call about the incidence of fracture during the follow-up period. Of the 770 women, 62 had a fracture during follow-up, and 46 had a major fracture. At baseline, BMD was significantly lower, and fracture probability and fracture risk were significantly higher, in women who had a fracture. Among women with a major fracture, the percentage with a high baseline fracture probability (>10 %) was significantly higher than among those without a fracture (p < 0.01). Fracture incidence during follow-up was significantly higher among women with a high baseline fracture probability (12.7 % vs. 5.2 %) and a high fracture risk (9.2 % vs. 5.3 %), so that the "fracture-free survival" curves were significantly different (p < 0.05). The number of clinical risk factors noted at baseline was significantly associated with fracture incidence (chi-squared = 20.82, p < 0.01). Prior fracture, rheumatoid arthritis, and femoral neck T-score were identified as significant risk factors for major fractures (for any fractures, the influence of falls was also significant). During follow-up, fracture incidence was predicted by baseline fracture probability (FRAX risk assessment tool) and risk (Garvan risk calculator). The number of clinical risk factors, as well as prior fracture, rheumatoid arthritis, femoral neck T-score, and falls, were independently associated with an increased incidence of fractures.
Neural correlates of the divergence of instrumental probability distributions.
Liljeholm, Mimi; Wang, Shuo; Zhang, June; O'Doherty, John P
2013-07-24
Flexible action selection requires knowledge about how alternative actions impact the environment: a "cognitive map" of instrumental contingencies. Reinforcement learning theories formalize this map as a set of stochastic relationships between actions and states, such that for any given action considered in a current state, a probability distribution is specified over possible outcome states. Here, we show that activity in the human inferior parietal lobule correlates with the divergence of such outcome distributions (a measure that reflects whether discrimination between alternative actions increases the controllability of the future) and, further, that this effect is dissociable from those of other information theoretic and motivational variables, such as outcome entropy, action values, and outcome utilities. Our results suggest that, although ultimately combined with reward estimates to generate action values, outcome probability distributions associated with alternative actions may be contrasted independently of valence computations, to narrow the scope of the action selection problem.
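One simple way to quantify the divergence between the outcome distributions of two actions is the Jensen-Shannon divergence; the sketch below uses it for illustration (the paper's exact divergence measure may differ), with hypothetical outcome probabilities.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Outcome-state distributions for two alternative actions (hypothetical numbers).
p_action_a = np.array([0.70, 0.20, 0.10])   # P(next state | action A)
p_action_b = np.array([0.10, 0.30, 0.60])   # P(next state | action B)

# Jensen-Shannon divergence: 0 when the actions lead to the same outcomes,
# larger when choosing between them matters more for controlling the future.
divergence = jensenshannon(p_action_a, p_action_b, base=2) ** 2
entropy_a = -(p_action_a * np.log2(p_action_a)).sum()   # outcome entropy, for contrast
print(f"JS divergence = {divergence:.3f}, outcome entropy(A) = {entropy_a:.3f}")
```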
Statistical Inference in Hidden Markov Models Using k-Segment Constraints
Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher
2016-01-01
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
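For reference, a compact sketch of the standard Viterbi recursion that yields the most-probable (MAP) hidden state sequence; the transition and emission probabilities are illustrative, and the paper's k-segment recursions are not reproduced.

```python
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """Most-probable (MAP) hidden state sequence for a discrete-emission HMM."""
    n_states, T = log_trans.shape[0], len(obs)
    delta = np.zeros((T, n_states))
    back = np.zeros((T, n_states), dtype=int)
    delta[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans          # prev state x next state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two hidden states, three possible symbols (illustrative probabilities).
start = np.log([0.6, 0.4])
trans = np.log([[0.8, 0.2], [0.3, 0.7]])
emit = np.log([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 0, 2, 2, 1], start, trans, emit))
```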
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings, which represent a significant portion of the complexity of an error-correction-code decoder for high-order constellations. A recent implementation of the algorithm in hardware yielded an 8-percent reduction in overall area relative to the prior design.
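A small sketch of one common symbol-to-bit likelihood mapping, the max-log approximation for a Gray-mapped 4-point constellation; it illustrates the kind of computation involved and is not necessarily the algorithm of the reported innovation.

```python
import numpy as np

# Gray-mapped 4-point constellation: symbol index -> (bit1, bit0).
bits_per_symbol = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])

def bit_llrs_from_symbol_loglikes(symbol_loglikes):
    """Max-log symbol-to-bit mapping: LLR(b) = max over symbols with b=0
    minus max over symbols with b=1, of the symbol log-likelihoods."""
    llrs = []
    for b in range(bits_per_symbol.shape[1]):
        ll0 = symbol_loglikes[bits_per_symbol[:, b] == 0].max()
        ll1 = symbol_loglikes[bits_per_symbol[:, b] == 1].max()
        llrs.append(ll0 - ll1)
    return np.array(llrs)

# Example: the receiver favours symbol 2 (bits 1,1), so both LLRs come out negative.
print(bit_llrs_from_symbol_loglikes(np.array([-4.0, -2.5, -0.3, -3.0])))
```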
Modeling and Analysis of Information Product Maps
ERIC Educational Resources Information Center
Heien, Christopher Harris
2012-01-01
Information Product Maps are visual diagrams used to represent the inputs, processing, and outputs of data within an Information Manufacturing System. A data unit, drawn as an edge, symbolizes a grouping of raw data as it travels through this system. Processes, drawn as vertices, transform each data unit input into various forms prior to delivery…
Group-regularized individual prediction: theory and application to pain.
Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D
2017-01-15
Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker (in this case, the Neurologic Pain Signature, NPS) improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
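A minimal sketch of the precision-weighting idea behind GRIP, assuming synthetic single-trial predictions: the population-level and individual predictions are combined with weights proportional to their inverse error variances. In practice these weights would come from cross-validation; this toy computes them against the true outcomes purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical single-trial pain predictions for one subject from two sources:
# a population-level biomarker pattern and the subject's own cross-validated map.
true_pain = rng.uniform(1, 10, size=80)
pred_population = true_pain + rng.normal(0, 1.5, 80)   # stable group-level proxy
pred_individual = true_pain + rng.normal(0, 2.5, 80)   # idiographic, noisier (little data)

# Weight each source by its inverse error variance, then renormalize the weights.
w_pop = 1.0 / np.var(pred_population - true_pain)
w_ind = 1.0 / np.var(pred_individual - true_pain)
combined = (w_pop * pred_population + w_ind * pred_individual) / (w_pop + w_ind)

for name, pred in [("population", pred_population),
                   ("individual", pred_individual),
                   ("combined", combined)]:
    print(name, "RMSE:", np.sqrt(np.mean((pred - true_pain) ** 2)).round(2))
```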
Dexter, Franklin; De Oliveira, Gildasio S; McCarthy, Robert J
2016-01-15
We surveyed anesthesiology residents to evaluate the predictive effect of prior residence on desired location for future practice opportunities. One thousand five hundred United States anesthesiology residents were invited to participate. One question asked whether they intend to enter academic practice when they graduate from their residency/fellowship training; the analysis categorized the responses into "surely yes" and "probably" versus "even," "probably not," and "surely no." A second question asked: "After finishing your residency/fellowship training, are you planning to look seriously (e.g., interview) at jobs located more than a 2-hour drive from a location where you or your family (e.g., spouse or partner/significant other) have lived previously?" Responses were categorized into "very probably" and "somewhat probably" versus "somewhat improbably" and "not probable." Other questions explored predictors of the relationships, quantified using the area under the receiver operating characteristic curve (AUC) ± its standard error. Among the 696 respondents, 36.9% (N = 256) would "probably" consider an academic practice. Fewer than half of those (P < 0.0001) would "very probably" consider a distant location (31.6%, 99% CI 24.4%-39.6%). Respondents with prior formal research training (e.g., PhD or Master's) had greater interest in academic practice at a distant location (AUC 0.63 ± 0.03, P = 0.0002). Except among respondents with formal research training, a good question to ask a job applicant is whether the applicant or the applicant's family has previously lived in the area.
Prior probability and feature predictability interactively bias perceptual decisions
Dunovan, Kyle E.; Tremel, Joshua J.; Wheeler, Mark E.
2014-01-01
Anticipating a forthcoming sensory experience facilitates perception for expected stimuli but also hinders perception for less likely alternatives. Recent neuroimaging studies suggest that expectation biases arise from feature-level predictions that enhance early sensory representations and facilitate evidence accumulation for contextually probable stimuli while suppressing alternatives. Reasonably then, the extent to which prior knowledge biases subsequent sensory processing should depend on the precision of expectations at the feature level as well as the degree to which expected features match those of an observed stimulus. In the present study we investigated how these two sources of uncertainty modulated pre- and post-stimulus bias mechanisms in the drift-diffusion model during a probabilistic face/house discrimination task. We tested several plausible models of choice bias, concluding that predictive cues led to a bias in both the starting-point and rate of evidence accumulation favoring the more probable stimulus category. We further tested the hypotheses that prior bias in the starting-point was conditional on the feature-level uncertainty of category expectations and that dynamic bias in the drift-rate was modulated by the match between expected and observed stimulus features. Starting-point estimates suggested that subjects formed a constant prior bias in favor of the face category, which exhibits less feature-level variability, that was strengthened or weakened by trial-wise predictive cues. Furthermore, we found that the gain on face/house evidence was increased for stimuli with less ambiguous features and that this relationship was enhanced by valid category expectations. These findings offer new evidence that bridges psychological models of decision-making with recent predictive coding theories of perception. PMID:24978303
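A minimal simulation of the two bias mechanisms tested above, using assumed parameter values rather than the fitted ones: a predictive cue is modeled as a shift in the starting point plus an increase in the drift rate toward the expected boundary.

```python
import numpy as np

rng = np.random.default_rng(11)

def p_upper(drift, start_bias=0.0, threshold=1.0, noise=1.0, dt=0.001,
            n_trials=5000, max_steps=20000):
    """Fraction of drift-diffusion trials absorbed at the upper boundary.
    start_bias shifts the starting point toward the upper boundary (in units
    of the threshold); drift biases the rate of evidence accumulation."""
    x = np.full(n_trials, start_bias * threshold)
    done = np.zeros(n_trials, dtype=bool)
    hit_upper = np.zeros(n_trials, dtype=bool)
    for _ in range(max_steps):
        step = drift * dt + noise * np.sqrt(dt) * rng.normal(size=n_trials)
        x = np.where(done, x, x + step)
        hit_upper |= (~done) & (x >= threshold)
        done |= np.abs(x) >= threshold
        if done.all():
            break
    return hit_upper.mean()

# Neutral cue vs. a face-predictive cue modeled as both a starting-point shift
# and a higher drift toward the expected (upper) boundary; values are illustrative.
print("neutral cue:         P(face) =", p_upper(drift=0.5))
print("face-predictive cue: P(face) =", p_upper(drift=1.0, start_bias=0.3))
```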
The response analysis of fractional-order stochastic system via generalized cell mapping method.
Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei
2018-01-01
This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundaries, saddles, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of the probability density functions of the response. The fractional-order ϕ⁶ oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to illustrate the implementation of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.
Classification criteria and probability risk maps: limitations and perspectives.
Saisana, Michaela; Dubois, Gregoire; Chaloulakou, Archontoula; Spyrellis, Nikolas
2004-03-01
Delineation of polluted zones with respect to regulatory standards, accounting at the same time for the uncertainty of the estimated concentrations, relies on classification criteria that can lead to significantly different pollution risk maps, which, in turn, can depend on the regulatory standard itself. This paper reviews four popular classification criteria related to the violation of a probability threshold or a physical threshold, using annual (1996-2000) nitrogen dioxide concentrations from 40 air monitoring stations in Milan. The relative advantages and practical limitations of each criterion are discussed, and it is shown that some of the criteria are more appropriate for the problem at hand and that the choice of the criterion can be supported by the statistical distribution of the data and/or the regulatory standard. Finally, the polluted area is estimated over the different years and concentration thresholds using the appropriate risk maps as an additional source of uncertainty.
Improving deep convolutional neural networks with mixed maxout units.
Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
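A toy numpy sketch of a mixout-style unit as described above: a Bernoulli gate chooses, per output unit, between the element-wise maximum and the softmax-weighted expectation of parallel feature maps; shapes and the gate probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def mixout(feature_maps, p_max=0.5):
    """Toy mixout-style pooling across k parallel feature maps (axis 0).
    With probability p_max take the element-wise maximum (maxout behaviour);
    otherwise take the expectation of the maps weighted by their softmax
    (exponential) probabilities."""
    max_pool = feature_maps.max(axis=0)
    expw = np.exp(feature_maps - feature_maps.max(axis=0, keepdims=True))
    expected = (expw / expw.sum(axis=0, keepdims=True) * feature_maps).sum(axis=0)
    use_max = rng.uniform(size=max_pool.shape) < p_max      # Bernoulli gate per unit
    return np.where(use_max, max_pool, expected)

# Three parallel convolutional transformations of the same input, 4x4 each.
maps = rng.normal(size=(3, 4, 4))
print(mixout(maps).shape)        # (4, 4)
```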
Landing Site Dispersion Analysis and Statistical Assessment for the Mars Phoenix Lander
NASA Technical Reports Server (NTRS)
Bonfiglio, Eugene P.; Adams, Douglas; Craig, Lynn; Spencer, David A.; Strauss, William; Seelos, Frank P.; Seelos, Kimberly D.; Arvidson, Ray; Heet, Tabatha
2008-01-01
The Mars Phoenix Lander launched on August 4, 2007 and successfully landed on Mars 10 months later, on May 25, 2008. Landing ellipse predicts and hazard maps were key in selecting safe surface targets for Phoenix. Hazard maps were based on terrain slopes, geomorphology maps and automated rock counts from MRO's High Resolution Imaging Science Experiment (HiRISE) images. The expected landing dispersion which led to the selection of Phoenix's surface target is discussed, as well as the actual landing dispersion predicts determined during operations in the weeks, days, and hours before landing. A statistical assessment of these dispersions is performed, comparing the actual landing-safety probabilities to criteria levied by the project. Also discussed are applications of this statistical analysis that were used by the Phoenix project, including verifying the effectiveness of a pre-planned maneuver menu and calculating the probability of future maneuvers.
On the Origins of Suboptimality in Human Probabilistic Inference
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M.
2014-01-01
Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically-optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability matching and sample-averaging strategies. Instead we show that subjects' response variability was mainly driven by a combination of a noisy estimation of the parameters of the priors, and by variability in the decision process, which we represent as a noisy or stochastic posterior. PMID:24945142
Validation Workshop of the DRDC Concept Map Knowledge Model: Issues in Intelligence Analysis
2010-06-29
Excerpts from the report: the group noted problems with grammar, and a more standard approach to the grammar of the linking term (e.g., use only active tense) would certainly have ... A Knowledge Model is distinct from a Concept Map: a Concept Map is a single map, probably presented in one view, while a Knowledge Model is a set of ... The workshop followed the agenda presented in Table 2-3 (Time / Title: 13:00 - 13:15 Registration; 13:15 - 13:45 ...).
Ali, Anjum A; Dale, Anders M; Badea, Alexandra; Johnson, G Allan
2005-08-15
We present the automated segmentation of magnetic resonance microscopy (MRM) images of the C57BL/6J mouse brain into 21 neuroanatomical structures, including the ventricular system, corpus callosum, hippocampus, caudate putamen, inferior colliculus, internal capsule, globus pallidus, and substantia nigra. The segmentation algorithm operates on multispectral, three-dimensional (3D) MR data acquired at 90-µm isotropic resolution. Probabilistic information used in the segmentation is extracted from training datasets of T2-weighted, proton density-weighted, and diffusion-weighted acquisitions. Spatial information is employed in the form of prior probabilities of occurrence of a structure at a location (location priors) and the pairwise probabilities between structures (contextual priors). Validation using standard morphometry indices shows good consistency between automatically segmented and manually traced data. Results achieved in the mouse brain are comparable with those achieved in human brain studies using similar techniques. The segmentation algorithm shows excellent potential for routine morphological phenotyping of mouse models.
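A toy illustration of the kind of prior-informed labelling described above, assuming a simple Gaussian intensity model per structure combined with a spatial location prior; contextual (pairwise) priors are omitted, and the function name and all numbers are hypothetical rather than the published method.

```python
import numpy as np

# Hypothetical per-voxel labelling sketch: combine an intensity likelihood
# with a spatial "location prior" for each structure and take the most
# probable label.
def label_voxel(intensity, class_means, class_sds, location_priors):
    like = np.exp(-0.5 * ((intensity - class_means) / class_sds) ** 2) / class_sds
    post = like * location_priors          # unnormalised posterior per structure
    return int(np.argmax(post))

# toy example with 3 candidate structures
print(label_voxel(0.7,
                  class_means=np.array([0.2, 0.6, 0.9]),
                  class_sds=np.array([0.1, 0.1, 0.1]),
                  location_priors=np.array([0.2, 0.5, 0.3])))
```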
Syndrome Diagnosis: Human Intuition or Machine Intelligence?
Braaten, Øivind; Friestad, Johannes
2008-01-01
The aim of this study was to investigate whether artificial intelligence methods can represent objective methods that are essential in syndrome diagnosis. Most syndromes have no external criterion standard of diagnosis. The predictive value of a clinical sign used in diagnosis is dependent on the prior probability of the syndrome diagnosis. Clinicians often misjudge the probabilities involved. Syndromology needs objective methods to ensure diagnostic consistency and take prior probabilities into account. We applied two basic artificial intelligence methods to a database of machine-generated patients: a ‘vector method’ and a set method. As reference methods we ran an ID3 algorithm, a cluster analysis and a naive Bayes’ calculation on the same patient series. The overall diagnostic error rate for the vector algorithm was 0.93%, and for the ID3 0.97%. For the clinical signs found by the set method, the predictive values varied between 0.71 and 1.0. The artificial intelligence methods that we used proved simple, robust and powerful, and represent objective diagnostic methods. PMID:19415142
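The dependence of a sign's predictive value on the prior probability, mentioned above, is just Bayes' rule; the short sketch below is the standard identity (not code from the study), with the sensitivity and specificity values chosen arbitrarily for illustration.

```python
# Positive predictive value of a clinical sign as a function of the prior
# (pre-test) probability of the syndrome.
def positive_predictive_value(sensitivity, specificity, prior):
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# the same sign yields very different predictive values at different priors
for prior in (0.001, 0.01, 0.1):
    print(prior, round(positive_predictive_value(0.95, 0.95, prior), 3))
```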
Bayen, Ute J.; Kuhlmann, Beatrice G.
2010-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source guessing probabilities to the perceived contingency between sources and item types. When they do not have a representation of a contingency, they base their guesses on prior schematic knowledge. The authors provide support for this account in two experiments with sources presenting information that was expected for one source and somewhat unexpected for another. Schema-relevant information about the sources was provided at the time of encoding. When contingency perception was impeded by dividing attention, participants showed schema-based guessing (Experiment 1). Manipulating source-item contingency also affected guessing (Experiment 2). When this contingency was schema-inconsistent, it superseded schema-based expectations and led to schema-inconsistent guessing. PMID:21603251
Probability-based nitrate contamination map of groundwater in Kinmen.
Liu, Chen-Wuing; Wang, Yeuh-Bin; Jang, Cheng-Shin
2013-12-01
Groundwater supplies over 50% of drinking water in Kinmen. Approximately 16.8% of groundwater samples in Kinmen exceed the drinking water quality standard (DWQS) for NO3(-)-N (10 mg/L). Residents who drink highly nitrate-polluted groundwater face a potential health risk. To formulate an effective water quality management plan and assure safe drinking water in Kinmen, a detailed spatial distribution of nitrate-N in groundwater is a prerequisite. The aim of this study is to develop an efficient scheme for evaluating the spatial distribution of nitrate-N in residential well water using a logistic regression (LR) model, and a probability-based nitrate-N contamination map of Kinmen is constructed. The LR model predicted the binary probability that groundwater nitrate-N concentrations exceed the DWQS, using simple measurement variables as independent variables, including sampling season, soil type, water table depth, pH, EC, DO, and Eh. The results reveal that three statistically significant explanatory variables (soil type, pH, and EC) were selected by the forward stepwise LR analysis. The total ratio of correct classification reaches 92.7%. The highest probabilities of nitrate-N contamination occur in the central zone, indicating that groundwater in the central zone should not be used for drinking purposes. Furthermore, a handy EC-pH-probability curve for nitrate-N exceeding the DWQS threshold was developed; this curve can be used for preliminary screening of nitrate-N contamination in Kinmen groundwater. This study recommends that the local agency implement best management practice strategies to control nonpoint nitrogen sources and carry out systematic monitoring of groundwater quality in residential wells within the high nitrate-N contamination zones.
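A minimal sketch of the exceedance-probability idea described above, using a logistic regression on pH, EC, and a coded soil type; the data and coefficients are synthetic placeholders, not the Kinmen study values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: predict whether nitrate-N exceeds the drinking-water
# standard (binary outcome) from simple field measurements.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(7.0, 0.6, n),     # pH
    rng.normal(800, 250, n),     # EC (uS/cm)
    rng.integers(0, 3, n),       # soil type coded as 0/1/2 (toy encoding)
])
# synthetic exceedance labels generated from an arbitrary underlying model
y = (rng.random(n) < 1 / (1 + np.exp(-(0.004 * X[:, 1] - 0.5 * X[:, 0] - 1)))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
p_exceed = model.predict_proba([[7.2, 950, 1]])[0, 1]   # P(NO3-N > 10 mg/L) for one well
print(round(p_exceed, 2))
```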
On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping
Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A.; Wang, Yi; Spincemaille, Pascal
2016-01-01
Purpose: Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before rather than after performing non-linear post-processing algorithms, such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. Materials and Methods: High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Results: Both the accuracy and apparent spatial resolution of the pre-zero padded QSM were higher than those of mid-zero padded QSM (p < 0.001; p < 0.001), which were higher than those of post-zero padded QSM (p < 0.001; p < 0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p = 0.004; p < 0.001). Conclusion: Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients. PMID:27587225
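A toy numerical illustration of why the ordering matters: zero padding k-space (which interpolates the complex data) does not commute with a non-linear operation. The non-linear step below is a simple magnitude-squared stand-in rather than an actual QSM pipeline, and the function and sizes are illustrative assumptions.

```python
import numpy as np

def zero_pad_kspace(img, factor=2):
    """Interpolate an image by symmetric zero padding of its 2D k-space."""
    k = np.fft.fftshift(np.fft.fft2(img))
    pad = [(s * (factor - 1)) // 2 for s in img.shape]
    k_padded = np.pad(k, [(pad[0], pad[0]), (pad[1], pad[1])])
    return np.fft.ifft2(np.fft.ifftshift(k_padded)) * factor ** 2

lowres = np.random.randn(32, 32) + 1j * np.random.randn(32, 32)   # toy complex data

pre  = np.abs(zero_pad_kspace(lowres)) ** 2        # interpolate, then non-linear op
post = zero_pad_kspace(np.abs(lowres) ** 2)        # non-linear op, then interpolate
print(np.mean(np.abs(pre - np.abs(post))))         # the two orderings differ
```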
On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping.
Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A; Wang, Yi; Spincemaille, Pascal
2017-01-01
Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before rather than after performing non-linear post-processing algorithms, such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Both the accuracy and apparent spatial resolution of the pre-zero padded QSM was higher than that of mid-zero padded QSM (p<0.001; p<0.001), which was higher than that of post-zero padded QSM (p<0.001; p<0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p=0.004; p<0.001). Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients. Copyright © 2016 Elsevier Inc. All rights reserved.
The application of multiple reaction monitoring and multi-analyte profiling to HDL proteins
2014-01-01
Background: HDL carries a rich protein cargo and examining HDL protein composition promises to improve our understanding of its functions. Conventional mass spectrometry methods can be lengthy and difficult to extend to large populations. In addition, without prior enrichment of the sample, the ability of these methods to detect low abundance proteins is limited. Our objective was to develop a high-throughput approach to examine HDL protein composition applicable to diabetes and cardiovascular disease (CVD). Methods: We optimized two multiplexed assays to examine HDL proteins using a quantitative immunoassay (Multi-Analyte Profiling, MAP) and mass spectrometric-based quantitative proteomics (Multiple Reaction Monitoring, MRM). We screened HDL proteins using human xMAP (90 protein panel) and MRM (56 protein panel). We extended the application of these two methods to HDL isolated from a group of participants with diabetes and prior cardiovascular events and a group of non-diabetic controls. Results: We were able to quantitate 69 HDL proteins using MAP and 32 proteins using MRM. For several common proteins, the use of MRM and MAP was highly correlated (p < 0.01). Using MAP, several low abundance proteins implicated in atherosclerosis and inflammation were found on HDL. On the other hand, MRM allowed the examination of several HDL proteins not available by MAP. Conclusions: MAP and MRM offer a sensitive and high-throughput approach to examine changes in HDL proteins in diabetes and CVD. This approach can be used to measure the presented HDL proteins in large clinical studies. PMID:24397693
Statistical Significance of Optical Map Alignments
Sarkar, Deepayan; Goldstein, Steve; Schwartz, David C.
2012-01-01
The Optical Mapping System constructs ordered restriction maps spanning entire genomes through the assembly and analysis of large datasets comprising individually analyzed genomic DNA molecules. Such restriction maps uniquely reveal mammalian genome structure and variation, but also raise computational and statistical questions beyond those that have been solved in the analysis of smaller, microbial genomes. We address the problem of how to filter maps that align poorly to a reference genome. We obtain map-specific thresholds that control errors and improve iterative assembly. We also show how an optimal self-alignment score provides an accurate approximation to the probability of alignment, which is useful in applications seeking to identify structural genomic abnormalities. PMID:22506568
An Atlas of ShakeMaps for Landslide and Liquefaction Modeling
NASA Astrophysics Data System (ADS)
Johnson, K. L.; Nowicki, M. A.; Mah, R. T.; Garcia, D.; Harp, E. L.; Godt, J. W.; Lin, K.; Wald, D. J.
2012-12-01
The human consequences of a seismic event are often a result of subsequent hazards induced by the earthquake, such as landslides. While the United States Geological Survey (USGS) ShakeMap and Prompt Assessment of Global Earthquakes for Response (PAGER) systems are, in conjunction, capable of estimating the damage potential of earthquake shaking in near-real time, they do not currently provide estimates for the potential of further damage by secondary processes. We are developing a sound basis for providing estimates of the likelihood and spatial distribution of landslides for any global earthquake under the PAGER system. Here we discuss several important ingredients in this effort. First, we report on the development of a standardized hazard layer from which to calibrate observed landslide distributions; in contrast, prior studies have used a wide variety of means for estimating the hazard input. This layer now takes the form of a ShakeMap, a standardized approach for computing geospatial estimates for a variety of shaking metrics (both peak ground motions and shaking intensity) from any well-recorded earthquake. We have created ShakeMaps for about 20 historical landslide "case history" events, significant in terms of their landslide occurrence, as part of an updated release of the USGS ShakeMap Atlas. We have also collected digitized landslide data from open-source databases for many of the earthquake events of interest. When these are combined with up-to-date topographic and geologic maps, we have the basic ingredients for calibrating landslide probabilities for a significant collection of earthquakes. In terms of modeling, rather than focusing on mechanistic models of landsliding, we adopt a strictly statistical approach to quantify landslide likelihood. We incorporate geology, slope, peak ground acceleration, and landslide data as variables in a logistic regression, selecting the best explanatory variables given the standardized new hazard layers (see Nowicki et al., this meeting, for more detail on the regression). To make the ShakeMap and PAGER systems more comprehensive in terms of secondary losses, we are working to calibrate a similarly constrained regression for liquefaction estimation using a suite of well-studied earthquakes for which detailed, digitized liquefaction datasets are available; here variants of wetness index and soil strength replace geology and slope. We expect that this Atlas of ShakeMaps for landslide and liquefaction case history events, which will soon be publicly available via the internet, will aid in improving the accuracy of loss-modeling systems such as PAGER, as well as allow for a common framework for numerous other mechanistic and empirical studies.
2008-01-01
Since 1994, Irish cattle have been exposed to greater risks of acquiring Mycobacterium avium subspecies paratuberculosis (MAP) infection as a consequence of the importation of over 70,000 animals from continental Europe. In recent years, there has been an increase in the number of reported clinical cases of paratuberculosis in Ireland. This study examines the prevalence of factors that promote the introduction and within-herd transmission of MAP on selected Irish dairy farms in the Cork region, and the association between these factors and the results of MAP screening tests on milk sock filter residue (MFR). A total of 59 dairy farms, selected using non-random methods but apparently free of endemic paratuberculosis, were enrolled into the study. A questionnaire was used to collect data about risk factors for MAP introduction and transmission. The MFR was assessed on six occasions over 24 months for the presence of MAP, using culture and immunomagnetic separation prior to polymerase chain reaction (IMS-PCR). Furthermore, blood samples from all entire male and female animals over one year of age in 20 herds were tested by ELISA. Eighteen (31%) farms had operated as closed herds since 1994, 28 (47%) had purchased from multiple sources and 14 (24%) had either direct or indirect (progeny) contact with imported animals. Milk and colostrum were mixed on 51% of farms, while 88% of farms fed pooled milk. Thirty (51%) herds tested negative to MFR culture and IMS-PCR, 12 (20%) were MFR culture positive, 26 (44%) were IMS-PCR positive and seven (12%) were both culture and IMS-PCR positive. The probability of a positive MFR culture was significantly associated with reduced attendance at calving and with increased use of individual calf pens, and increased (though not significantly) if multiple suckling was practised. There was poor agreement between MFR culture and MFR IMS-PCR results, but moderate agreement between MFR culture and ELISA test results. This study highlights a lack of awareness among Irish dairy farmers about the effect of inadequate biosecurity on MAP introduction. Furthermore, within-herd transmission will be facilitated by traditional calf rearing and waste management practices. The findings of viable MAP in the presence of known transmission factors in non-clinically affected herds could be a prelude to long-term problems for the Irish cattle industry and agri-business generally. PMID:21851718
Probabilistic Surface Characterization for Safe Landing Hazard Detection and Avoidance (HDA)
NASA Technical Reports Server (NTRS)
Johnson, Andrew E. (Inventor); Ivanov, Tonislav I. (Inventor); Huertas, Andres (Inventor)
2015-01-01
Apparatuses, systems, computer programs and methods for performing hazard detection and avoidance for landing vehicles are provided. Hazard assessment takes into consideration the geometry of the lander. Safety probabilities are computed for a plurality of pixels in a digital elevation map. The safety probabilities are combined for pixels associated with one or more aim points and orientations. A worst case probability value is assigned to each of the one or more aim points and orientations.
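A sketch of the aggregation idea described above, under the assumption that per-pixel safety probabilities over a candidate footprint are combined by taking the worst case (minimum); the footprint masks, map size, and function name are illustrative, not drawn from the patent.

```python
import numpy as np

def aim_point_safety(safety_map, footprints):
    """Worst-case safety probability for each candidate aim point/orientation.

    footprints: list of boolean masks over the elevation-map grid, one mask
    per aim point and orientation (hypothetical representation).
    """
    return np.array([safety_map[mask].min() for mask in footprints])

safety_map = np.random.uniform(0.9, 1.0, (50, 50))      # toy per-pixel safety map
footprints = [np.zeros((50, 50), bool) for _ in range(3)]
footprints[0][10:14, 10:14] = True
footprints[1][20:24, 30:34] = True
footprints[2][40:44, 5:9] = True
print(aim_point_safety(safety_map, footprints))          # pick the safest aim point
```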
Factors That Influence Fast Mapping in Children Exposed to Spanish and English
Alt, Mary; Meyers, Christina; Figueroa, Cecilia
2015-01-01
Purpose: The purpose of this study was to determine if children exposed to two languages would benefit from the phonotactic probability cues of a single language in the same way as monolingual peers and to determine if cross-linguistic influence would be present in a fast mapping task. Method: Two groups of typically-developing children (monolingual English and bilingual Spanish-English) took part in a computer-based fast mapping task which manipulated phonotactic probability. Children were preschool-aged (N = 50) or school-aged (N = 34). Fast mapping was assessed through name identification and naming tasks. Data were analyzed using mixed ANOVAs with post-hoc testing and simple regression. Results: Bilingual and monolingual preschoolers showed sensitivity to English phonotactic cues in both tasks, but bilingual preschoolers were less accurate than monolingual peers in the naming task. School-aged bilingual children had nearly identical performance to monolingual peers. Conclusions: Knowing that children exposed to two languages can benefit from the statistical cues of a single language can help inform ideas about instruction and assessment for bilingual learners. PMID:23816663
Schelle, E; Rawlins, B G; Lark, R M; Webster, R; Staton, I; McLeod, C W
2008-09-01
We investigated the use of metals accumulated on tree bark for mapping their deposition across metropolitan Sheffield by sampling 642 trees of three common species. Mean concentrations of metals were generally an order of magnitude greater than in samples from a remote uncontaminated site. We found trivially small differences among tree species with respect to metal concentrations on bark, and in subsequent statistical analyses did not discriminate between them. We mapped the concentrations of As, Cd and Ni by lognormal universal kriging using parameters estimated by residual maximum likelihood (REML). The concentrations of Ni and Cd were greatest close to a large steel works, their probable source, and declined markedly within 500 m of it and from there more gradually over several kilometres. Arsenic was much more evenly distributed, probably as a result of locally mined coal burned in domestic fires for many years. Tree bark seems to integrate airborne pollution over time, and our findings show that sampling and analysing it are cost-effective means of mapping and identifying sources.
Video attention deviation estimation using inter-frame visual saliency map analysis
NASA Astrophysics Data System (ADS)
Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng
2012-01-01
A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., follows a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem; a busy video is difficult for an encoder to handle with region-of-interest (ROI)-based bit allocation, and hard for a content provider to augment with additional overlays such as advertisements, which make the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the computed steady state probability for saccade using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion-compensated saliency maps.
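The steady-state computation referred to above is standard Markov-chain machinery; the sketch below assumes a simple two-state chain (fixation vs. saccade) with placeholder transition probabilities, since the paper's values and the saliency-map-based estimation step are not reproduced here.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution of a row-stochastic transition matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

# placeholder transition probabilities, e.g. as estimated from saliency maps
P = np.array([[0.92, 0.08],    # fixation -> fixation / saccade
              [0.60, 0.40]])   # saccade  -> fixation / saccade
print(steady_state(P)[1])      # VAD estimate: steady-state probability of saccade
```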
Historical Topographic Map Collection bookmark
Fishburn, Kristin A.; Allord, Gregory J.
2017-06-29
The U.S. Geological Survey (USGS) National Geospatial Program is scanning published USGS 1:250,000-scale and larger topographic maps printed between 1884, the inception of the topographic mapping program, and 2006. The goal of this project, which began publishing the historical scanned maps in 2011, is to provide a digital repository of USGS topographic maps, available to the public at no cost. For more than 125 years, USGS topographic maps have accurately portrayed the complex geography of the Nation. The USGS is the Nation’s largest producer of printed topographic maps, and prior to 2006, USGS topographic maps were created using traditional cartographic methods and printed using a lithographic printing process. As the USGS continues the release of a new generation of topographic maps (US Topo) in electronic form, the topographic map remains an indispensable tool for government, science, industry, land management planning, and leisure.
NASA Technical Reports Server (NTRS)
Gray, Lincoln
1998-01-01
Our goal was to produce an interactive visualization from a mathematical model that successfully predicts metastases from head and neck cancer. We met this goal early in the project. The visualization is available for the public to view. Our work appears to fill a need for more information about this deadly disease. The idea of this project was to make an easily interpretable visualization based on what we call "functional maps" of disease. A functional map is a graphic summary of medical data, where distances between parts of the body are determined by the probability of disease, not by anatomical distances. Functional maps often bear little resemblance to anatomical maps, but they can be used to predict the spread of disease. The idea of modeling the spread of disease in an abstract multidimensional space is difficult for many people. Our goal was to make the important predictions easy to see. NASA must face this problem frequently: how to help laypersons and professionals see important trends in abstract, complex data. We took advantage of concepts perfected in NASA's graphics libraries. As an analogy, consider a functional map of early America. Suppose we choose travel times, rather than miles, as our measures of inter-city distances. For Abraham Lincoln, travel times would have been the more meaningful measure of separation between cities. In such a map, New Orleans would be close to Memphis because of the Mississippi River. St. Louis would be close to Portland because of the Oregon Trail. Oklahoma City would be far from Little Rock because of the Cheyenne. Such a map would look puzzling to those of us who have always seen physical maps, but the functional map would be more useful in predicting the probabilities of inter-site transit. Continuing the analogy, we could predict the spread of social diseases such as gambling along the rivers and cattle rustling along the trails. We could simply print the functional map of America, but it would be more interesting to show meaningful patterns of dispersal. We had previously published the functional map of the head and neck, but it was difficult to explain to either patients or surgeons because that view of our body did not resemble anatomy. This discrepancy between functional and physical maps is just a mathematical restatement of the well-known fact that some diseases, such as head and neck cancer, spread in complex patterns, not always to the next nearest site. We had discovered that a computer could re-arrange anatomy so that this particular disease spreads to the next nearest site. The functional map explains over 95% of the metastases in 1400 patients. In a sense, we had graphed what our body "looks like" to a tumor. The tumor readily travels between adjacent areas in the functional map. The functional map is a succinct visual display of trends that are not easily appreciated in tables of probabilities.
Confined dense circumstellar material surrounding a regular type II supernova
Yaron, O.; Perley, D. A.; Gal-Yam, A.; ...
2017-02-13
With the advent of new wide-field, high-cadence optical transient surveys, our understanding of the diversity of core-collapse supernovae has grown tremendously in the last decade. However, the pre-supernova evolution of massive stars, that sets the physical backdrop to these violent events, is theoretically not well understood and difficult to probe observationally. Here we report the discovery of the supernova iPTF 13dqy = SN 2013fs a mere ~3 hr after explosion. Our rapid follow-up observations, which include multiwavelength photometry and extremely early (beginning at ~6 hr post-explosion) spectra, map the distribution of material in the immediate environment (≲10^15 cm) of the exploding star and establish that it was surrounded by circumstellar material (CSM) that was ejected during the final ~1 yr prior to explosion at a high rate, around 10^-3 solar masses per year. The complete disappearance of flash-ionised emission lines within the first several days requires that the dense CSM be confined to within ≲10^15 cm, consistent with radio non-detections at 70–100 days. The observations indicate that iPTF 13dqy was a regular Type II SN; thus, the finding that the probable red supergiant (RSG) progenitor of this common explosion ejected material at a highly elevated rate just prior to its demise suggests that pre-supernova instabilities may be common among exploding massive stars.
Identification, prediction, and mitigation of sinkhole hazards in evaporite karst areas
Gutierrez, F.; Cooper, A.H.; Johnson, K.S.
2008-01-01
Sinkholes usually have a higher probability of occurrence and a greater genetic diversity in evaporite terrains than in carbonate karst areas. This is because evaporites have a higher solubility and, commonly, a lower mechanical strength. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are only well understood in a few areas. To deal with these hazards, a phased approach is needed for sinkhole identification, investigation, prediction, and mitigation. Identification techniques include field surveys and geomorphological mapping combined with accounts from local people and historical sources. Detailed sinkhole maps can be constructed from sequential historical maps, recent topographical maps, and digital elevation models (DEMs) complemented with building-damage surveying, remote sensing, and high-resolution geodetic surveys. On a more detailed level, information from exposed paleosubsidence features (paleokarst), speleological explorations, geophysical investigations, trenching, dating techniques, and boreholes may help in investigating dissolution and subsidence features. Information on the hydrogeological pathways, including caves, springs, and swallow holes, is particularly important, especially when corroborated by tracer tests. These diverse data sources make a valuable database: the karst inventory. From this dataset, sinkhole susceptibility zonations (relative probability) may be produced based on the spatial distribution of the features and good knowledge of the local geology. Sinkhole distribution can be investigated by spatial distribution analysis techniques including studies of preferential elongation, alignment, and nearest neighbor analysis. More objective susceptibility models may be obtained by analyzing the statistical relationships between the known sinkholes and the conditioning factors. Chronological information on sinkhole formation is required to estimate the probability of occurrence of sinkholes (number of sinkholes/km² per year). Such spatial and temporal predictions, frequently derived from limited records and based on the assumption that past sinkhole activity may be extrapolated to the future, are non-corroborated hypotheses. Validation methods allow us to assess the predictive capability of the susceptibility maps and to transform them into probability maps. Avoiding the most hazardous areas by preventive planning is the safest strategy for development in sinkhole-prone areas. Corrective measures could be applied to reduce the dissolution activity and subsidence processes. A more practical solution for safe development is to reduce the vulnerability of the structures by using subsidence-proof designs. © 2007 Springer-Verlag.
A new statistical methodology predicting chip failure probability considering electromigration
NASA Astrophysics Data System (ADS)
Sun, Ted
In this research thesis, we present a new approach to analyzing chip reliability subject to electromigration (EM); the fundamental causes of EM and the EM phenomena observed in different materials are also presented in this thesis. This new approach utilizes the statistical nature of EM failure in order to assess overall EM risk. It includes within-die temperature variations from the chip's temperature map, extracted by an Electronic Design Automation (EDA) tool, to estimate the failure probability of a design. Both the power estimation and thermal analysis are performed in the EDA flow. We first used the traditional EM approach to analyze the design, which involves 6 metal and 5 via layers, with a single temperature across the entire chip. Next, we used the same traditional approach but with a realistic temperature map. The traditional EM analysis approach, the same approach coupled with a temperature map, and the comparison between the results obtained with and without the temperature map are presented in this research. A comparison between these two results confirms that using a temperature map yields a less pessimistic estimation of the chip's EM risk. Finally, we employed the statistical methodology we developed, considering a temperature map and different use-condition voltages and frequencies, to estimate the overall failure probability of the chip. The statistical model established considers scaling through the use of the traditional Black equation and four major use conditions. The statistical result comparisons are within our expectations. The results of this statistical analysis confirm that the chip-level failure probability is higher (i) at higher use-condition frequencies for all use-condition voltages, and (ii) when a single temperature instead of a temperature map across the chip is considered. In this thesis, I start with an overall review of current design types, common flows, and the necessary verification and reliability checking steps used in the IC design industry. Furthermore, the important concepts of "scripting automation", which is used throughout the integration of the diversified EDA tools in this research work, are also described in detail with several examples, and my completed code is included in the appendix for reference. Hopefully, this structure of my thesis will give readers a thorough understanding of my research work, from the automation of EDA tools to the statistical data generation, from the nature of EM to the statistical model construction, and the comparisons among the traditional EM analysis and the statistical EM analysis approaches.
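The sketch below illustrates the general shape of such an analysis under stated assumptions: Black's equation evaluated region by region on a temperature map, a lognormal failure-time distribution per region, and a weakest-link (series) combination into a chip-level failure probability. Every parameter value is made up for illustration and is not taken from the thesis.

```python
import numpy as np
from scipy.stats import lognorm

K_B = 8.617e-5                              # Boltzmann constant, eV/K

def black_mttf(j, temp_k, A=1e-9, n=2.0, Ea=0.9):
    """Median time to failure from Black's equation (all constants illustrative)."""
    return A * j ** (-n) * np.exp(Ea / (K_B * temp_k))

def fail_prob(t, mttf, sigma=0.4):
    """Failure probability by time t under a lognormal time-to-failure model."""
    return lognorm.cdf(t, s=sigma, scale=mttf)

temps = np.array([350.0, 365.0, 380.0])     # per-region temperature map (K)
currents = np.array([1.0, 1.2, 1.5])        # per-region current densities (arb. units)

p_region = fail_prob(1e3, black_mttf(currents, temps))
p_chip = 1.0 - np.prod(1.0 - p_region)      # chip fails if any region fails
print(p_region.round(3), round(p_chip, 3))
```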
Kim, Jong Gyu
2012-01-01
Background: During the planning of a thoracodorsal artery perforator (TDAP) free flap, preoperative multidetector-row computed tomographic (MDCT) angiography is valuable for predicting the locations of perforators. However, CT-based perforator mapping of the thoracodorsal artery is not easy because of its small diameter. Thus, we evaluated 1-mm-thick MDCT images in multiple planes to search for reliable perforators accurately. Methods: Between July 2010 and October 2011, 19 consecutive patients (13 males, 6 females) who underwent MDCT prior to TDAP free flap operations were enrolled in this study. Patients ranged in age from 10 to 75 years (mean, 39.3 years). MDCT images were acquired at a thickness of 1 mm in the axial, coronal, and sagittal planes. Results: The thoracodorsal artery perforators were detected in all 19 cases. The reliable perforators originating from the descending branch were found in 14 cases, of which 6 had transverse branches. The former were well identified in the coronal view, and the latter in the axial view. The location of the most reliable perforators on MDCT images corresponded well with the surgical findings. Conclusions: Though MDCT has been widely used in performing the abdominal perforator free flap for detecting reliable perforating vessels, it is not popular in the TDAP free flap. The results of this study suggest that multiple planes of MDCT may increase the probability of detecting the most reliable perforators, along with decreasing the probability of missing available vessels. PMID:22872839
A fully traits-based approach to modeling global vegetation distribution.
van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M
2014-09-23
Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
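A small sketch of the classification step described above, assuming each vegetation type is summarized by a density over the three traits, the density is evaluated at a grid cell's trait values, and the most probable type is assigned. For simplicity a single multivariate normal per type stands in for the Gaussian mixture densities, and all means, covariances, and trait values are toy numbers, not those fitted in the study.

```python
import numpy as np
from scipy.stats import multivariate_normal

# traits: leaf mass per area, stem-specific density, seed mass (toy units)
types = {
    "tropical_forest": multivariate_normal(mean=[90, 0.6, 2.0], cov=np.diag([200, 0.01, 1.0])),
    "grassland":       multivariate_normal(mean=[50, 0.4, 0.5], cov=np.diag([100, 0.01, 0.2])),
    "boreal_forest":   multivariate_normal(mean=[150, 0.5, 3.0], cov=np.diag([400, 0.01, 1.5])),
}

cell_traits = np.array([95, 0.58, 1.8])            # traits predicted for one grid cell
scores = {name: dist.pdf(cell_traits) for name, dist in types.items()}
print(max(scores, key=scores.get))                 # most probable vegetation type
```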
Liu, Zhong; Gao, Xiaoguang; Fu, Xiaowei
2018-05-08
In this paper, we mainly study a cooperative search and coverage algorithm for a given bounded rectangular region, which contains several unknown stationary targets, by a team of unmanned aerial vehicles (UAVs) with non-ideal sensors and limited communication ranges. Our goal is to minimize the search time while gathering more information about the environment and finding more targets. For this purpose, a novel cooperative search and coverage algorithm with a controllable revisit mechanism is presented. Firstly, as the representation of the environment, cognitive maps comprising the target probability map (TPM), the uncertainty map (UM), and the digital pheromone map (DPM) are constructed. We also design a distributed update and fusion scheme for the cognitive maps. This update and fusion scheme guarantees that each of the cognitive maps converges to the same map, which reflects the targets' true existence or absence in each cell of the search region. Secondly, we develop a controllable revisit mechanism based on the DPM. This mechanism can concentrate the UAVs to revisit sub-areas that have a large target probability or high uncertainty. Thirdly, in the framework of distributed receding-horizon optimization, a path planning algorithm for multi-UAV cooperative search and coverage is designed. In the path planning algorithm, the movement of the UAVs is restricted by potential fields to meet the requirements of collision avoidance and connectivity maintenance. Moreover, using the minimum spanning tree (MST) topology optimization strategy, we obtain a tradeoff between search coverage enhancement and connectivity maintenance. The feasibility of the proposed algorithm is demonstrated by comparative simulations analyzing the effects of the controllable revisit mechanism and the connectivity maintenance scheme. The Monte Carlo method is employed to validate the influence of the number of UAVs, the sensing radius, the detection and false alarm probabilities, and the communication range on the proposed algorithm.
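To make the role of the detection and false-alarm probabilities concrete, the sketch below shows a generic per-cell Bayes update of a target probability map after one sensor look; it is a standard update rule, not necessarily the exact rule used in the paper, and the probabilities p_d and p_f are placeholder values.

```python
def update_tpm(p_target, detected, p_d=0.8, p_f=0.1):
    """One Bayes update of a cell's target probability given a sensor reading.

    p_d: detection probability of the non-ideal sensor (assumed value)
    p_f: false-alarm probability of the sensor (assumed value)
    """
    if detected:
        num = p_d * p_target
        den = p_d * p_target + p_f * (1.0 - p_target)
    else:
        num = (1.0 - p_d) * p_target
        den = (1.0 - p_d) * p_target + (1.0 - p_f) * (1.0 - p_target)
    return num / den

p = 0.3
for obs in (False, False, True):           # three successive looks at the same cell
    p = update_tpm(p, obs)
print(round(p, 3))
```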
Fast algorithm for probabilistic bone edge detection (FAPBED)
NASA Astrophysics Data System (ADS)
Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.
2005-04-01
The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean value of 0.38 indicating clear identification of surface points on average. The segmentation was also sufficiently crisp, with a full width at half maximum (FWHM) value of 1.51 voxels.
NASA Astrophysics Data System (ADS)
Ahmed, Ali; Hasan, Rafiq; Pekau, Oscar A.
2016-12-01
Two recent developments have come to the forefront with reference to updating the seismic design provisions of codes: (1) the publication of new seismic hazard maps for Canada by the Geological Survey of Canada (GSC), and (2) the emergence of the new spectral format, outdating the conventional standardized spectral format. The fourth-generation seismic hazard maps are based on enriched seismic data, enhanced knowledge of regional seismicity, and improved seismic hazard modeling techniques. Therefore, the new maps are more accurate and need to be incorporated into the Canadian Highway Bridge Design Code (CHBDC) for its next edition, as was done for its building counterpart, the National Building Code of Canada (NBCC). In fact, the code writers expressed similar intentions in the commentary of CHBDC 2006. During the process of updating the codes, the NBCC and the AASHTO Guide Specifications for LRFD Seismic Bridge Design (American Association of State Highway and Transportation Officials, Washington, 2009) lowered the probability level from 10% to 2% and from 10% to 5%, respectively. This study investigates five sets of hazard maps developed by the GSC, corresponding to 2%, 5%, and 10% probability of exceedance in 50 years. To support sound statistical inference, 389 Canadian cities are selected. This study shows the implications of the new hazard maps for the design process (i.e., the extent of magnification or reduction of the design forces).
NOS II inhibition attenuates post-suspension hypotension in Sprague-Dawley rats
NASA Technical Reports Server (NTRS)
Eatman, D.; Walton, M.; Socci, R. R.; Emmett, N.; Bayorh, M. A.
2003-01-01
The reduction in mean arterial pressure observed in astronauts may be related to the impairment of autonomic function and/or excessive production of endothelium-derived relaxing factors. Here, we examined the role of a nitric oxide synthase II (NOS II) inhibitor AMT (2-amino-dihydro-6-methyl-4H-1,3-thiazine) against the post-suspension reduction in mean arterial pressure (MAP) in conscious male Sprague-Dawley rats. Direct MAP and heart rate were determined prior to tail-suspension, daily during the 7-day suspension and every 2 hrs post-suspension. Prior to release from suspension and at 2 and 4 hrs post-suspension, AMT (0.1 mg/kg), or saline, were administered intravenously. During the 7-day suspension, MAP was not altered, nor were there significant changes in heart rate. The reduction in MAP post-suspension in saline-treated rats was associated with significant increases in plasma nitric oxide and prostacyclin. 2-Amino-dihydro-6-methyl4H-1,3-thiazine reduced plasma nitric oxide levels, but not those of prostacyclin, attenuated the observed post-suspension reduction in MAP and modified the baroreflex sensitivity for heart rate. Thus, the post suspension reduction in mean arterial pressure is due, in part, to overproduction of nitric oxide, via the NOS II pathway, and alteration in baroreflex activity.
Prior-knowledge-based spectral mixture analysis for impervious surface mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jinshui; He, Chunyang; Zhou, Yuyu
2014-01-03
In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions. First, an urban area was categorized into high- and low-density urban areas using a multi-step classification method. Next, in high-density urban areas that were assumed to contain only vegetation and impervious surfaces (ISs), the Vegetation-Impervious model (V-I) was used in a spectral mixture analysis (SMA) with three endmembers: vegetation, high albedo, and low albedo. In low-density urban areas, the Vegetation-Impervious-Soil model (V-I-S) was used in an SMA with four endmembers: high albedo, low albedo, soil, and vegetation. The fractions of IS with high and low albedo in each pixel were combined to produce the final IS map. The root mean-square error (RMSE) of the IS map produced using PKSMA was about 11.0%, compared to 14.52% using four-endmember SMA. Particularly in high-density urban areas, PKSMA (RMSE = 6.47%) showed better performance than four-endmember SMA (RMSE = 15.91%). The results indicate that PKSMA can improve IS mapping compared to traditional SMA by using appropriately selected endmembers, and is particularly strong in high-density urban areas.
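The sketch below illustrates the per-pixel unmixing step that any SMA variant relies on: solving for non-negative endmember fractions that approximately sum to one, then summing the impervious fractions. The endmember spectra and pixel values are synthetic, the sum-to-one constraint is enforced softly via an appended weighted row, and none of this is the PKSMA code itself.

```python
import numpy as np
from scipy.optimize import nnls

# columns: vegetation, high-albedo impervious, low-albedo impervious (toy spectra)
endmembers = np.array([[0.10, 0.12, 0.30, 0.45],
                       [0.35, 0.40, 0.45, 0.50],
                       [0.05, 0.06, 0.08, 0.10]]).T
pixel = np.array([0.20, 0.23, 0.33, 0.42])          # observed reflectance of one pixel

# append a heavily weighted sum-to-one row to softly enforce the constraint
A = np.vstack([endmembers, 10.0 * np.ones(endmembers.shape[1])])
b = np.append(pixel, 10.0)
fractions, _ = nnls(A, b)                            # non-negative fractions

impervious_fraction = fractions[1] + fractions[2]    # high + low albedo
print(fractions.round(3), round(impervious_fraction, 3))
```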
Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin
Bar-Massada, A.; Radeloff, V.C.; Stewart, S.I.; Hawbaker, T.J.
2009-01-01
The rapid growth of housing in and near the wildland-urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT) in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown. Our approach to modeling wildfire risk to structures can aid fire risk reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions. © 2009 Elsevier B.V.
Fast Drawing of Traffic Sign Using Mobile Mapping System
NASA Astrophysics Data System (ADS)
Yao, Q.; Tan, B.; Huang, Y.
2016-06-01
Traffic signs provide road users with specified instructions and information to enhance traffic safety. Automatic detection of traffic signs is important for navigation, autonomous driving, transportation asset management, etc. With the advance of laser and imaging sensors, Mobile Mapping Systems (MMS) have become widely used by transportation agencies to map the transportation infrastructure. Although many traffic sign detection algorithms have been developed in the literature, they still trade off detection speed against accuracy, especially for large-scale mobile mapping of both rural and urban roads. This paper is motivated by the need to efficiently survey traffic signs while mapping the road network and the roadside landscape. Inspired by the manual delineation of traffic signs, a drawing strategy is proposed to quickly approximate the boundary of a traffic sign. Both the shape and color priors of the traffic sign are simultaneously involved during the drawing process. The most common speed-limit sign circle and the statistical color model of traffic signs are studied in this paper. Anchor points of the traffic sign edge are located at the local maxima of color and gradient difference. Starting from the anchor points, the contour of the traffic sign is drawn smartly along the most significant direction of color and intensity consistency. The drawing process is also constrained by the curvature feature of the traffic sign circle. The drawing of linear growth is discarded immediately if it fails to form an arc over some steps. The Kalman filter principle is adopted to predict the temporal context of the traffic sign. Based on the estimated point, we can predict and double check the traffic sign in consecutive frames. The event probability of having a traffic sign over the consecutive observations is compared with the null hypothesis of no perceptible traffic sign. The temporally salient traffic sign is then detected statistically and automatically as the rare event of having a traffic sign. The proposed algorithm is tested with a diverse set of images taken in Wuhan, China with the MMS of Wuhan University. Experimental results demonstrate that the proposed algorithm can detect traffic signs at a rate of over 80% in around 10 milliseconds. It is promising for large-scale traffic sign surveying and change detection using mobile mapping systems.
Geology Report: Area 3 Radioactive Waste Management Site DOE/Nevada Test Site, Nye County, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
NSTec Environmental Management
2006-07-01
Surficial geologic studies near the Area 3 Radioactive Waste Management Site (RWMS) were conducted as part of a site characterization program. Studies included evaluation of the potential for future volcanism and Area 3 fault activity that could impact waste disposal operations at the Area 3 RWMS. Future volcanic activity could lead to disruption of the Area 3 RWMS. Local and regional studies of volcanic risk indicate that major changes in regional volcanic activity within the next 1,000 years are not likely. Mapped basalts of Paiute Ridge, Nye Canyon, and nearby Scarp Canyon are Miocene in age. There is a lack of evidence for post-Miocene volcanism in the subsurface of Yucca Flat, and the hazard of basaltic volcanism at the Area 3 RWMS, within the 1,000-year regulatory period, is very low and not a foreseeable future event. Studies included a literature review and data analysis to evaluate unclassified published and unpublished information regarding the Area 3 and East Branch Area 3 faults mapped in Area 3 and southern Area 7. Two trenches were excavated along the Area 3 fault to search for evidence of near-surface movement prior to nuclear testing. Allostratigraphic units and fractures were mapped in Trenches ST02 and ST03. The Area 3 fault is a plane of weakness that has undergone strain resulting from stress imposed by natural events and underground nuclear testing. No major vertical displacement on the Area 3 fault since the Early Holocene, and probably since the Middle Pleistocene, can be demonstrated. The lack of major displacement within this time frame and the minimal vertical extent of minor fractures suggest that waste disposal operations at the Area 3 RWMS will not be impacted substantially by the Area 3 fault within the regulatory compliance period. A geomorphic surface map of Yucca Flat utilizes the recent geomorphology and soil characterization work done in adjacent northern Frenchman Flat. The approach taken was to adopt the map unit boundaries (line work) of Swadley and Hoover (1990) and re-label these with map unit designations like those in northern Frenchman Flat (Huckins-Gang et al., 1995a,b,c; Snyder et al., 1995a,b,c,d).
78 FR 23893 - Notice of Funds Availability: Inviting Applications for the Market Access Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-23
... inviting proposals for the 2014 Market Access Program (MAP). The intended effect of this notice is to... Agricultural Service (FAS). The funding authority for MAP expires at the end of fiscal year 2013. This notice... program funding is reauthorized prior to that time. In the event this program is not reauthorized, or is...
ERIC Educational Resources Information Center
Hartmeyer, Rikke; Bølling, Mads; Bentsen, Peter
2017-01-01
Current research points to Personal Meaning Mapping (PMM) as a method useful in investigating students' prior and current science knowledge. However, studies investigating PMM as a method for exploring specific knowledge dimensions are lacking. Ensuring that students are able to access specific knowledge dimensions is important, especially in…
Viewing or Visualising Which Concept Map Strategy Works Best on Problem-Solving Performance?
ERIC Educational Resources Information Center
Lee, Youngmin; Nelson, David W.
2005-01-01
The purpose of this study was to investigate the effects of two types of maps (generative vs. completed) and the amount of prior knowledge (high vs. low) on well-structured and ill-structured problem-solving performance. Forty-four undergraduates who were registered in an introductory instructional technology course participated in the study.…
Fire scar mapping in a southern African savanna
Andrew T. Hudak; Bruce H. Brockett; Carol A. Wessman
1998-01-01
Multitemporal principal components analyses (PCAs) of pre- and post-burn Landsat Thematic Mapper images were used to map fire scars in Madikwe Game Reserve (MGR), South Africa. Prior to MGR's inception in 1991, when the land was used for extensive cattle ranching, overgrazing and fire suppression led to bush encroachment. Fire is currently being used to control...
Choosing appropriate subpopulations for modeling tree canopy cover nationwide
Gretchen G. Moisen; John W. Coulston; Barry T. Wilson; Warren B. Cohen; Mark V. Finco
2012-01-01
In prior national mapping efforts, the country has been divided into numerous ecologically similar mapping zones, and individual models have been constructed for each zone. Additionally, a hierarchical approach has been taken within zones to first mask out areas of nonforest, then target models of tree attributes within forested areas only. This results in many models...
USDA-ARS?s Scientific Manuscript database
The resistant line Auburn 623RNR and a number of elite breeding lines derived from it remain the most important source of root-knot nematode (RKN) resistance because they exhibit the highest level of resistance to RKN known to date in Upland cotton (Gossypium hirsutum L). Prior genetic mapping analy...
Analogical and category-based inference: a theoretical integration with Bayesian causal models.
Holyoak, Keith J; Lee, Hee Seung; Lu, Hongjing
2010-11-01
A fundamental issue for theories of human induction is to specify constraints on potential inferences. For inferences based on shared category membership, an analogy, and/or a relational schema, it appears that the basic goal of induction is to make accurate and goal-relevant inferences that are sensitive to uncertainty. People can use source information at various levels of abstraction (including both specific instances and more general categories), coupled with prior causal knowledge, to build a causal model for a target situation, which in turn constrains inferences about the target. We propose a computational theory in the framework of Bayesian inference and test its predictions (parameter-free for the cases we consider) in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes. The theory proved successful in accounting for systematic patterns of judgments about interrelated types of causal inferences, including evidence that analogical inferences are partially dissociable from overall mapping quality.
GPR identification of an early monument at Los Morteros in the Peruvian coastal desert
NASA Astrophysics Data System (ADS)
Sandweiss, Daniel H.; Kelley, Alice R.; Belknap, Daniel F.; Kelley, Joseph T.; Rademaker, Kurt; Reid, David A.
2010-05-01
Los Morteros (8°39'54″S, 78°42'00″W) is located in coastal northern Peru, one of the six original centers of world civilization. The site consists of a large, sand-covered, isolated prominence situated on a Mid-Holocene shoreline, ~5 km from the present coast. Preceramic archaeological deposits (4040 ± 75 to 4656 ± 60 14C yr BP, or ~3600-5500 cal yr BP) cap this feature, which has been identified by prior researchers as a sand-draped, bedrock-cored landform or a relict dune deposit. Because neither explanation is geomorphologically probable, we used ground-penetrating radar (GPR) and high-resolution mapping to assess the mound's interior structure. Our results indicate an anthropogenic origin for Los Morteros, potentially placing it among the earliest monumental structures in prehistoric South America. The extremely arid setting raises new questions about the purpose and the logistics of early mound construction in this region. This work demonstrates the value of an integrated Quaternary sciences approach to assess long-term landscape change and to understand the interaction between humans and the environment.
Soil erosion and causative factors at Vandenberg Air Force Base, California
NASA Technical Reports Server (NTRS)
Butterworth, Joel B.
1988-01-01
Areas of significant soil erosion and unvegetated road cuts were identified and mapped for Vandenberg Air Force Base. One hundred forty-two eroded areas (most greater than 1.2 ha) and 51 road cuts were identified from recent color infrared aerial photography and ground truthed to determine the severity and causes of erosion. Comparison of the present eroded condition of soils (as shown in the 1986 photography) with that in historical aerial photography indicates that most erosion on the base took place prior to 1928. However, at several sites accelerated rates of erosion and sedimentation may be occurring as soils and parent materials are eroded vertically. The most conspicuous erosion is in the northern part of the base, where severe gully, sheet, and mass movement erosion have occurred in soils and in various sedimentary rocks. Past cultivation practices, compounded by highly erodible soils prone to subsurface piping, are probably the main causes. Improper range management practices following cultivation may have also increased runoff and erosion. Aerial photography from 1986 shows that no appreciable headward erosion or gully sidewall collapse have occurred in this area since 1928.
NASA Astrophysics Data System (ADS)
Belhadi, J.; Yousfi, S.; Bouyanfif, H.; El Marssi, M.
2018-04-01
(BiFeO3)(1-x)Λ/(LaFeO3)xΛ superlattices (SLs) with varying x have been grown by pulsed laser deposition on (111) oriented SrTiO3 substrates. In order to obtain good epitaxy and flat samples, a conducting SrRuO3 buffer was deposited prior to the superlattices to screen the polar mismatch for the (111) SrTiO3 orientation. X-ray diffraction reciprocal space maps collected on different families of planes evidence a room-temperature structural change at x = 0.5, from a rhombohedral/monoclinic structure for BiFeO3-rich compositions to an orthorhombic symmetry for LaFeO3-rich compositions. This symmetry change has been confirmed by Raman spectroscopy and demonstrates the different phase stability compared to similar SLs grown on (100) SrTiO3. The strongly anisotropic strain and oxygen octahedral rotation/tilt system compatibility at the interfaces probably explain the orientation dependence of the phase stability in such superlattices.
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
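As a rough illustration of the matrix-scaling statement in the first part of the abstract, the sketch below runs a hypothetical alternating fixed-point iteration that searches for a two-sided diagonal scaling B = diag(u) A diag(v) that is column stochastic and maps p to q. The paper's actual construction and proofs may differ, and convergence of this particular iteration is assumed rather than established here.

```python
import numpy as np

def scale_to_stochastic_bridge(A, p, q, n_iter=500, tol=1e-12):
    """Alternating fixed-point search (hypothetical) for diagonal scalings u, v
    such that B = diag(u) @ A @ diag(v) has unit column sums and B @ p = q.
    Assumes A, p, q are strictly positive; convergence is not proven here."""
    u = np.ones(A.shape[0])
    for _ in range(n_iter):
        v = 1.0 / (u @ A)            # enforce unit column sums for the current u
        u_new = q / (A @ (v * p))    # enforce B @ p = q for the current v
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    v = 1.0 / (u @ A)                # make the columns sum exactly to one
    return np.diag(u) @ A @ np.diag(v)

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(4, 4))
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.4, 0.3, 0.2, 0.1])
B = scale_to_stochastic_bridge(A, p, q)
print(B.sum(axis=0))   # ~ all ones (column stochastic)
print(B @ p, q)        # ~ q (the required mapping)
```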
de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn
2016-11-01
Traditionally, to map environmental features using remote sensing, practitioners will use training data to develop models on various satellite data sets using a number of classification approaches and use test data to select a single 'best performer' from which the final map is made. We use a combination of an omission/commission plot to evaluate various results and compile a probability map based on consistently strong performing models across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate this approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 (used due to its relatively fine spatial resolution) or Landsat8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine, and applied these to raw spectral data, the first three PCA summaries of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e. combination of classification algorithm or data sets), which is in agreement with the literature that classifier performance will vary with data properties. We feel this lends support to our suggestion that rather than the identification of a 'single best' model and a map based on this result alone, a probability map based on the range of consistently top performing models provides a rigorous solution to environmental mapping. Copyright © 2016 Elsevier Ltd. All rights reserved.
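The "probability map from consistently strong models" idea can be sketched in a few lines: each retained model contributes a binary class map, and the per-pixel probability is the fraction of those models that map the class. The array shapes and the 80% cutoff below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical binary class maps (1 = quartz patch) from several models that
# scored consistently well on omission/commission and overall accuracy.
model_maps = [np.random.default_rng(i).integers(0, 2, size=(100, 100)) for i in range(5)]

# Per-pixel probability: fraction of the retained models that map the class.
probability_map = np.mean(np.stack(model_maps, axis=0), axis=0)

# Pixels mapped by, say, at least 80% of the models could guide management action.
high_confidence = probability_map >= 0.8
print(probability_map.shape, int(high_confidence.sum()))
```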
Temporal Evolution of Chromospheric Oscillations in Flaring Regions: A Pilot Study
NASA Astrophysics Data System (ADS)
Monsue, T.; Hill, F.; Stassun, K. G.
2016-10-01
We have analyzed Hα intensity images obtained at a 1 minute cadence with the Global Oscillation Network Group (GONG) system to investigate the properties of oscillations in the 0-8 mHz frequency band at the location and time of strong M- and X-class flares. For each of three subregions within two flaring active regions, we extracted time series from multiple distinct positions, including the flare core and quieter surrounding areas. The time series were analyzed with a moving power-map analysis to examine power as a function of frequency and time. We find that, in the flare core of all three subregions, the low-frequency power (˜1-2 mHz) is substantially enhanced immediately prior to and after the flare, and that power at all frequencies up to 8 mHz is depleted at flare maximum. This depletion is both frequency- and time-dependent, which probably reflects the changing depths visible during the flare in the bandpass of the filter. These variations are not observed outside the flare cores. The depletion may indicate that acoustic energy is being converted into thermal energy at flare maximum, while the low-frequency enhancement may arise from an instability in the chromosphere and provide an early warning of the flare onset. Dark lanes of reduced wave power are also visible in the power maps, which may arise from the interaction of the acoustic waves and the magnetic field.
TEMPORAL EVOLUTION OF CHROMOSPHERIC OSCILLATIONS IN FLARING REGIONS: A PILOT STUDY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monsue, T.; Stassun, K. G.; Hill, F., E-mail: teresa.monsue@vanderbilt.edu, E-mail: keivan.stassun@vanderbilt.edu, E-mail: hill@email.noao.edu
2016-10-01
We have analyzed Hα intensity images obtained at a 1 minute cadence with the Global Oscillation Network Group (GONG) system to investigate the properties of oscillations in the 0–8 mHz frequency band at the location and time of strong M- and X-class flares. For each of three subregions within two flaring active regions, we extracted time series from multiple distinct positions, including the flare core and quieter surrounding areas. The time series were analyzed with a moving power-map analysis to examine power as a function of frequency and time. We find that, in the flare core of all three subregions, the low-frequency power (∼1–2 mHz) is substantially enhanced immediately prior to and after the flare, and that power at all frequencies up to 8 mHz is depleted at flare maximum. This depletion is both frequency- and time-dependent, which probably reflects the changing depths visible during the flare in the bandpass of the filter. These variations are not observed outside the flare cores. The depletion may indicate that acoustic energy is being converted into thermal energy at flare maximum, while the low-frequency enhancement may arise from an instability in the chromosphere and provide an early warning of the flare onset. Dark lanes of reduced wave power are also visible in the power maps, which may arise from the interaction of the acoustic waves and the magnetic field.
Qualitative analysis and conceptual mapping of patient experiences in home health care.
Lines, Lisa M; Anderson, Wayne L; Blackmon, Brian D; Pronier, Cristalle R; Allen, Rachael W; Kenyon, Anne E
2018-01-01
This study explored patient experiences in home health care through a literature review, focus groups, and interviews. Our goal was to develop a conceptual map of home health care patient experience domains. The conceptual map identifies technical and personal spheres of care, relating prior studies to new focus group and interview findings and identifying the most important domains of care. Study participants (n = 35) most frequently reported the most important domain as staff who are caring, supportive, patient, empathetic, respectful, and considerate (endorsed by 29% of participants). The conceptual map includes 114 discrete domains.
Comparison of Content Structure and Cognitive Structure in the Learning of Probability.
ERIC Educational Resources Information Center
Geeslin, William E.
Digraphs, graphs, and task analysis were used to map out the content structure of a programed text (SMSG) in elementary probability. Mathematical structure was defined as the relationship between concepts within a set of abstract systems. The word association technique was used to measure the existing relations (cognitive structure) in S's memory…
Wei Wu; Charles Hall; Lianjun Zhang
2006-01-01
We predicted the spatial pattern of hourly probability of cloud cover in the Luquillo Experimental Forest (LEF) in North-Eastern Puerto Rico using four different models. The probability of cloud cover (defined as "the percentage of the area covered by clouds in each pixel on the map" in this paper) at any hour and any place is a function of three topographic variables...
Work probability distribution and tossing a biased coin
NASA Astrophysics Data System (ADS)
Saha, Arnab; Bhattacharjee, Jayanta K.; Chakraborty, Sagar
2011-01-01
We show that the rare events present in dissipated work that enters Jarzynski equality, when mapped appropriately to the phenomenon of large deviations found in a biased coin toss, are enough to yield a quantitative work probability distribution for the Jarzynski equality. This allows us to propose a recipe for constructing work probability distribution independent of the details of any relevant system. The underlying framework, developed herein, is expected to be of use in modeling other physical phenomena where rare events play an important role.
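A small numerical illustration of why rare events matter for the Jarzynski average (only loosely related to the paper's biased-coin mapping): assuming a Gaussian work distribution for a hypothetical driven process, the exponential average that yields the free-energy difference is carried disproportionately by the rare low-work tail.

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0
# Hypothetical Gaussian work distribution (mean 5 kT, std 2 kT); for this case the
# exact free-energy difference is dF = mean - var/(2 kT) = 3 kT.
W = rng.normal(loc=5.0, scale=2.0, size=200_000)

estimate = np.exp(-W / kT).mean()      # Jarzynski estimator of exp(-dF/kT)
dF = -kT * np.log(estimate)

# Share of the exponential average contributed by the lowest 1% of work values:
cutoff = np.quantile(W, 0.01)
tail_share = np.exp(-W[W <= cutoff] / kT).sum() / np.exp(-W / kT).sum()
print(round(dF, 3), round(tail_share, 3))   # dF ~ 3; ~1% of samples carry a large share
```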
The Current Status of Mapping in the World - Spotlight on Australia
NASA Astrophysics Data System (ADS)
Trinder, J.
2014-04-01
Prior to 1950, there was very limited mapping in Australia covering only strategic areas. After World War II, the Federal Government funded the small scale mapping of the whole country. This involved the development of the Australian National Spheroid in 1966, the Australian Geodetic Datum in 1966 and 1984 (AGD66 and AGD84) which were replaced by the Australian Geocentric Datum in 1994 (GDA94). The mapping of the country was completed in 1987 with 100 % of the country mapped at 1:100,000 and 1:250,000 although about half of the 1:100,000 are unpublished products. The Federal Government through Geoscience Australia continues to provide digital data, such as the GEODATA 250K (now series 3). Mapping at larger scales is undertaken by the states and territories, including cadastral mapping. This paper will demonstrate the extent of mapping in Australia as part of the current UN global survey of mapping.
Scanning and georeferencing historical USGS quadrangles
Fishburn, Kristin A.; Davis, Larry R.; Allord, Gregory J.
2017-06-23
The U.S. Geological Survey (USGS) National Geospatial Program is scanning published USGS 1:250,000-scale and larger topographic maps printed between 1884, the inception of the topographic mapping program, and 2006. The goal of this project, which began publishing the Historical Topographic Map Collection in 2011, is to provide access to a digital repository of USGS topographic maps that is available to the public at no cost. For more than 125 years, USGS topographic maps have accurately portrayed the complex geography of the Nation. The USGS is the Nation’s largest producer of traditional topographic maps, and, prior to 2006, USGS topographic maps were created using traditional cartographic methods and printed using a lithographic process. The next generation of topographic maps, US Topo, is being released by the USGS in digital form, and newer technologies make it possible to also deliver historical maps in the same electronic format that is more publicly accessible.
Genetic Influences on Survival Time After Severe Hemorrhage in Inbred Rat Strains
2011-04-12
technique (10, 41) immediately after insertion of the carotid catheter. A second incision was made in the skin in the medial aspect of the thigh to expose ... rat strains. Both baseline and post-hemorrhage mean arterial pressures (MAP) differed among inbred rat strains, as did the decrease in MAP produced ... measure for a given strain are different (P < 0.05) between experiments for that strain. Mean arterial pressure (MAP) was recorded at 1 min prior to
Three-dimensional mapping of the local interstellar medium with composite data
NASA Astrophysics Data System (ADS)
Capitanio, L.; Lallement, R.; Vergely, J. L.; Elyajouri, M.; Monreal-Ibero, A.
2017-10-01
Context. Three-dimensional maps of the Galactic interstellar medium are general astrophysical tools. Reddening maps may be based on the inversion of color excess measurements for individual target stars or on statistical methods using stellar surveys. Three-dimensional maps based on diffuse interstellar bands (DIBs) have also been produced. All methods benefit from the advent of massive surveys and may benefit from Gaia data. Aims: All of the various methods and databases have their own advantages and limitations. Here we present a first attempt to combine different datasets and methods to improve the local maps. Methods: We first updated our previous local dust maps based on a regularized Bayesian inversion of individual color excess data by replacing Hipparcos or photometric distances with Gaia Data Release 1 values when available. Secondly, we complemented this database with a series of ≃5000 color excess values estimated from the strength of the λ15273 DIB toward stars possessing a Gaia parallax. The DIB strengths were extracted from SDSS/APOGEE spectra. Third, we computed a low-resolution map based on a grid of Pan-STARRS reddening measurements by means of a new hierarchical technique and used this map as the prior distribution during the inversion of the two other datasets. Results: The use of Gaia parallaxes introduces significant changes in some areas and globally increases the compactness of the structures. Additional DIB-based data make it possible to assign distances to clouds located behind closer opaque structures and do not introduce contradictory information for the close structures. A more realistic prior distribution instead of a plane-parallel homogeneous distribution helps better define the structures. We validated the results through comparisons with other maps and with soft X-ray data. Conclusions: Our study demonstrates that the combination of various tracers is a potential tool for more accurate maps. An online tool makes it possible to retrieve maps and reddening estimations. Our online tool is available at http://stilism.obspm.fr
User’s guide for MapMark4GUI—A graphical user interface for the MapMark4 R package
Shapiro, Jason
2018-05-29
MapMark4GUI is an R graphical user interface (GUI) developed by the U.S. Geological Survey to support user implementation of the MapMark4 R statistical software package. MapMark4 was developed by the U.S. Geological Survey to implement probability calculations for simulating undiscovered mineral resources in quantitative mineral resource assessments. The GUI provides an easy-to-use tool to input data, run simulations, and format output results for the MapMark4 package. The GUI is written and accessed in the R statistical programming language. This user’s guide includes instructions on installing and running MapMark4GUI and descriptions of the statistical output processes, output files, and test data files.
The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...
Factors that Influence Fast Mapping in Children Exposed to Spanish and English
ERIC Educational Resources Information Center
Alt, Mary; Meyers, Christina; Figueroa, Cecilia
2013-01-01
Purpose: The purpose of this study was to determine whether children exposed to 2 languages would benefit from the phonotactic probability cues of a single language in the same way as monolingual peers and to determine whether crosslinguistic influence would be present in a fast-mapping task. Method: Two groups of typically developing children…
Polusny, M A; Erbes, C R; Murdoch, M; Arbisi, P A; Thuras, P; Rath, M B
2011-04-01
National Guard troops are at increased risk for post-traumatic stress disorder (PTSD); however, little is known about risk and resilience in this population. The Readiness and Resilience in National Guard Soldiers Study is a prospective, longitudinal investigation of 522 Army National Guard troops deployed to Iraq from March 2006 to July 2007. Participants completed measures of PTSD symptoms and potential risk/protective factors 1 month before deployment. Of these, 81% (n=424) completed measures of PTSD, deployment stressor exposure and post-deployment outcomes 2-3 months after returning from Iraq. New onset of probable PTSD 'diagnosis' was measured by the PTSD Checklist - Military (PCL-M). Independent predictors of new-onset probable PTSD were identified using hierarchical logistic regression analyses. At baseline prior to deployment, 3.7% had probable PTSD. Among soldiers without PTSD symptoms at baseline, 13.8% reported post-deployment new-onset probable PTSD. Hierarchical logistic regression adjusted for gender, age, race/ethnicity and military rank showed that reporting more stressors prior to deployment predicted new-onset probable PTSD [odds ratio (OR) 2.20] as did feeling less prepared for deployment (OR 0.58). After accounting for pre-deployment factors, new-onset probable PTSD was predicted by exposure to combat (OR 2.19) and to combat's aftermath (OR 1.62). Reporting more stressful life events after deployment (OR 1.96) was associated with increased odds of new-onset probable PTSD, while post-deployment social support (OR 0.31) was a significant protective factor in the etiology of PTSD. Combat exposure may be unavoidable in military service members, but other vulnerability and protective factors also predict PTSD and could be targets for prevention strategies.
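A minimal sketch of the kind of hierarchical (blockwise) logistic regression described, with demographics and pre-deployment factors entered first and deployment/post-deployment exposures added in a second block; the data frame and column names below are hypothetical stand-ins, not the study's variables or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; columns are illustrative only.
rng = np.random.default_rng(0)
n = 424
df = pd.DataFrame({
    "new_onset_ptsd": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "age": rng.normal(28, 6, n),
    "pre_deploy_stressors": rng.poisson(3, n),
    "preparedness": rng.normal(0, 1, n),
    "combat_exposure": rng.normal(0, 1, n),
    "post_deploy_stressors": rng.poisson(2, n),
    "social_support": rng.normal(0, 1, n),
})

# Block 1: demographics and pre-deployment factors only.
block1 = smf.logit("new_onset_ptsd ~ female + age + pre_deploy_stressors + preparedness",
                   data=df).fit(disp=False)
# Block 2: add deployment and post-deployment exposures.
block2 = smf.logit("new_onset_ptsd ~ female + age + pre_deploy_stressors + preparedness"
                   " + combat_exposure + post_deploy_stressors + social_support",
                   data=df).fit(disp=False)

# Odds ratios are the exponentiated coefficients.
print(np.exp(block2.params))
```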
ERIC Educational Resources Information Center
Dong, Yu Ren
2013-01-01
This article highlights how English language learners' (ELLs) prior knowledge can be used to help learn science vocabulary. The article explains that the concept of prior knowledge needs to encompass the ELL student's native language, previous science learning, native literacy skills, and native cultural knowledge and life experiences.…
Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi
2016-11-03
Copy Number Variation (CNV) is envisaged to be a major source of large structural variations in the human genome. In recent years, many studies have applied Next Generation Sequencing (NGS) data to CNV detection; however, more accurate computational tools are still needed. In this study, mate-pair NGS data are used for CNV detection in a Hidden Markov Model (HMM). The proposed HMM has position-specific emission probabilities, i.e. a Gaussian mixture distribution. Each component in the Gaussian mixture distribution captures a different type of aberration that is observed in the mate pairs after they are mapped to the reference genome. These aberrations may include an increase (or decrease) in the insert size or a change in the orientation of mate pairs mapped to the reference genome. This HMM with Position-Specific Emission probabilities (PSE-HMM) is utilized for the genome-wide detection of deletions and tandem duplications. The performance of PSE-HMM is evaluated on a simulated dataset and also on real data from a Yoruban HapMap individual, NA18507. PSE-HMM is effective in taking observation dependencies into account and reaches a high accuracy in detecting genome-wide CNVs. MATLAB programs are available at http://bs.ipm.ir/softwares/PSE-HMM/ .
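A much-simplified sketch of the HMM idea (not the authors' PSE-HMM or its MATLAB code): three hidden states over genomic positions, with the emission distribution of the mate-pair insert size shifted for deletions and tandem duplications. The actual model uses position-specific Gaussian-mixture emissions that also capture orientation changes; all numbers below are illustrative.

```python
import numpy as np
from scipy.stats import norm

states = ["normal", "deletion", "duplication"]
# Hypothetical emission means: the apparent insert size grows over a deletion and
# shrinks over a tandem duplication relative to an assumed 400 bp library mean.
means = {"normal": 400.0, "deletion": 900.0, "duplication": 150.0}
sd = 50.0

A = np.array([[0.98, 0.01, 0.01],   # illustrative transition probabilities
              [0.05, 0.95, 0.00],
              [0.05, 0.00, 0.95]])
pi = np.array([0.98, 0.01, 0.01])

def forward(insert_sizes):
    """Scaled forward recursion with Gaussian emissions per hidden state."""
    alpha = pi * np.array([norm.pdf(insert_sizes[0], means[s], sd) for s in states])
    alpha /= alpha.sum()
    for x in insert_sizes[1:]:
        emis = np.array([norm.pdf(x, means[s], sd) for s in states])
        alpha = emis * (alpha @ A)
        alpha /= alpha.sum()          # rescale to avoid numerical underflow
    return alpha

print(forward([410, 395, 880, 905, 910, 420]))
```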
Correction to “New maps of California to improve tsunami preparedness”
NASA Astrophysics Data System (ADS)
Barberopoulou, Aggeliki; Borrero, Jose C.; Uslu, Burak; Kalligeris, Nikos; Goltz, James D.; Wilson, Rick I.; Synolakis, Costas E.
2009-05-01
In the 21 April issue (Eos, 90(16), 2009), the article titled “New maps of California to improve tsunami preparedness” contained an error in its Figure 2 caption. Figure 2 is a map of Goleta, a city in Santa Barbara County. Thus, the first sentence of the caption should read, “Newly created tsunami inundation maps for Goleta, a city in Santa Barbara County, Calif., show the city's ‘wet line’ in black, representing the highest probable tsunami runup modeled for the region added to average water levels at high tide.” Eos deeply regrets this error.
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inland because of site amplification. The Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that mapped values follow only the mean at the measurement locations, with no regard to the data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology to incorporate data uncertainties in mapping and to propose a reliable map. The methodology is based on a hierarchical Bayesian model of normally distributed site responses in space, where the mean (μ), site-specific variance (σ²) and between-site variance (s²) parameters are treated as unknowns with prior distributions. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially autocorrelated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are drawn from the posterior distribution using Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to the uncertainties. During initial trials, we observed that the tau (= 1/s²) parameter of the CAR prior controls the μ estimation. Using a constraint, s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability to be measured by the model likelihood and propose the maximum likelihood model to be highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). This result is significant because it successfully incorporates the effect of data uncertainties in mapping. This approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in the Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
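The qualitative behaviour of the reliable map μ* — tracking the site-specific mean when its uncertainty is small and converging to the model mean when it is large — is the familiar precision-weighted shrinkage of a normal-normal hierarchy. The sketch below illustrates only that behaviour; it is not the authors' CAR-prior MCMC model, and all values are invented.

```python
import numpy as np

def shrinkage_estimate(site_mean, site_var, n_obs, group_mean, between_var):
    """Precision-weighted estimate for one site in a normal-normal hierarchy:
    follows the site mean when its uncertainty is small and shrinks toward
    the group (model) mean as the site uncertainty grows."""
    prec_site = n_obs / site_var      # precision of the site sample mean
    prec_group = 1.0 / between_var    # precision contributed by the group level
    w = prec_site / (prec_site + prec_group)
    return w * site_mean + (1.0 - w) * group_mean

# Low uncertainty: the estimate stays near the site mean (2.0).
print(shrinkage_estimate(2.0, site_var=0.1, n_obs=150, group_mean=1.0, between_var=0.5))
# High uncertainty: the estimate moves toward the group mean (1.0).
print(shrinkage_estimate(2.0, site_var=50.0, n_obs=150, group_mean=1.0, between_var=0.5))
```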
NASA Astrophysics Data System (ADS)
Thomas, J. N.; Huard, J.; Masci, F.
2017-02-01
There are many reports on the occurrence of anomalous changes in the ionosphere prior to large earthquakes. However, whether or not these changes are reliable precursors that could be useful for earthquake prediction is controversial within the scientific community. To test a possible statistical relationship between ionospheric disturbances and earthquakes, we compare changes in the total electron content (TEC) of the ionosphere with occurrences of M ≥ 6.0 earthquakes globally for 2000-2014. We use TEC data from the global ionosphere map (GIM) and an earthquake list declustered for aftershocks. For each earthquake, we look for anomalous changes in GIM-TEC within 2.5° latitude and 5.0° longitude of the earthquake location (the spatial resolution of GIM-TEC). Our analysis has not found any statistically significant changes in GIM-TEC prior to earthquakes. Thus, we have found no evidence that would suggest that monitoring changes in GIM-TEC might be useful for predicting earthquakes.
Colonius, Hans; Diederich, Adele
2011-07-01
The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task where subjects are asked to respond to signals from a target modality only. Evoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reactions times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
NASA Astrophysics Data System (ADS)
Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko
We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs using low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in markets where low-cost ACSs exist, price competition is important and there are limits on the quality of available cameras and image control, so ACSs using face recognition must handle much lower quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image-quality problems we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. It evaluates and utilizes only the reliable features among the trained ones during each authentication, achieving high recognition performance. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a consistently high recognition performance rate independent of face image quality, with an EER (Equal Error Rate) about four times lower under a variety of image conditions than a system without prior probability distributions. In contrast, using image-difference features without prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but constant performance because of its general optimization over all data. Compared with PCA, Real AdaBoost without prior distributions performs about twice as well under good image conditions but degrades to PCA-level performance under poor image conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
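For the single-Gaussian prior with a linearized (linear-Gaussian) forward model, the posterior is available in closed form. The sketch below shows the generic linear-Gaussian update, with a toy convolution-like operator standing in for the seismic forward model; all matrices are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def gaussian_posterior(mu_m, C_m, G, d, C_e):
    """Analytical posterior for d = G m + e with prior m ~ N(mu_m, C_m)
    and noise e ~ N(0, C_e)."""
    K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_e)   # gain matrix
    mu_post = mu_m + K @ (d - G @ mu_m)                   # posterior mean
    C_post = C_m - K @ G @ C_m                            # posterior covariance
    return mu_post, C_post

# Tiny example: a 3-sample convolution-like operator (values illustrative).
G = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5]])
mu_m = np.zeros(3)
C_m = np.eye(3)
d = np.array([0.2, 0.6, 0.1])
C_e = 0.01 * np.eye(3)
print(gaussian_posterior(mu_m, C_m, G, d, C_e)[0])
```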
Dual-contrast agent photon-counting computed tomography of the heart: initial experience.
Symons, Rolf; Cork, Tyler E; Lakshmanan, Manu N; Evers, Robert; Davies-Venn, Cynthia; Rice, Kelly A; Thomas, Marvin L; Liu, Chia-Ying; Kappler, Steffen; Ulzheimer, Stefan; Sandfort, Veit; Bluemke, David A; Pourmorteza, Amir
2017-08-01
To determine the feasibility of dual-contrast agent imaging of the heart using photon-counting detector (PCD) computed tomography (CT) to simultaneously assess both first-pass and late enhancement of the myocardium. An occlusion-reperfusion canine model of myocardial infarction was used. Gadolinium-based contrast was injected 10 min prior to PCD CT. Iodinated contrast was infused immediately prior to PCD CT, thus capturing late gadolinium enhancement as well as first-pass iodine enhancement. Gadolinium and iodine maps were calculated using a linear material decomposition technique and compared to single-energy (conventional) images. PCD images were compared to in vivo and ex vivo magnetic resonance imaging (MRI) and histology. For infarct versus remote myocardium, contrast-to-noise ratio (CNR) was maximal on late enhancement gadolinium maps (CNR 9.0 ± 0.8, 6.6 ± 0.7, and 0.4 ± 0.4, p < 0.001 for gadolinium maps, single-energy images, and iodine maps, respectively). For infarct versus blood pool, CNR was maximum for iodine maps (CNR 11.8 ± 1.3, 3.8 ± 1.0, and 1.3 ± 0.4, p < 0.001 for iodine maps, gadolinium maps, and single-energy images, respectively). Combined first-pass iodine and late gadolinium maps allowed quantitative separation of blood pool, scar, and remote myocardium. MRI and histology analysis confirmed accurate PCD CT delineation of scar. Simultaneous multi-contrast agent cardiac imaging is feasible with photon-counting detector CT. These initial proof-of-concept results may provide incentives to develop new k-edge contrast agents, to investigate possible interactions between multiple simultaneously administered contrast agents, and to ultimately bring them to clinical practice.
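The "linear material decomposition" step can be sketched as a per-voxel least-squares problem: the attenuation measured in each photon-counting energy bin is modeled as a linear combination of iodine, gadolinium and water basis attenuations. The basis matrix below is an invented illustration, not a calibrated one, and this is not the study's reconstruction code.

```python
import numpy as np

# rows = energy bins, columns = basis materials (iodine, gadolinium, water);
# coefficients are illustrative placeholders.
M = np.array([[4.0, 6.5, 0.30],
              [2.5, 5.0, 0.25],
              [1.5, 3.0, 0.22],
              [1.0, 1.8, 0.20]])

def decompose(voxel_bins):
    """Least-squares estimate of basis-material amounts in one voxel."""
    coeffs, *_ = np.linalg.lstsq(M, voxel_bins, rcond=None)
    return coeffs  # (iodine, gadolinium, water) amounts

# Example voxel measurement in the four bins (arbitrary units).
print(decompose(np.array([1.2, 0.9, 0.6, 0.4])))
```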
USDA-ARS?s Scientific Manuscript database
A calf model was used to determine if the depletion of CD4 T cells prior to inoculation of Mycobacterium avium subsp. paratuberculosis (Map) would delay development of an immune response to Map and accelerate disease progression. Ileal cannulas were surgically implanted in 5 bull calves at two month...
Code of Federal Regulations, 2010 CFR
2010-04-01
... applicable; (2) General map showing specific location and dimension of a structural project, or specific...-structural project; (5) Written report of the applicant's engineer showing the proposed plan of operation of a structural project; (6) Map of any lands to be acquired or occupied; (7) Estimate of the cost of...
Krammer, Julia; Dutschke, Anja; Kaiser, Clemens G; Schnitzer, Andreas; Gerhardt, Axel; Radosa, Julia C; Brade, Joachim; Schoenberg, Stefan O; Wasser, Klaus
2016-01-01
To evaluate whether tumor localization and the method of preoperative biopsy affect sentinel lymph node (SLN) detection after periareolar nuclide injection in breast cancer patients. 767 breast cancer patients were retrospectively included. For lymphoscintigraphy, periareolar nuclide injection was performed and the SLN was located by gamma camera. Patient and tumor characteristics were correlated with the success rate of SLN mapping. SLN marking failed in 9/61 (14.7%) patients with prior vacuum-assisted biopsy and 80/706 (11.3%) patients with prior core needle biopsy. Evaluated individually, biopsy method (p = 0.4) and tumor localization (p = 0.9) did not significantly affect the SLN detection rate. Patients with a vacuum-assisted biopsy of a tumor in the upper outer quadrant had a higher odds ratio of failed SLN mapping (OR 3.8, p = 0.09) compared to core needle biopsy in the same localization (OR 0.9, p = 0.5). Tumor localization and preoperative biopsy method do not significantly impact SLN mapping with periareolar nuclide injection. However, the failure risk tends to rise if vacuum-assisted biopsy of a tumor in the upper outer quadrant is performed.
Geostatistical applications in environmental remediation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, R.N.; Purucker, S.T.; Lyon, B.F.
1995-02-01
Geostatistical analysis refers to a collection of statistical methods for addressing data that vary in space. By incorporating spatial information into the analysis, geostatistics has advantages over traditional statistical analysis for problems with a spatial context. Geostatistics has a history of success in earth science applications, and its popularity is increasing in other areas, including environmental remediation. Due to recent advances in computer technology, geostatistical algorithms can be executed at a speed comparable to many standard statistical software packages. When used responsibly, geostatistics is a systematic and defensible tool that can be used in various decision frameworks, such as the Data Quality Objectives (DQO) process. At every point in the site, geostatistics can estimate both the concentration level and the probability or risk of exceeding a given value. Using these probability maps can assist in identifying clean-up zones. Given any decision threshold and an acceptable level of risk, the probability maps identify those areas that are estimated to be above or below the acceptable risk. Those areas that are above the threshold are of the most concern with regard to remediation. In addition to estimating clean-up zones, geostatistics can assist in designing cost-effective secondary sampling schemes. Those areas of the probability map with high levels of estimated uncertainty are areas where more secondary sampling should occur. In addition, geostatistics can incorporate soft data directly into the analysis. These data include historical records, a highly correlated secondary contaminant, or expert judgment. In environmental remediation, geostatistics is a tool that, in conjunction with other methods, can provide a common forum for building consensus.
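One common way to turn a geostatistical estimate and its uncertainty into the probability maps described is to assume an approximately Gaussian estimation error at each location and evaluate the probability of exceeding the clean-up threshold. The sketch below uses that simplification (indicator-based geostatistics would differ), with invented grid values.

```python
import numpy as np
from scipy.stats import norm

def exceedance_probability(estimate, std_error, threshold):
    """Probability that the true concentration exceeds a clean-up threshold,
    assuming an approximately Gaussian estimation error at each location."""
    return 1.0 - norm.cdf((threshold - estimate) / std_error)

# Hypothetical kriged estimates and kriging standard errors on a small grid.
estimates = np.array([[4.0, 6.5], [9.0, 12.0]])
std_errors = np.array([[1.0, 2.0], [1.5, 3.0]])
prob_map = exceedance_probability(estimates, std_errors, threshold=10.0)

# Cells with high exceedance probability are candidate clean-up zones; cells with
# probabilities near 0.5 (most uncertain) are candidates for secondary sampling.
print(prob_map)
```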
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
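A minimal single-arm sketch of a Bayesian predictive probability computed by simulation: draw the response rate from the current posterior, simulate the remaining patients, and check whether the final analysis would succeed. The Beta(1, 1) prior, the 0.3 reference rate and the success rule below are illustrative assumptions, not a recommended design.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def predictive_probability(successes, n_current, n_max, a=1.0, b=1.0,
                           null_rate=0.3, win_threshold=0.95,
                           n_sims=20_000, seed=0):
    """Predictive probability that a single-arm binomial trial, continued to
    n_max patients, ends with posterior Pr(rate > null_rate) > win_threshold."""
    rng = np.random.default_rng(seed)
    n_remaining = n_max - n_current
    wins = 0
    for _ in range(n_sims):
        p = rng.beta(a + successes, b + n_current - successes)   # draw from current posterior
        total = successes + rng.binomial(n_remaining, p)          # simulate remaining patients
        final_post = beta_dist.sf(null_rate, a + total, b + n_max - total)
        wins += final_post > win_threshold
    return wins / n_sims

# Interim look: 12 responses in the first 30 of a planned 60 patients.
print(predictive_probability(successes=12, n_current=30, n_max=60))
```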
Acquired prior knowledge modulates audiovisual integration.
Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A
2010-05-01
Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
[Cognitive and functional decline in the stage previous to the diagnosis of Alzheimer's disease].
García-Sánchez, C; Estévez-González, A; Boltes, A; Otermín, P; López-Góngora, M; Gironell, A; Kulisevsky, J
2003-12-01
The decline in the phase prior to diagnosis of Alzheimer's disease (AD) is not well known, although this knowledge is necessary to evaluate the efficiency of new drugs that may influence disease course prior to diagnosis. To contribute to better knowledge of the decline prior to diagnosis, we investigated the cognitive and functional deterioration over the 2-3 years before the diagnosis of probable AD was established. We compared results obtained by 17 control subjects and 27 patients at the time of diagnosis of probable AD with results obtained 2-3 years before (interval of 27.7 ± 4 months). We compared memory functions (logical, recognition, learning and autobiographical memory), naming, visual and visuospatial gnosis, visuoconstructive praxis, verbal fluency and the Mini-Mental State Examination (MMSE), Informant Questionnaire and Blessed's Scale scores. Performance of control subjects did not change. AD patients showed a significant decline in scores, except for verbal fluency. In order of importance, cognitive decline was more marked in scores of learning memory, visuospatial gnosis, autobiographical memory and visuoconstructive praxis. Decline prior to diagnosis of AD is characterized by an important learning memory impairment. Deterioration of visuospatial gnosis and visuoconstructive praxis is greater than deterioration of MMSE and Informant Questionnaire scores.
Exploiting Genome Structure in Association Analysis
Kim, Seyoung
2014-01-01
A genome-wide association study involves examining a large number of single-nucleotide polymorphisms (SNPs) to identify SNPs that are significantly associated with the given phenotype, while trying to reduce the false positive rate. Although haplotype-based association methods have been proposed to accommodate correlation information across nearby SNPs that are in linkage disequilibrium, none of these methods directly incorporates structural information such as recombination events along the chromosome. In this paper, we propose a new approach called stochastic block lasso for association mapping that exploits prior knowledge on linkage disequilibrium structure in the genome, such as recombination rates and distances between adjacent SNPs, in order to increase the power of detecting true associations while reducing false positives. Following a typical linear regression framework with the genotypes as inputs and the phenotype as output, our proposed method employs a sparsity-enforcing Laplacian prior for the regression coefficients, augmented by a first-order Markov process along the sequence of SNPs that incorporates the prior information on the linkage disequilibrium structure. The Markov-chain prior models the structural dependencies between a pair of adjacent SNPs, and allows us to look for association SNPs in a coupled manner, combining strength from multiple nearby SNPs. Our results on HapMap-simulated datasets and mouse datasets show that there is a significant advantage in incorporating the prior knowledge on linkage disequilibrium structure for marker identification under whole-genome association. PMID:21548809
Southern pine beetle infestation probability mapping using weights of evidence analysis
Jason B. Grogan; David L. Kulhavy; James C. Kroll
2010-01-01
Weights of Evidence (WofE) spatial analysis was used to predict probability of southern pine beetle (Dendroctonus frontalis) (SPB) infestation in Angelina, Nacogdoches, San Augustine and Shelby Co., TX. Thematic data derived from Landsat imagery (1974–2002, Landsat 1–7) were used. Data layers included: forest covertype, forest age, forest patch size...
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
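The core of the GCM/STGA machinery — one-step cell-to-cell transition probabilities computed from a short-time Gaussian kernel — can be sketched in one dimension. A logistic SDE stands in here for the paper's two-dimensional predator-prey model, and the grid, noise level and time step are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def drift(x, r=1.0, K=1.0):
    """Deterministic part of a 1D logistic SDE (stand-in model)."""
    return r * x * (1.0 - x / K)

sigma, dt = 0.1, 0.05
edges = np.linspace(0.0, 2.0, 101)          # 100 cells over the state space
centers = 0.5 * (edges[:-1] + edges[1:])

# Short-time Gaussian approximation: after one step dt, the state starting at a
# cell center is ~ N(center + drift*dt, sigma^2*dt). One-step transition matrix
# P[i, j] = Prob(cell i -> cell j) from Gaussian mass falling into cell j.
mean = centers + drift(centers) * dt
std = sigma * np.sqrt(dt)
P = norm.cdf(edges[None, 1:], mean[:, None], std) - \
    norm.cdf(edges[None, :-1], mean[:, None], std)
P /= P.sum(axis=1, keepdims=True)           # renormalize mass leaking off the grid

# Evolve an initial cell-probability vector (transient response PDF).
p = np.exp(-0.5 * ((centers - 0.2) / 0.05) ** 2)
p /= p.sum()
for _ in range(50):
    p = p @ P
print(centers[np.argmax(p)])                # location of the most probable cell
```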
Assignment of functional activations to probabilistic cytoarchitectonic areas revisited.
Eickhoff, Simon B; Paus, Tomas; Caspers, Svenja; Grosbras, Marie-Helene; Evans, Alan C; Zilles, Karl; Amunts, Katrin
2007-07-01
Probabilistic cytoarchitectonic maps in standard reference space provide a powerful tool for the analysis of structure-function relationships in the human brain. While these microstructurally defined maps have already been successfully used in the analysis of somatosensory, motor or language functions, several conceptual issues in the analysis of structure-function relationships still demand further clarification. In this paper, we demonstrate the principle approaches for anatomical localisation of functional activations based on probabilistic cytoarchitectonic maps by exemplary analysis of an anterior parietal activation evoked by visual presentation of hand gestures. After consideration of the conceptual basis and implementation of volume or local maxima labelling, we comment on some potential interpretational difficulties, limitations and caveats that could be encountered. Extending and supplementing these methods, we then propose a supplementary approach for quantification of structure-function correspondences based on distribution analysis. This approach relates the cytoarchitectonic probabilities observed at a particular functionally defined location to the areal specific null distribution of probabilities across the whole brain (i.e., the full probability map). Importantly, this method avoids the need for a unique classification of voxels to a single cortical area and may increase the comparability between results obtained for different areas. Moreover, as distribution-based labelling quantifies the "central tendency" of an activation with respect to anatomical areas, it will, in combination with the established methods, allow an advanced characterisation of the anatomical substrates of functional activations. Finally, the advantages and disadvantages of the various methods are discussed, focussing on the question of which approach is most appropriate for a particular situation.
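The proposed distribution-based quantification can be sketched as a percentile computation: the cytoarchitectonic probability observed at a functionally defined peak is located within the area's distribution of probabilities over the whole (here, nonzero) probability map. The map values below are simulated placeholders, not real cytoarchitectonic data, and this is not the authors' toolbox code.

```python
import numpy as np

def distribution_based_label(prob_at_peak, full_probability_map):
    """Percentile of the observed cytoarchitectonic probability within the
    area's distribution of probabilities across the whole map."""
    null = full_probability_map[full_probability_map > 0]   # voxels with nonzero probability
    return np.mean(null <= prob_at_peak) * 100.0

# Hypothetical probabilistic map of one area (values in [0, 1]) and a peak value.
rng = np.random.default_rng(0)
area_map = np.clip(rng.normal(0.2, 0.15, size=10_000), 0, 1)
print(distribution_based_label(0.6, area_map))
```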
Hoard, C.J.; Fowler, K.K.; Kim, M.H.; Menke, C.D.; Morlock, S.E.; Peppler, M.C.; Rachol, C.M.; Whitehead, M.T.
2010-01-01
Digital flood-inundation maps for a 15-mile reach of the Kalamazoo River from Marshall to Battle Creek, Michigan, were created by the U.S. Geological Survey (USGS) in cooperation with the U.S. Environmental Protection Agency to help guide remediation efforts following a crude-oil spill on July 25, 2010. The spill happened on Talmadge Creek, a tributary of the Kalamazoo River near Marshall, during a flood. The floodwaters transported the spilled oil down the Kalamazoo River and deposited oil in impoundments and on the surfaces of islands and flood plains. Six flood-inundation maps were constructed corresponding to the flood stage (884.09 feet) coincident with the oil spill on July 25, 2010, as well as for floods with annual exceedance probabilities of 0.2, 1, 2, 4, and 10 percent. Streamflow at the USGS streamgage at Marshall, Michigan (USGS site ID 04103500), was used to calculate the flood probabilities. From August 13 to 18, 2010, 35 channel cross sections, 17 bridges and 1 dam were surveyed. These data were used to construct a water-surface profile for the July 25, 2010, flood by use of a one-dimensional step-backwater model. The calibrated model was used to estimate water-surface profiles for other flood probabilities. The resulting six flood-inundation maps were created with a geographic information system by combining flood profiles with a 1.2-foot vertical and 10-foot horizontal resolution digital elevation model derived from Light Detection and Ranging data.
NASA Astrophysics Data System (ADS)
Neri, Augusto; Bevilacqua, Andrea; Esposti Ongaro, Tomaso; Isaia, Roberto; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Iannuzzi, Enrico; Orsucci, Simone; Pistolesi, Marco; Rosi, Mauro; Vitale, Stefano
2015-04-01
Campi Flegrei (CF) is an example of an active caldera containing densely populated settlements at very high risk of pyroclastic density currents (PDCs). We present here an innovative method for assessing background spatial PDC hazard in a caldera setting with probabilistic invasion maps conditional on the occurrence of an explosive event. The method encompasses the probabilistic assessment of potential vent opening positions, derived in the companion paper, combined with inferences about the spatial density distribution of PDC invasion areas from a simplified flow model, informed by reconstruction of deposits from eruptions in the last 15 ka. The flow model describes the PDC kinematics and accounts for main effects of topography on flow propagation. Structured expert elicitation is used to incorporate certain sources of epistemic uncertainty, and a Monte Carlo approach is adopted to produce a set of probabilistic hazard maps for the whole CF area. Our findings show that, in case of eruption, almost the entire caldera is exposed to invasion with a mean probability of at least 5%, with peaks greater than 50% in some central areas. Some areas outside the caldera are also exposed to this danger, with mean probabilities of invasion of the order of 5-10%. Our analysis suggests that these probability estimates have location-specific uncertainties which can be substantial. The results prove to be robust with respect to alternative elicitation models and allow the influence on hazard mapping of different sources of uncertainty, and of theoretical and numerical assumptions, to be quantified.
The "Clinton" sands in Canton, Dover, Massillon, and Navarre quadrangles, Ohio
Pepper, James Franklin; De Witt, Wallace; Everhart, Gail M.
1953-01-01
The Canton, Dover, Massillon, and Navarre quadrangles cover about 880 square miles in eastern Ohio. Canton is the largest city in the mapped area. In these four quadrangles, the well drillers generally recognize three "Clinton" sands - in descending order, the "stray Clinton", the "red Clinton", and the "white Clinton". The Clinton sands of Ohio are of early Silurian age and probably correlate with the middle and upper part of the Albion sandstone in the Niagara gorge section in western New York. The study of drillers' logs and examination of well samples show that of the three so-called Clinton sands, the red is most readily recognized. The "Packer shell", a probable equivalent of the Clinton formation of New York, and the Queenston shale - the drillers' "red Medina" - are also good units for short distance correlations. Each of the Clinton sands consists of a thin layer that contains long narrow lenses of thicker sand. Although the pattern of the trend of the lenses varies for each of the Clinton sands, the trend generally is westward across the mapped area. It is thought that these lenses represent deposition in channels, probably offshore from a large delta. Production of gas and oil from the so-called Clinton apparently is closely related to the sorting, porosity, and permeability of the sand. Stratigraphic traps contain the oil or gas, and structure appears to be relatively unimportant in localizing the accumulation of the petroleum. East of the mapped area, the Clinton sands have not produced oil or gas in commercial quantities. Several parts of the mapped area may hold additional amounts of gas.
Daniel Goodman’s empirical approach to Bayesian statistics
Gerrodette, Tim; Ward, Eric; Taylor, Rebecca L.; Schwarz, Lisa K.; Eguchi, Tomoharu; Wade, Paul; Himes Boor, Gina
2016-01-01
Bayesian statistics, in contrast to classical statistics, uses probability to represent uncertainty about the state of knowledge. Bayesian statistics has often been associated with the idea that knowledge is subjective and that a probability distribution represents a personal degree of belief. Dr. Daniel Goodman considered this viewpoint problematic for issues of public policy. He sought to ground his Bayesian approach in data, and advocated the construction of a prior as an empirical histogram of “similar” cases. In this way, the posterior distribution that results from a Bayesian analysis combined comparable previous data with case-specific current data, using Bayes’ formula. Goodman championed such a data-based approach, but he acknowledged that it was difficult in practice. If based on a true representation of our knowledge and uncertainty, Goodman argued that risk assessment and decision-making could be an exact science, despite the uncertainties. In his view, Bayesian statistics is a critical component of this science because a Bayesian analysis produces the probabilities of future outcomes. Indeed, Goodman maintained that the Bayesian machinery, following the rules of conditional probability, offered the best legitimate inference from available data. We give an example of an informative prior in a recent study of Steller sea lion spatial use patterns in Alaska.
Maps Showing Seismic Landslide Hazards in Anchorage, Alaska
Jibson, Randall W.; Michael, John A.
2009-01-01
The devastating landslides that accompanied the great 1964 Alaska earthquake showed that seismically triggered landslides are one of the greatest geologic hazards in Anchorage. Maps quantifying seismic landslide hazards are therefore important for planning, zoning, and emergency-response preparation. The accompanying maps portray seismic landslide hazards for the following conditions: (1) deep, translational landslides, which occur only during great subduction-zone earthquakes that have return periods of ~300-900 yr; (2) shallow landslides for a peak ground acceleration (PGA) of 0.69 g, which has a return period of 2,475 yr, or a 2 percent probability of exceedance in 50 yr; and (3) shallow landslides for a PGA of 0.43 g, which has a return period of 475 yr, or a 10 percent probability of exceedance in 50 yr. Deep, translational landslide hazard zones were delineated based on previous studies of such landslides, with some modifications based on field observations of locations of deep landslides. Shallow-landslide hazards were delineated using a Newmark-type displacement analysis for the two probabilistic ground motions modeled.
Maps showing seismic landslide hazards in Anchorage, Alaska
Jibson, Randall W.
2014-01-01
The devastating landslides that accompanied the great 1964 Alaska earthquake showed that seismically triggered landslides are one of the greatest geologic hazards in Anchorage. Maps quantifying seismic landslide hazards are therefore important for planning, zoning, and emergency-response preparation. The accompanying maps portray seismic landslide hazards for the following conditions: (1) deep, translational landslides, which occur only during great subduction-zone earthquakes that have return periods of ~300-900 yr; (2) shallow landslides for a peak ground acceleration (PGA) of 0.69 g, which has a return period of 2,475 yr, or a 2 percent probability of exceedance in 50 yr; and (3) shallow landslides for a PGA of 0.43 g, which has a return period of 475 yr, or a 10 percent probability of exceedance in 50 yr. Deep, translational landslide hazards were delineated based on previous studies of such landslides, with some modifications based on field observations of locations of deep landslides. Shallow-landslide hazards were delineated using a Newmark-type displacement analysis for the two probabilistic ground motions modeled.
Improving deep convolutional neural networks with mixed maxout units
Liu, Fu-xian; Li, Long-yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737
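The pooling rule described in this abstract can be illustrated with a minimal NumPy sketch, assuming the "exponential probabilities" are a softmax over the k feature maps and that a per-unit Bernoulli draw balances the maximum against the expectation. The function name and the mixing probability p_max are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mixout_pool(feature_maps, p_max=0.5, rng=None):
    """Toy mixout-style pooling over k feature maps of identical shape.

    feature_maps : array of shape (k, H, W) -- activations from k different
                   convolutional transforms of the same input.
    p_max        : Bernoulli probability of keeping the maximum (assumed value).
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(feature_maps)                      # (k, H, W)
    # "Exponential probabilities" of the feature maps (softmax over k).
    e = np.exp(z - z.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    expected = (probs * z).sum(axis=0)                # expectation under softmax
    maximum = z.max(axis=0)                           # classic maxout output
    # Bernoulli mask balances maximum values against expected values per unit.
    mask = rng.random(expected.shape) < p_max
    return np.where(mask, maximum, expected)

# Example: pool k = 4 random feature maps of size 8x8.
out = mixout_pool(np.random.randn(4, 8, 8), p_max=0.5)
print(out.shape)  # (8, 8)
```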
Recognition of pigment network pattern in dermoscopy images based on fuzzy classification of pixels.
Garcia-Arroyo, Jose Luis; Garcia-Zapirain, Begonya
2018-01-01
One of the most relevant dermoscopic patterns is the pigment network. An innovative method of pattern recognition is presented for its detection in dermoscopy images. It consists of two steps. In the first one, by means of a supervised machine learning process and after performing the extraction of different colour and texture features, a fuzzy classification of pixels into the three categories present in the pattern's definition ("net", "hole" and "other") is carried out. This enables the three corresponding fuzzy sets to be created and, as a result, the three probability images that map them out are generated. In the second step, the pigment network pattern is characterised from a parameterisation process -derived from the system specification- and the subsequent extraction of different features calculated from the combinations of image masks extracted from the probability images, corresponding to the alpha-cuts obtained from the fuzzy sets. The method was tested on a database of 875 images -by far the largest used in the state of the art to detect pigment network- extracted from a public Atlas of Dermoscopy, obtaining an AUC of 0.912 and 88% accuracy, with 90.71% sensitivity and 83.44% specificity. The main contribution of this method is the design of the algorithm itself, which is highly innovative and could also be used to deal with other pattern recognition problems of a similar nature. Other contributions are: 1. The good performance in discriminating between the pattern and the disturbing artefacts -which means that no prior preprocessing is required in this method- and between the pattern and other dermoscopic patterns; 2. It puts forward a new methodological approach for work of this kind, introducing the system specification as a required step prior to algorithm design and development, with this specification serving as the basis for a required parameterisation -in the form of configurable parameters (with their value ranges) and set threshold values- of the algorithm and the subsequent conducting of the experiments. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Isakson, Steve Wesley
2001-12-01
Well-known principles of physics explain why resolution restrictions occur in images produced by optical diffraction-limited systems. The limitations involved are present in all diffraction-limited imaging systems, including acoustical and microwave. In most circumstances, however, prior knowledge about the object and the imaging system can lead to resolution improvements. In this dissertation I outline a method to incorporate prior information into the process of reconstructing images to superresolve the object beyond the above limitations. This dissertation research develops the details of this methodology. The approach can provide the most-probable global solution employing a finite number of steps in both far-field and near-field images. In addition, in order to overcome the effects of noise present in any imaging system, this technique provides a weighted image that quantifies the likelihood of various imaging solutions. By utilizing Bayesian probability, the procedure is capable of incorporating prior information about both the object and the noise to overcome the resolution limitation present in many imaging systems. Finally I will present an imaging system capable of detecting the evanescent waves missing from far-field systems, thus improving the resolution further.
What you see is what you expect: rapid scene understanding benefits from prior experience.
Greene, Michelle R; Botros, Abraham P; Beck, Diane M; Fei-Fei, Li
2015-05-01
Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.
The challenge of modelling and mapping the future distribution and impact of invasive alien species
Robert C. Venette
2015-01-01
Invasions from alien species can jeopardize the economic, environmental or social benefits derived from biological systems. Biosecurity measures seek to protect those systems from accidental or intentional introductions of species that might become injurious. Pest risk maps convey how the probability of invasion by an alien species or the potential consequences of that...
ERIC Educational Resources Information Center
McKean, Cristina; Letts, Carolyn; Howard, David
2013-01-01
Neighbourhood Density (ND) and Phonotactic Probability (PP) influence word learning in children. This influence appears to change over development but the separate developmental trajectories of influence of PP and ND on word learning have not previously been mapped. This study examined the cross-sectional developmental trajectories of influence of…
NASA Technical Reports Server (NTRS)
Potter, Christopher
2015-01-01
This study evaluated the cost-effective and timely use of Landsat imagery to map and monitor emergent aquatic plant biomass and to filter satellite image products for the most probable locations of water hyacinth coverage in the Delta based on field observations collected immediately after satellite image acquisition.
ERIC Educational Resources Information Center
Chen, Tina; Starns, Jeffrey J.; Rotello, Caren M.
2015-01-01
The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are…
NASA Astrophysics Data System (ADS)
Yang, P.; Ng, T. L.; Yang, W.
2015-12-01
Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.
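The bootstrapping side of such a comparison can be sketched briefly. The snippet below estimates the Clayton copula parameter by inversion of Kendall's tau (a simplification of full maximum likelihood) and builds a percentile bootstrap CI; the synthetic drought data, the 95% level, and the function names are illustrative assumptions rather than the study's setup.

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_theta(u, v):
    """Clayton copula parameter via Kendall's tau inversion: theta = 2*tau/(1-tau)."""
    tau, _ = kendalltau(u, v)
    return 2.0 * tau / (1.0 - tau)

def bootstrap_ci(durations, severities, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap CI for the Clayton dependence parameter."""
    rng = np.random.default_rng(seed)
    n = len(durations)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample drought events with replacement
        estimates.append(clayton_theta(durations[idx], severities[idx]))
    lo, hi = np.percentile(estimates, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Example with synthetic drought duration/severity pairs (n = 50).
rng = np.random.default_rng(1)
dur = rng.lognormal(1.0, 0.5, 50)
sev = dur * rng.gamma(2.0, 1.0, 50)            # correlated severity
print(bootstrap_ci(dur, sev))
```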
Minimal entropy approximation for cellular automata
NASA Astrophysics Data System (ADS)
Fukś, Henryk
2014-02-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.
Identification of land degradation evidences in an organic farm using probability maps (Croatia)
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Bogunovic, Igor; Estebaranz, Ferran
2017-04-01
Land degradation is a biophysical process with important impacts on society, economy and policy. Areas affected by land degradation do not provide services of sufficient quality or capacity to fulfil the needs of the communities that depend on them (Amaya-Romero et al., 2015; Beyene, 2015; Lanckriet et al., 2015). Agricultural activities are one of the main causes of land degradation (Kraaijvanger and Veldkamp, 2015), especially when they decrease soil organic matter (SOM), a crucial element for soil fertility. In temperate areas, the critical level of SOM concentration in agricultural soils is 3.4%. Below this level there is a potential decrease of soil quality (Loveland and Webb, 2003). However, no previous work had been carried out in other environments, such as the Mediterranean. It is important to identify and map the spatial distribution of potentially degraded land in order to locate the areas that need restoration (Brevik et al., 2016; Pereira et al., 2017). The aim of this work is to assess the spatial distribution of areas with evidence of land degradation (SOM below 3.4%) using probability maps in an organic farm located in Croatia. In order to find the best method, we compared several probability methods, such as Ordinary Kriging (OK), Simple Kriging (SK), Universal Kriging (UK), Indicator Kriging (IK), Probability Kriging (PK) and Disjunctive Kriging (DK). The study area is located on the Istria peninsula (45°3' N; 14°2' E), with a total area of 182 ha. One hundred eighty-two soil samples (0-30 cm) were collected during July of 2015 and SOM was assessed using the wet combustion procedure. The assessment of the best probability method was carried out using the leave-one-out cross-validation method. The probability method with the lowest Root Mean Squared Error (RMSE) was considered the most accurate. The results showed that the best method to predict the probability of potential land degradation was SK with an RMSE of 0.635, followed by DK (RMSE=0.636), UK (RMSE=0.660), OK (RMSE=0.660), IK (RMSE=0.722) and PK (RMSE=1.661). According to the most accurate method, the majority of the area studied has a high probability of being degraded. Measures are needed to restore this area. References Amaya-Romero, M., Abd-Elmabod, S., Munoz-Rojas, M., Castellano, G., Ceacero, C., Alvarez, S., Mendez, M., De la Rosa, D. (2015) Evaluating soil threats under climate change scenarios in the Andalusia region, Southern Spain. Land Degradation and Development, 26, 441-449. Beyene, F. (2015) Incentives and challenges in community based rangeland management: Evidence from Eastern Ethiopia. Land Degradation and Development, 26, 502-509. Brevik, E., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Baumgarten, A., Jordán, A. (2016) Historical perspectives and future needs in soil mapping, classification and pedological modelling, Geoderma, 264, Part B, 256-274. Kraaijvanger, R., Veldkamp, T. (2015) Grain productivity, fertilizer response and nutrient balance of farming systems in Tigray, Ethiopia: A multiperspective view in relation to soil fertility degradation. Land Degradation and Development, 26, 701-710. Lanckriet, S., Derudder, B., Naudts, J., Bauer, H., Deckers, J., Haile, M., Nyssen, J. (2015) A political ecology perspective of land degradation in the North Ethiopian Highlands. Land Degradation and Development, 26, 521-530. Loveland, P., Webb, J. (2003) Is there a critical level of organic matter in the agricultural soils of temperate regions: a review. Soil & Tillage Research, 70, 1-18.
Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B., Smetanova, A., Depellegrin, D., Misiune, I., Novara, A., Cerda, A. Soil mapping and process modelling for sustainable land management. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006
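The leave-one-out evaluation used above to rank the interpolation methods by RMSE can be sketched as follows. The inverse-distance-weighting predictor is only a stand-in for the kriging variants (a full geostatistical fit would require a dedicated library), and the synthetic SOM values, grid size, and function names are illustrative assumptions.

```python
import numpy as np

def idw_predict(x_train, y_train, z_train, x0, y0, power=2.0):
    """Inverse-distance-weighted prediction at (x0, y0) -- stand-in interpolator."""
    d = np.hypot(x_train - x0, y_train - y0)
    if np.any(d == 0):
        return z_train[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_train) / np.sum(w)

def loocv_rmse(x, y, z, predictor):
    """Leave-one-out RMSE: predict each sample from all remaining samples."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        z_hat = predictor(x[mask], y[mask], z[mask], x[i], y[i])
        errs.append(z_hat - z[i])
    return np.sqrt(np.mean(np.square(errs)))

# Example: 182 synthetic SOM samples (values in %) scattered over a 1-km square.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1000, 182), rng.uniform(0, 1000, 182)
som = 3.4 + 0.002 * x / 1000 + rng.normal(0, 0.6, 182)
print("LOOCV RMSE:", loocv_rmse(x, y, som, idw_predict))
```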
Westerman, Drew A.; Merriman, Katherine R.; De Lanois, Jeanne L.; Berenbrock, Charles
2013-01-01
Precipitation that fell from April 19 through May 3, 2011, resulted in widespread flooding across northern and eastern Arkansas and southern Missouri. The first storm produced a total of approximately 16 inches of precipitation over an 8-day period, and the following storms produced as much as 12 inches of precipitation over a 2-day period. Moderate to major flooding occurred quickly along many streams within Arkansas and Missouri (including the Black, Cache, Illinois, St. Francis, and White Rivers) at levels that had not been seen since the historic 1927 floods. The 2011 flood claimed an estimated 21 lives in Arkansas and Missouri, and damage caused by the flooding resulted in a Federal Disaster Declaration for 59 Arkansas counties that received Federal or State assistance. To further the goal of documenting and understanding floods, the U.S. Geological Survey, in cooperation with the Federal Emergency Management Agency, the U.S. Army Corps of Engineers–Little Rock and Memphis Districts, and Arkansas Natural Resources Commission, conducted a study to summarize meteorological and hydrological conditions before the flood; computed flood-peak magnitudes for 39 streamgages; estimated annual exceedance probabilities for 37 of those streamgages; determined the joint probabilities for 11 streamgages paired to the Mississippi River at Helena, Arkansas, which refers to the probability that locations on two paired streams simultaneously experience floods of a magnitude greater than or equal to a given annual exceedance probability; collected high-water marks; constructed flood-peak inundation maps showing maximum flood extent and water depths; and summarized flood damages and effects. For the period of record used in this report, peak-of-record stage occurred at 24 of the 39 streamgages, and peak-of-record streamflow occurred at 13 of the 30 streamgages where streamflow was determined. Annual exceedance probabilities were estimated to be less than 0.5 percent at three streamgages. The joint probability values for streamgages paired with the Mississippi River at Helena, Ark., streamgage indicate a low probability of concurrent flooding with the paired streamgages. The inundation maps show the flood-peak extent and water depth of flooding for two stream reaches on the White River and two on the Black River; the vicinities of the communities of Holly Grove and Cotton Plant, Ark.; a reach of the White River that includes the crossing of Interstate 40 north of De Valls Bluff, Ark.; and the Tailwaters of Beaver Dam near Eureka Springs, Ark., Table Rock Dam near Branson, Mo., and Bull Shoals Dam near Flippin, Ark. The data and inundation maps can be used for flood response, recovery, and planning efforts by Federal, State, and local agencies.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based on the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class likelihoods. The mixture model makes it possible to represent heterogeneous thematic classes that are poorly fitted by a single unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) of the single-polarization intensity data to obtain the initial partition. We then use the Wishart PDF of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. These posterior class probabilities provide the prior probability estimates of each class and the weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method performs better.
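The per-pixel E-step of such a Wishart mixture can be sketched in a few lines: the log-likelihood of each class keeps only the terms that depend on the class (the class-independent normalisation of the complex Wishart PDF cancels in the posterior). The example covariance, class centres, priors, and number of looks below are illustrative assumptions.

```python
import numpy as np

def wishart_log_post(C, class_covs, priors, looks):
    """Posterior class probabilities for one pixel under a Wishart mixture.

    C          : (d, d) sample covariance matrix of the pixel (d polarimetric channels)
    class_covs : list of (d, d) class centre covariance matrices Sigma_k
    priors     : prior class probabilities pi_k
    looks      : number of looks L
    Terms independent of the class are omitted because they cancel in the posterior.
    """
    logp = []
    for sigma, pi in zip(class_covs, priors):
        inv = np.linalg.inv(sigma)
        _, logdet = np.linalg.slogdet(sigma)
        ll = -looks * (logdet + np.trace(inv @ C).real)
        logp.append(ll + np.log(pi))
    logp = np.array(logp)
    logp -= logp.max()                     # numerical stabilisation
    post = np.exp(logp)
    return post / post.sum()               # posterior responsibilities for this pixel

# Example: two classes, d = 3 channels, one Hermitian sample covariance.
d = 3
C = np.eye(d) + 0.1j * (np.triu(np.ones((d, d)), 1) - np.tril(np.ones((d, d)), -1))
covs = [np.eye(d), 2.0 * np.eye(d)]
print(wishart_log_post(C, covs, priors=[0.5, 0.5], looks=4))
```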
Jarmolowicz, David P; Sofis, Michael J; Darden, Alexandria C
2016-07-01
Although progressive ratio (PR) schedules have been used to explore effects of a range of reinforcer parameters (e.g., magnitude, delay), effects of reinforcer probability remain underexplored. The present project used independently progressing concurrent PR PR schedules to examine effects of reinforcer probability on PR breakpoint (the highest completed ratio prior to a session-terminating 300-s pause) and response allocation. The probability of reinforcement on one lever remained at 100% across all conditions while the probability of reinforcement on the other lever was systematically manipulated (i.e., 100%, 50%, 25%, 12.5%, and a replication of 25%). Breakpoints on the manipulated lever systematically decreased with decreasing reinforcer probabilities while breakpoints on the control lever remained unchanged. Patterns of switching between the two levers were well described by a choice-by-choice unit price model that accounted for the hyperbolic discounting of the value of probabilistic reinforcers. Copyright © 2016 Elsevier B.V. All rights reserved.
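One common way to express the hyperbolic discounting of probabilistic reinforcers referred to here is the odds-against formulation; the symbols below are a generic illustration, not necessarily the authors' exact parameterisation:

V = A / (1 + h\,\theta), \qquad \theta = \frac{1 - p}{p},

where V is the subjective value of a reinforcer of magnitude A delivered with probability p, \theta is the odds against reinforcement, and h is a fitted discounting parameter. For example, with p = 0.25 and h = 1, \theta = 3 and V = A/4, so the probabilistic reinforcer is worth a quarter of its certain counterpart.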
Posterior error probability in the Mu-11 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.
NASA Astrophysics Data System (ADS)
Goto, Shin-itiro; Umeno, Ken
2018-03-01
Maps on a parameter space for expressing distribution functions are exactly derived from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps defined on a subset of the real line, whose invariant probability distribution is a Cauchy distribution with certain parameters. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and it is then shown that the derived maps can be characterized. Also, with a symplectic structure induced from the statistical structure, symplectic and information geometric aspects of the derived maps are discussed.
Towards a Global Land Subsidence Map
NASA Astrophysics Data System (ADS)
Erkens, G.; Kooi, H.; Sutanudjaja, E.
2017-12-01
Land subsidence is a global problem, but a global land subsidence map is not available yet. Such a map is crucial to raise global awareness of land subsidence, as land subsidence causes extensive damage (probably in the order of billions of dollars annually). Insights into the rates of subsidence are particularly relevant for low-lying deltas and coastal zones, for which any further loss in elevation is unwanted. With a global land subsidence map, relative sea-level rise predictions may be improved, contributing to global flood risk calculations. In this contribution, we discuss the approach and progress we have made so far in making a global land subsidence map. The first results will be presented and discussed, and we give an outlook on the work needed to derive a global land subsidence map.
Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.
Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J
2016-03-01
Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.
Map and Database of Probable and Possible Quaternary Faults in Afghanistan
Ruleman, C.A.; Crone, A.J.; Machette, M.N.; Haller, K.M.; Rukstales, K.S.
2007-01-01
The U.S. Geological Survey (USGS) with support from the U.S. Agency for International Development (USAID) mission in Afghanistan, has prepared a digital map showing the distribution of probable and suspected Quaternary faults in Afghanistan. This map is a key component of a broader effort to assess and map the country's seismic hazards. Our analyses of remote-sensing imagery reveal a complex array of tectonic features that we interpret to be probable and possible active faults within the country and in the surrounding border region. In our compilation, we have mapped previously recognized active faults in greater detail, and have categorized individual features based on their geomorphic expression. We assigned mapped features to eight newly defined domains, each of which contains features that appear to have similar styles of deformation. The styles of deformation associated with each domain provide insight into the kinematics of the modern tectonism, and define a tectonic framework that helps constrain deformational models of the Alpine-Himalayan orogenic belt. The modern fault movements, deformation, and earthquakes in Afghanistan are driven by the collision between the northward-moving Indian subcontinent and Eurasia. The patterns of probable and possible Quaternary faults generally show that much of the modern tectonic activity is related to transfer of plate-boundary deformation across the country. The left-lateral, strike-slip Chaman fault in southeastern Afghanistan probably has the highest slip rate of any fault in the country; to the north, this slip is distributed onto several fault systems. At the southern margin of the Kabul block, the style of faulting changes from mainly strike-slip motion associated with the boundary between the Indian and Eurasian plates, to transpressional and transtensional faulting. North and northeast of the Kabul block, we recognized a complex pattern of potentially active strike-slip, thrust, and normal faults that form a conjugate shear system in a transpressional region of the Trans-Himalayan orogenic belt. The general patterns and orientations of faults and the styles of deformation that we interpret from the imagery are consistent with the styles of faulting determined from focal mechanisms of historical earthquakes. Northwest-trending strike-slip fault zones are cut and displaced by younger, southeast-verging thrust faults; these relations define the interaction between northwest-southeast-oriented contraction and northwest-directed extrusion in the western Himalaya, Pamir, and Hindu Kush regions. Transpression extends into north-central Afghanistan where north-verging contraction along the east-west-trending Alburz-Marmul fault system interacts with northwest-trending strike-slip faults. Pressure ridges related to thrust faulting and extensional basins bounded by normal faults are located at major stepovers in these northwest-trending strike-slip systems. In contrast, young faulting in central and western Afghanistan indicates that the deformation is dominated by extension where strike-slip fault zones transition into regions of normal faults. In addition to these initial observations, our digital map and database provide a foundation that can be expanded, complemented, and modified as future investigations provide more detailed information about the location, characteristics, and history of movement on Quaternary faults in Afghanistan.
Averaged kick maps: less noise, more signal…and probably less bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor
2009-09-01
Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σA) weighted. Analysis shows that they are comparable and correspond better to the final model than σA and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
Barrdahl, Myrto; Rudolph, Anja; Hopper, John L.; Southey, Melissa C.; Broeks, Annegien; Fasching, Peter A.; Beckmann, Matthias W.; Gago‐Dominguez, Manuela; Castelao, J. Esteban; Guénel, Pascal; Truong, Thérèse; Bojesen, Stig E.; Gapstur, Susan M.; Gaudet, Mia M.; Brenner, Hermann; Arndt, Volker; Brauch, Hiltrud; Hamann, Ute; Mannermaa, Arto; Lambrechts, Diether; Jongen, Lynn; Flesch‐Janys, Dieter; Thoene, Kathrin; Couch, Fergus J.; Giles, Graham G.; Simard, Jacques; Goldberg, Mark S.; Figueroa, Jonine; Michailidou, Kyriaki; Bolla, Manjeet K.; Dennis, Joe; Wang, Qin; Eilber, Ursula; Behrens, Sabine; Czene, Kamila; Hall, Per; Cox, Angela; Cross, Simon; Swerdlow, Anthony; Schoemaker, Minouk J.; Dunning, Alison M.; Kaaks, Rudolf; Pharoah, Paul D.P.; Schmidt, Marjanka; Garcia‐Closas, Montserrat; Easton, Douglas F.; Milne, Roger L.
2017-01-01
Investigating the most likely causal variants identified by fine-mapping analyses may improve the power to detect gene-environment interactions. We assessed the interplay between 70 single nucleotide polymorphisms identified by genetic fine-scale mapping of susceptibility loci and 11 epidemiological breast cancer risk factors in relation to breast cancer. Analyses were conducted on up to 58,573 subjects (26,968 cases and 31,605 controls) from the Breast Cancer Association Consortium, in one of the largest studies of its kind. Analyses were carried out separately for estrogen receptor (ER) positive (ER+) and ER negative (ER-) disease. The Bayesian False Discovery Probability (BFDP) was computed to assess the noteworthiness of the results. Four potential gene-environment interactions were identified as noteworthy (BFDP < 0.80) when assuming a true prior interaction probability of 0.01. The strongest interaction result in relation to overall breast cancer risk was found between CFLAR-rs7558475 and current smoking (OR_int = 0.77, 95% CI: 0.67-0.88, p_int = 1.8 × 10^-4). The interaction with the strongest statistical evidence was found between 5q14-rs7707921 and alcohol consumption (OR_int = 1.36, 95% CI: 1.16-1.59, p_int = 1.9 × 10^-5) in relation to ER- disease risk. The remaining two gene-environment interactions were also identified in relation to ER- breast cancer risk and were found between 3p21-rs6796502 and age at menarche (OR_int = 1.26, 95% CI: 1.12-1.43, p_int = 1.8 × 10^-4) and between 8q23-rs13267382 and age at first full-term pregnancy (OR_int = 0.89, 95% CI: 0.83-0.95, p_int = 5.2 × 10^-4). While these results do not suggest any strong gene-environment interactions, our results may still be useful to inform experimental studies. These may, in turn, shed light on the potential interactions observed. PMID:28670784
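The noteworthiness criterion used above is simply Bayes' theorem applied to a Bayes factor in favour of the null together with the stated prior interaction probability of 0.01 and the 0.80 threshold; a minimal sketch follows, in which the example Bayes factor is made up for illustration.

```python
def bfdp(bf_null, prior_alt=0.01):
    """Bayesian False Discovery Probability: posterior probability of the null.

    bf_null   : Bayes factor in favour of the null hypothesis (no interaction)
    prior_alt : prior probability that a true interaction exists
    """
    prior_null = 1.0 - prior_alt
    post_odds_null = bf_null * prior_null / prior_alt
    return post_odds_null / (1.0 + post_odds_null)

# A result is "noteworthy" when BFDP < 0.80 (i.e. the null is not too probable).
example_bf = 0.03   # hypothetical Bayes factor strongly against the null
print(bfdp(example_bf), bfdp(example_bf) < 0.80)   # ~0.75, True
```

Note how, with a prior interaction probability of only 0.01, even a Bayes factor of 0.03 against the null barely clears the 0.80 threshold, which is why so few interactions are flagged as noteworthy.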
ERIC Educational Resources Information Center
Judson, Eugene
2012-01-01
Groups of children at a science museum were pre- and post-assessed with a type of concept map, known as personal meaning maps, to determine what new understandings, if any, they were gaining from participation in a series of structured hands-on activities about bones and the process of bones healing. Close examination was made regarding whether…
NASA Astrophysics Data System (ADS)
Feng, X.; Sheng, Y.; Condon, A. J.; Paramygin, V. A.; Hall, T.
2012-12-01
A cost-effective method, JPM-OS (Joint Probability Method with Optimal Sampling), for determining storm response and inundation return frequencies was developed and applied to quantify the hazard of hurricane storm surge and inundation along the southwest Florida (US) coast (Condon and Sheng 2012). The JPM-OS uses piecewise multivariate regression splines coupled with dimension-adaptive sparse grids to enable the generation of a base flood elevation (BFE) map. Storms are characterized by their landfall characteristics (pressure deficit, radius to maximum winds, forward speed, heading, and landfall location) and a sparse grid algorithm determines the optimal set of storm parameter combinations so that the inundation from any other storm parameter combination can be determined. The end result is a sample of a few hundred (197 for SW FL) optimal storms which are simulated using a dynamically coupled storm surge / wave modeling system CH3D-SSMS (Sheng et al. 2010). The limited historical climatology (1940-2009) is explored to develop probabilistic characterizations of the five storm parameters. The probability distributions are discretized and the inundation response of all parameter combinations is determined by interpolation in the five-dimensional space of the optimal storms. The surge response and the associated joint probability of the parameter combination are used to determine the flood elevation with a 1% annual probability of occurrence. The limited historical data constrain the accuracy of the PDFs of the hurricane characteristics, which in turn affects the accuracy of the calculated BFE maps. To offset the deficiency of the limited historical dataset, this study presents a different method for producing coastal inundation maps. Instead of using the historical storm data, here we adopt 33,731 tracks that represent the storm climatology in the North Atlantic basin and along the SW Florida coast. This large quantity of hurricane tracks is generated from a new statistical model which has been used for Western North Pacific (WNP) tropical cyclone (TC) genesis (Hall 2011) as well as North Atlantic tropical cyclone genesis (Hall and Jewson 2007). The introduction of these tracks compensates for the shortage of historical samples and allows for more reliable PDFs required for the implementation of JPM-OS. Using the 33,731 tracks and JPM-OS, an optimal storm ensemble is determined. This approach results in different storms/winds for storm surge and inundation modeling, and produces different Base Flood Elevation maps for coastal regions. Coastal inundation maps produced by the two different methods will be discussed in detail in the poster paper.
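The final step of a JPM-type calculation can be sketched as a discrete sum over storm-parameter combinations: the 1% (base flood) elevation is the level whose annual exceedance, obtained by weighting each combination's simulated surge by its joint probability and an annual storm rate, equals 0.01. The arrays, the annual rate, and the function names below are placeholders, not the study's values.

```python
import numpy as np

def annual_exceedance(z, surge, joint_prob, annual_rate):
    """Annual rate of surge exceeding elevation z (discrete JPM sum)."""
    return annual_rate * np.sum(joint_prob[surge > z])

def base_flood_elevation(surge, joint_prob, annual_rate, target=0.01):
    """Find the elevation exceeded with the target annual probability (e.g. 1%)."""
    levels = np.sort(np.unique(surge))
    for z in levels[::-1]:                       # scan from highest surge downwards
        if annual_exceedance(z, surge, joint_prob, annual_rate) >= target:
            return z
    return levels[0]

# Placeholder example: 197 "optimal storms" with surge responses and joint weights.
rng = np.random.default_rng(3)
surge = rng.gamma(2.0, 1.2, 197)                 # simulated peak surge (m) per storm
w = rng.random(197); joint_prob = w / w.sum()    # joint probability of each combination
print(base_flood_elevation(surge, joint_prob, annual_rate=0.35))
```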
Improving diagnostic accuracy of prostate carcinoma by systematic random map-biopsy.
Szabó, J; Hegedûs, G; Bartók, K; Kerényi, T; Végh, A; Romics, I; Szende, B
2000-01-01
Systematic random rectal ultrasound directed map-biopsy of the prostate was performed in 77 RDE (rectal digital examination) positive and 25 RDE negative cases, if applicable. Hypoechoic areas were found in 30% of RDE positive and in 16% of RDE negative cases. The score for carcinoma in the hypoechoic areas was 6.5% in RDE positive and 0% in RDE negative cases, whereas systematic map biopsy detected 62% carcinomas in RDE positive, and 16% carcinomas in RDE negative patients. The probability of positive diagnosis of prostate carcinoma increased in parallel with the number of biopsy samples/case. The importance of systematic map biopsy is emphasized.
Landsat for practical forest type mapping - A test case
NASA Technical Reports Server (NTRS)
Bryant, E.; Dodge, A. G., Jr.; Warren, S. D.
1980-01-01
Computer classified Landsat maps are compared with a recent conventional inventory of forest lands in northern Maine. Over the 196,000 hectare area mapped, estimates of the areas of softwood, mixed wood and hardwood forest obtained by a supervised classification of the Landsat data and a standard inventory based on aerial photointerpretation, probability proportional to prediction, field sampling and a standard forest measurement program are found to agree to within 5%. The cost of the Landsat maps is estimated to be $0.065/hectare. It is concluded that satellite techniques are worth developing for forest inventories, although they are not yet refined enough to be incorporated into current practical inventories.
Allocating Fire Mitigation Funds on the Basis of the Predicted Probabilities of Forest Wildfire
Ronald E. McRoberts; Greg C. Liknes; Mark D. Nelson; Krista M. Gebert; R. James Barbour; Susan L. Odell; Steven C. Yaddof
2005-01-01
A logistic regression model was used with map-based information to predict the probability of forest fire for forested areas of the United States. Model parameters were estimated using a digital layer depicting the locations of wildfires and satellite imagery depicting thermal hotspots. The area of the United States in the upper 50th percentile with respect to...
Marc-André Parisien; Dave R. Junor; Victor G. Kafka
2006-01-01
This study used a rule-based approach to prioritize locations of fuel treatments in the boreal mixedwood forest of western Canada. The burn probability (BP) in and around Prince Albert National Park in Saskatchewan was mapped using the Burn-P3 (Probability, Prediction, and Planning) model. Fuel treatment locations were determined according to three scenarios and five...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jiamin; Hoffman, Joanne; Zhao, Jocelyn
2016-07-15
Purpose: To develop an automated system for mediastinal lymph node detection and station mapping for chest CT. Methods: The contextual organs, trachea, lungs, and spine are first automatically identified to locate the region of interest (ROI) (mediastinum). The authors employ shape features derived from Hessian analysis, local object scale, and circular transformation that are computed per voxel in the ROI. Eight more anatomical structures are simultaneously segmented by multiatlas label fusion. Spatial priors are defined as the relative multidimensional distance vectors corresponding to each structure. Intensity, shape, and spatial prior features are integrated and parsed by a random forest classifier for lymph node detection. The detected candidates are then segmented by a subsequent curve evolution process. Texture features are computed on the segmented lymph nodes and a support vector machine committee is used for final classification. For lymph node station labeling, based on the segmentation results of the above anatomical structures, the textual definitions of the mediastinal lymph node map according to the International Association for the Study of Lung Cancer are converted into a patient-specific color-coded CT image, where the lymph node station can be automatically assigned for each detected node. Results: The chest CT volumes from 70 patients with 316 enlarged mediastinal lymph nodes are used for validation. For lymph node detection, their system achieves 88% sensitivity at eight false positives per patient. For lymph node station labeling, 84.5% of lymph nodes are correctly assigned to their stations. Conclusions: Multiple-channel shape, intensity, and spatial prior features aggregated by a random forest classifier improve mediastinal lymph node detection on chest CT. Using the location information of segmented anatomic structures from the multiatlas formulation enables accurate identification of lymph node stations.
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
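The predictive idea behind this approach can be sketched with a small Monte-Carlo simulation: draw a treatment difference from the prior elicited from clinicians, simulate the trial estimate for a candidate sample size, and tally how often the 95% interval leads to each end-of-trial decision relative to the point of clinical equivalence. The normal prior, the normal-outcome trial, and the specific numbers are assumptions made purely for illustration.

```python
import numpy as np

def predicted_decision_probs(n_per_arm, delta_equiv, prior_mean, prior_sd,
                             sigma=1.0, n_sim=20000, z=1.96, seed=0):
    """Predicted probabilities of end-of-trial decisions for a two-arm trial.

    The null hypothesis is placed at the point of clinical equivalence
    delta_equiv; the decision depends on whether the 95% CI for the treatment
    difference lies above it, below it, or contains it (judgement reserved).
    """
    rng = np.random.default_rng(seed)
    se = sigma * np.sqrt(2.0 / n_per_arm)                 # SE of the difference in means
    delta_true = rng.normal(prior_mean, prior_sd, n_sim)  # draws from prior opinion
    delta_hat = rng.normal(delta_true, se)                # simulated trial estimates
    lo, hi = delta_hat - z * se, delta_hat + z * se
    prefer_new = np.mean(lo > delta_equiv)
    prefer_standard = np.mean(hi < delta_equiv)
    reserve = 1.0 - prefer_new - prefer_standard
    return prefer_new, prefer_standard, reserve

# Choose n so that the predicted probability of "reserve judgement" is acceptably small.
print(predicted_decision_probs(n_per_arm=100, delta_equiv=0.1,
                               prior_mean=0.3, prior_sd=0.2))
```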
NASA Astrophysics Data System (ADS)
Papanikolaοu, Ioannis D.; Roberts, Gerald P.; Deligiannakis, Georgios; Sakellariou, Athina; Vassilakis, Emmanuel
2013-06-01
The Sparta Fault system is a major structure approximately 64 km long that bounds the eastern flank of the Taygetos Mountain front (2407 m) and shapes the present-day Sparta basin. It was activated in 464 B.C., devastating the city of Sparta. This fault is examined and described in terms of its geometry, segmentation, drainage pattern and post-glacial throw, emphasising how these parameters vary along strike. Qualitative analysis of long profile catchments shows a significant difference in longitudinal convexity between the central and both the south and north parts of the fault system, leading to the conclusion of varying uplift rate along strike. Catchments are sensitive to differential uplift, as observed from the calculated differences in the steepness index ksn between the outer (ksn < 83) and central (121 < ksn < 138) parts of the Sparta Fault along strike. Based on fault throw-rates and the bedrock geology, a seismic hazard map has been constructed that extracts a locality-specific long-term earthquake recurrence record. Based on this map, the town of Sparta would experience a destructive event similar to that in 464 B.C. approximately every 1792 ± 458 years. Since no other major earthquake M ~ 7.0 has been generated by this system since 464 B.C., a future event could be imminent. As a result, not only time-independent but also time-dependent probabilities, which incorporate the concept of the seismic cycle, have been calculated for the town of Sparta, showing a considerably higher time-dependent probability of 3.0 ± 1.5% over the next 30 years compared to the time-independent probability of 1.66%. Half of the hanging wall area of the Sparta Fault can experience intensities ≥ IX, but belongs to the lowest category of seismic risk of the national seismic building code. In view of these relatively high calculated probabilities, a reassessment of the building code might be necessary.
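A small sketch shows how the two kinds of probability quoted above arise: the time-independent figure follows from a Poisson model with the 1792-year mean recurrence, while a time-dependent figure comes from a renewal model conditioned on the roughly 2,477 years elapsed since 464 B.C. The lognormal renewal distribution and its coefficient of variation below are assumptions for illustration only, not the authors' exact model.

```python
import numpy as np
from scipy.stats import lognorm

MEAN_RECURRENCE = 1792.0      # years (from the fault throw-rate record)
ELAPSED = 464 + 2013          # ~years since the 464 B.C. earthquake, as of 2013
WINDOW = 30.0                 # forecast window in years

# Time-independent (Poisson) probability of at least one event in 30 years.
p_poisson = 1.0 - np.exp(-WINDOW / MEAN_RECURRENCE)        # ~1.66%

# Time-dependent (renewal) probability, assuming a lognormal recurrence
# distribution with an assumed coefficient of variation of 0.5.
cov = 0.5
sigma = np.sqrt(np.log(1.0 + cov**2))
scale = MEAN_RECURRENCE / np.exp(sigma**2 / 2.0)            # so the mean matches
surv = lognorm.sf(ELAPSED, sigma, scale=scale)
p_renewal = (surv - lognorm.sf(ELAPSED + WINDOW, sigma, scale=scale)) / surv

print(f"time-independent 30-yr probability: {p_poisson:.2%}")
print(f"conditional (renewal) 30-yr probability: {p_renewal:.2%}")
```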
NASA Astrophysics Data System (ADS)
Papanikolaou, Ioannis; Roberts, Gerald; Deligiannakis, Georgios; Sakellariou, Athina; Vassilakis, Emmanuel
2013-04-01
The Sparta Fault system is a major structure approximately 64 km long that bounds the eastern flank of the Taygetos Mountain front (2,407 m) and shapes the present-day Sparta basin. It was activated in 464 B.C., devastating the city of Sparta. This fault is examined and described in terms of its geometry, segmentation, drainage pattern and postglacial throw, emphasizing how these parameters vary along strike. Qualitative analysis of long profile catchments shows a significant difference in longitudinal convexity between the central and both the south and north parts of the fault system, leading to the conclusion of varying uplift rate along strike. Catchments are sensitive to differential uplift, as observed from the calculated differences in the steepness index ksn between the outer (ksn<83) and central parts (121
Forecasting eruptions of Mauna Loa Volcano, Hawaii
NASA Astrophysics Data System (ADS)
Decker, Robert W.; Klein, Fred W.; Okamura, Arnold T.; Okubo, Paul G.
Past eruption patterns and various kinds of precursors are the two basic ingredients of eruption forecasts. The 39 historical eruptions of Mauna Loa from 1832 to 1984 have intervals as short as 104 days and as long as 9,165 days between the beginning of an eruption and the beginning of the next one. These recurrence times roughly fit a Poisson distribution pattern with a mean recurrence time of 1,459 days, yielding a probability of 22% (P=.22) for an eruption of Mauna Loa during any given year. The long recurrence times since 1950, however, suggest that eruption occurrence is not purely random, and that the current probability of an eruption during the next year may be as low as 6%. Seismicity beneath Mauna Loa increased for about two years prior to the 1975 and 1984 eruptions. Inflation of the summit area took place between eruptions, with the highest rates occurring for a year or two before and after the 1975 and 1984 eruptions. Volcanic tremor beneath Mauna Loa began 51 minutes prior to the 1975 eruption and 115 minutes prior to the 1984 eruption. Eruption forecasts were published in 1975, 1976, and 1983. The 1975 and 1983 forecasts, though vaguely worded, were qualitatively correct regarding the timing of the next eruption. The 1976 forecast was more quantitative; it was wrong on timing but accurate on forecasting the location of the 1984 eruption. This paper urges that future forecasts be specific so that they can be evaluated quantitatively.
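The quoted 22% annual probability follows directly from the Poisson recurrence model; as a worked check (a restatement of the abstract's own numbers, not new analysis):

P(\text{eruption within one year}) = 1 - e^{-365/1459} \approx 1 - e^{-0.25} \approx 0.22.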
NASA Astrophysics Data System (ADS)
Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.
2017-04-01
Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty that roughness coefficient values introduce into hydraulic models used for flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte-Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM to minimise input data uncertainty and to improve the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy to represent the estimated roughness values. Finally, Latin Hypercube Sampling has been used for the generation of different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data, from an extreme historical flash flood event, are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
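The roughness-sampling step can be sketched briefly, assuming a lognormal distribution fitted to the field-based Manning's n values; the distribution, its parameters, and the sample count below are illustrative assumptions rather than the study's calibrated values.

```python
import numpy as np
from scipy.stats import lognorm, qmc

# Assumed lognormal fit to the channel Manning's n estimates (illustrative values).
n_median, n_gsd = 0.035, 1.3                    # geometric median and geometric std. dev.
shape = np.log(n_gsd)

# Latin Hypercube Sampling of 200 roughness values for the Monte-Carlo runs.
sampler = qmc.LatinHypercube(d=1, seed=42)
u = sampler.random(n=200).ravel()               # stratified uniforms in (0, 1)
manning_n = lognorm.ppf(u, shape, scale=n_median)

print(manning_n.min(), np.median(manning_n), manning_n.max())
# Each sampled value would parameterise one hydraulic-model run; the wet/dry
# rasters from all runs are then stacked to form the inundation probability map.
```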
Application of remote sensing techniques to the geology of the bonanza volcanic center
NASA Technical Reports Server (NTRS)
Marrs, R. W.
1973-01-01
A program evaluating remote sensing as an aid to geologic mapping over the past four years is reported. Data tested in this evaluation include color and color infrared photography, multiband photography, low sun-angle photography, thermal infrared scanner imagery, and side-looking airborne radar. The relative utility of color and color infrared photography was tested as it was used to refine geologic maps in previously mapped areas, as field photographs during mapping, and in making photogeologic maps prior to field mapping. The latter technique served as a test of the maximum utility of the photography. In this application the photography was used successfully to locate 75% of all faults in a portion of the geologically complex Bonanza volcanic center and to map and correctly identify 93% of all Quaternary deposits and 62% of all areas of Tertiary volcanic outcrop in the area.
Atlas of depth-duration frequency of precipitation annual maxima for Texas
Asquith, William H.; Roussel, Meghan C.
2004-01-01
Ninety-six maps depicting the spatial variation of the depth-duration frequency of precipitation annual maxima for Texas are presented. The recurrence intervals represented are 2, 5, 10, 25, 50, 100, 250, and 500 years. The storm durations represented are 15 and 30 minutes; 1, 2, 3, 6, and 12 hours; and 1, 2, 3, 5, and 7 days. The maps were derived using geographically referenced parameter maps of probability distributions used in previously published research by the U.S. Geological Survey to model the magnitude and frequency of precipitation annual maxima for Texas. The maps in this report apply that research and update depth-duration frequency of precipitation maps available in earlier studies done by the National Weather Service.
Exarchakis, Georgios; Lücke, Jörg
2017-11-01
Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
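The generative side of such a discrete-latent model can be sketched compactly: sparse discrete latents are drawn from a categorical prior over a finite value set, combined linearly through a dictionary, and observed with Gaussian noise. The value set, dimensions, and prior probabilities below are illustrative, and the EM/expectation-truncation training loop is deliberately omitted.

```python
import numpy as np

def sample_discrete_sparse_coding(W, values, pi, sigma, n_samples, seed=0):
    """Generate data from a sparse coding model with discrete latents.

    W       : (D, H) dictionary of H components in D observed dimensions
    values  : finite set of possible latent values, e.g. [0, 1, 2] (0 => sparsity)
    pi      : prior probability of each value (learned from data in the full model)
    sigma   : observation noise standard deviation
    """
    rng = np.random.default_rng(seed)
    D, H = W.shape
    s = rng.choice(values, size=(n_samples, H), p=pi)    # discrete sparse latents
    x = s @ W.T + rng.normal(0.0, sigma, (n_samples, D)) # linear combination + noise
    return s, x

# Illustrative setup: 5 components, 10-dimensional observations,
# latents taking values {0, 1, 2} under a sparsity-inducing prior.
rng = np.random.default_rng(1)
W = rng.normal(0, 1, (10, 5))
s, x = sample_discrete_sparse_coding(W, values=[0, 1, 2],
                                     pi=[0.8, 0.15, 0.05], sigma=0.1,
                                     n_samples=1000)
print(x.shape, (s != 0).mean())   # average activation fraction ~0.2
```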