A probabilistic NF2 relational algebra for integrated information retrieval and database systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuhr, N.; Roelleke, T.
The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness, and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document is represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relation such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers as in most IR models. This effect can also be used for typical database queries involving imprecise attribute values, as well as for combinations of database and IR queries.
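The weight semantics described above can be sketched in a few lines. This is a hedged illustration under an independence assumption between operand relations, not the authors' implementation; the relation contents and document names are invented.

```python
# Minimal sketch (not the authors' implementation) of probability-weighted
# tuples: each tuple carries the probability that it belongs to its relation,
# and each operator returns a weighted relation again. Independence between
# operands is assumed; the relation contents are invented.
def select(relation, predicate):
    """Selection keeps qualifying tuples with their weights unchanged."""
    return {t: p for t, p in relation.items() if predicate(t)}

def intersect(r1, r2):
    """Weights of tuples present in both relations multiply under independence."""
    return {t: r1[t] * r2[t] for t in r1.keys() & r2.keys()}

def ranked(relation):
    """Order answers by decreasing probability, as in most IR models."""
    return sorted(relation.items(), key=lambda kv: -kv[1])

docs_about_ir = {"d1": 0.9, "d2": 0.4, "d3": 0.7}
docs_about_db = {"d1": 0.5, "d3": 0.8}
print(ranked(intersect(docs_about_ir, docs_about_db)))
```

The ranking falls out of the weights directly: the combined query prefers "d3" (0.7 × 0.8) over "d1" (0.9 × 0.5), which is the IR-style ranked answer the abstract describes.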
Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert
2015-12-08
Methods of generating a three-dimensional representation of an object in a reference plane from a depth map comprising distances from a reference point to pixels in an image of the object taken from that reference point. Weights are assigned to respective voxels in a three-dimensional grid along rays extending from the reference point through the pixels in the image, based on the distances in the depth map from the reference point to the respective pixels, and a height map comprising an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
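A heavily simplified sketch of the voxel-weighting idea, reduced to a single voxel column with invented depths and grid parameters (not the patented method): rays vote "free space" for voxels in front of the measured depth and "occupied" for the voxel at it, and the height value is taken where the accumulated evidence turns positive.

```python
import numpy as np

# Simplified one-column sketch (invented numbers, not the patented method):
# each ray depth votes per voxel, and the height value is the first voxel
# whose net evidence says "occupied".
def accumulate_weights(depths, n_voxels, max_depth):
    """Accumulate per-voxel occupancy evidence from several ray depths."""
    size = max_depth / n_voxels
    centers = (np.arange(n_voxels) + 0.5) * size
    weights = np.zeros(n_voxels)
    for d in depths:
        signed = centers - d
        weights += np.where(signed < 0, -1.0,             # before the hit: free
                            np.where(signed < size, 1.0,  # at the surface
                                     0.0))                # occluded: no vote
    return weights

def surface_height(weights, voxel_size):
    """Height of the first voxel with net 'occupied' evidence, if any."""
    occ = np.nonzero(weights > 0)[0]
    return (occ[0] + 0.5) * voxel_size if occ.size else None

w = accumulate_weights([4.9, 5.1, 5.0], n_voxels=10, max_depth=10.0)
```

Three noisy depth measurements near 5.0 agree on the voxel centred at 5.5 units, so `surface_height(w, 1.0)` recovers that height despite the per-ray noise.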
NASA Astrophysics Data System (ADS)
Kozine, Igor
2018-04-01
The paper suggests looking at probabilistic risk quantities and concepts through the prism of accepting one of two views: that a true value of risk exists, or that it does not. It is argued that discussions until now have been primarily focused on closely related topics that are nevertheless different from the topic of the current paper. The paper examines the operational consequences of adhering to each of the views and contrasts them. It is demonstrated that there are tangible operational differences in which probabilistic measures can be assessed, how they can be assessed, and how they can be interpreted. In particular, this concerns prediction intervals, the use of Bayes' rule, models of complete ignorance, hierarchical models of uncertainty, the assignment of probabilities over a possibility space, and the interpretation of derived probabilistic measures. Behavioural implications of favouring either view are also briefly described.
Dinov, Martin; Leech, Robert
2017-01-01
Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty, using the closest probabilistic analog to KM, Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM- and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements, from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to real, motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain-computer interfaces, where both training and applying models of microstates need to account for uncertainty.
Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses.
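As a rough illustration of what distinguishes FCM from KM's hard assignments, the standard FCM membership update can be sketched as follows. This is the generic textbook formula, not the authors' pipeline; the data points and centroids are invented.

```python
import numpy as np

# Generic Fuzzy C-means membership update (textbook formula, not the authors'
# pipeline): each sample receives a probability-like membership in every
# cluster; the fuzzifier m > 1 controls softness (m -> 1 recovers hard
# K-means assignments). Data and centroids below are invented.
def fcm_memberships(X, centroids, m=2.0):
    """u[k, i] = 1 / sum_j (d_ki / d_kj) ** (2 / (m - 1))."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                   # guard against zero distance
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

X = np.array([[0.0, 0.0], [1.0, 0.1], [10.0, 10.0]])
C = np.array([[0.0, 0.0], [10.0, 10.0]])
U = fcm_memberships(X, C)    # rows sum to 1: soft labels instead of argmin
```

Each row of `U` is a distribution over clusters, which is exactly the kind of soft label the abstract uses as an MLP training target in place of deterministic assignments.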
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
Probabilistic and deterministic evaluation of uncertainty in a local scale multi-risk analysis
NASA Astrophysics Data System (ADS)
Lari, S.; Frattini, P.; Crosta, G. B.
2009-04-01
We performed a probabilistic multi-risk analysis (QPRA) at the local scale for a 420 km2 area surrounding the town of Brescia (Northern Italy). We calculated the expected annual loss in terms of economic damage and loss of life, for a set of risk scenarios of flood, earthquake and industrial accident with different occurrence probabilities and different intensities. The territorial unit used for the study was the census parcel, of variable area, for which a large amount of data was available. Due to the lack of information for the evaluation of the hazards, for the value of the exposed elements (e.g., residential and industrial areas, population, lifelines, sensitive elements such as schools and hospitals) and for the process-specific vulnerability, and due to a lack of knowledge of the processes (floods, industrial accidents, earthquakes), we assigned an uncertainty to the input variables of the analysis. For some variables a homogeneous uncertainty was assigned over the whole study area, as for instance for the number of buildings of various typologies and for the event occurrence probability. In other cases, as for phenomenon intensity (e.g., depth of water during a flood) and probability of impact, the uncertainty was defined in relation to the census parcel area. In fact, by assuming some variables to be homogeneously diffused or averaged over the census parcels, we introduce a larger error for larger parcels. We propagated the uncertainty through the analysis using three different models, describing the reliability of the output (risk) as a function of the uncertainty of the inputs (scenarios and vulnerability functions). We developed a probabilistic approach based on Monte Carlo simulation, and two deterministic models, namely First Order Second Moment (FOSM) and Point Estimate (PE). In general, similar values of expected losses are obtained with the three models. In all three cases the uncertainty of the final risk value is around 30% of the expected value.
Each of the models, nevertheless, requires different assumptions and computational efforts, and provides results with different levels of detail.
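The contrast between the probabilistic (Monte Carlo) and deterministic (FOSM) propagation styles can be sketched on a toy risk product. The means and standard deviations below are invented for illustration; they are not the study's data.

```python
import numpy as np

# Toy comparison (invented means and standard deviations, not the study's
# data) of the two propagation styles in the abstract: Monte Carlo sampling
# versus the First Order Second Moment (FOSM) approximation.
rng = np.random.default_rng(0)

def risk(p_event, vulnerability, exposed_value):
    """Expected annual loss for one scenario."""
    return p_event * vulnerability * exposed_value

means = np.array([0.01, 0.3, 1.0e6])    # occurrence prob., vulnerability, value
sds   = np.array([0.002, 0.06, 1.0e5])  # assumed input uncertainties

# Monte Carlo: sample the inputs and propagate the full distributions.
samples = rng.normal(means, sds, size=(100_000, 3))
mc = risk(*samples.T)

# FOSM: linearize R around the means; Var[R] ~= sum_i (dR/dx_i)^2 Var[x_i].
grad = np.array([means[1] * means[2],   # dR/dp
                 means[0] * means[2],   # dR/dv
                 means[0] * means[1]])  # dR/dV
fosm_mean = risk(*means)
fosm_sd = np.sqrt(np.sum((grad * sds) ** 2))
```

With these inputs the two approaches give nearly the same mean and a spread around 30% of the expected value, echoing the abstract's finding; FOSM misses only the small higher-order contribution that Monte Carlo captures.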
PREDICT: Privacy and Security Enhancing Dynamic Information Monitoring
2015-08-03
[12], consisting of global server-side probabilistic assignment by an untrusted server using cloaked locations, followed by feedback-loop guided local... These methods achieve high sensing coverage with low cost using cloaked locations [3]. In follow-on work, the issue of mobility is addressed.
A Technique for Developing Probabilistic Properties of Earth Materials
1988-04-01
Department of Civil Engineering. Responsibility for coordinating this program was assigned to Mr. A. E. Jackson, Jr., GD, under the supervision of Dr... [The remainder of this record is fragmented notation-list residue: E = expected value; F = ratio of the between-sample variance and the within-sample variance; true radial strain; axial strain; number of increments in the covariance analysis; loading and unloading Poisson's ratios; deformation assumed as a right circular cylinder.]
Probabilistic Surface Characterization for Safe Landing Hazard Detection and Avoidance (HDA)
NASA Technical Reports Server (NTRS)
Johnson, Andrew E. (Inventor); Ivanov, Tonislav I. (Inventor); Huertas, Andres (Inventor)
2015-01-01
Apparatuses, systems, computer programs and methods for performing hazard detection and avoidance for landing vehicles are provided. Hazard assessment takes into consideration the geometry of the lander. Safety probabilities are computed for a plurality of pixels in a digital elevation map. The safety probabilities are combined for pixels associated with one or more aim points and orientations. A worst case probability value is assigned to each of the one or more aim points and orientations.
Business Planning in the Light of Neuro-fuzzy and Predictive Forecasting
NASA Astrophysics Data System (ADS)
Chakrabarti, Prasun; Basu, Jayanta Kumar; Kim, Tai-Hoon
In this paper we have pointed out gain sensing using forecast-based techniques. We have cited an idea of neural-network-based gain forecasting. Testing of the sequence of gain patterns is also verified using statistical analysis of fuzzy value assignment. The paper also suggests realization of a stable gain condition using K-means clustering from data mining. A new concept of 3D-based gain sensing has been pointed out. The paper also discusses what type of trend analysis can be observed for probabilistic gain prediction.
Finding models to detect Alzheimer's disease by fusing structural and neuropsychological information
NASA Astrophysics Data System (ADS)
Giraldo, Diana L.; García-Arteaga, Juan D.; Velasco, Nelson; Romero, Eduardo
2015-12-01
Alzheimer's disease (AD) is a neurodegenerative disease that affects higher brain functions. Initial diagnosis of AD is based on the patient's clinical history and a battery of neuropsychological tests. The accuracy of the diagnosis is highly dependent on the examiner's skills and on the evolution of a variable clinical picture. This work presents an automatic strategy that learns probabilistic brain models for different stages of the disease, reducing the complexity, parameter adjustment and computational cost. The proposed method starts by setting up a probabilistic class description using the information stored in the neuropsychological tests, followed by constructing the different structural class models using membership values from the learned probabilistic functions. These models are then used as a reference frame for the classification problem: a new case is assigned to a particular class simply by projecting it onto the different models. The validation was performed using leave-one-out cross-validation with two classes: Normal Control (NC) subjects and patients diagnosed with mild AD. In this experiment a sensitivity of 80% and a specificity of 79% were achieved.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
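A minimal sketch of the purely probabilistic propagation of epistemic uncertainty through a fault tree: a toy two-event OR gate with invented uniform epistemic distributions, not the article's case study or its possibilistic machinery.

```python
import random

# Toy two-event fault tree (our own example, not the article's case study):
# the top event is an OR of independent basic events A and B, and epistemic
# uncertainty about the basic-event probabilities is represented by uniform
# distributions and propagated purely probabilistically by sampling.
def top_event_prob(p_a, p_b):
    """OR gate: P(top) = 1 - (1 - p_a)(1 - p_b) for independent events."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

random.seed(7)
tops = []
for _ in range(100_000):
    p_a = random.uniform(0.01, 0.03)    # epistemic interval for P(A)
    p_b = random.uniform(0.005, 0.015)  # epistemic interval for P(B)
    tops.append(top_event_prob(p_a, p_b))

mean_top = sum(tops) / len(tops)        # ~0.0298 analytically
lo, hi = min(tops), max(tops)           # spread induced by epistemic inputs
```

A possibilistic treatment of the same tree would instead carry the intervals through alpha-cuts, yielding a possibility distribution on the top-event probability rather than a single sampled distribution; the article compares exactly these kinds of representations.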
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
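The Markovian-arrival facts emphasized in the lecture can be illustrated numerically: exponential waiting times with mean 1/λ, and unit-interval counts whose mean and variance both equal λ. This is a generic simulation sketch, not part of the teaching resource itself.

```python
import random

# Generic simulation of Markovian (history-independent) arrivals at rate lam:
# waiting times are exponential with mean 1/lam, and the number of arrivals
# per unit time is Poisson with mean (and variance) lam.
lam = 3.0
rng = random.Random(1)

waits = [rng.expovariate(lam) for _ in range(100_000)]
mean_wait = sum(waits) / len(waits)          # approaches 1 / lam

def poisson_count(lam, rng):
    """Count arrivals in one unit of time by summing exponential waits."""
    t, n = rng.expovariate(lam), 0
    while t < 1.0:
        n += 1
        t += rng.expovariate(lam)
    return n

counts = [poisson_count(lam, rng) for _ in range(100_000)]
mean_count = sum(counts) / len(counts)       # approaches lam
var_count = sum((c - mean_count) ** 2 for c in counts) / len(counts)
```

The equality of the count mean and variance is the classic fingerprint of a Poisson process and a quick sanity check when deciding whether biomedical event data are plausibly Markovian.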
Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software
NASA Technical Reports Server (NTRS)
Hunter, George; Boisvert, Benjamin
2013-01-01
This document is the final report for the project entitled "Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software." This report consists of 17 sections which document the results of the several subtasks of this effort. The Probabilistic NAS Platform (PNP) is an air operations simulation platform developed and maintained by the Saab Sensis Corporation. The improvements made to the PNP simulation include the following: an airborne distributed separation assurance capability, a required time of arrival assignment and conformance capability, and a tactical and strategic weather avoidance capability.
NASA Technical Reports Server (NTRS)
Onwubiko, Chinyere; Onyebueke, Landon
1996-01-01
This program report is the final report covering all the work done on this project. The goal of this project is technology transfer of methodologies to improve design process. The specific objectives are: 1. To learn and understand the Probabilistic design analysis using NESSUS. 2. To assign Design Projects to either undergraduate or graduate students on the application of NESSUS. 3. To integrate the application of NESSUS into some selected senior level courses in Civil and Mechanical Engineering curricula. 4. To develop courseware in Probabilistic Design methodology to be included in a graduate level Design Methodology course. 5. To study the relationship between the Probabilistic design methodology and Axiomatic design methodology.
A probabilistic seismic model for the European Arctic
NASA Astrophysics Data System (ADS)
Hauser, Juerg; Dyer, Kathleen M.; Pasyanos, Michael E.; Bungum, Hilmar; Faleide, Jan I.; Clark, Stephen A.; Schweitzer, Johannes
2011-01-01
The development of three-dimensional seismic models for the crust and upper mantle has traditionally focused on finding one model that provides the best fit to the data while observing some regularization constraints. In contrast to this, the inversion employed here fits the data in a probabilistic sense and thus provides a quantitative measure of model uncertainty. Our probabilistic model is based on two sources of information: (1) prior information, which is independent from the data, and (2) different geophysical data sets, including thickness constraints, velocity profiles, gravity data, surface wave group velocities, and regional body wave traveltimes. We use a Markov chain Monte Carlo (MCMC) algorithm to sample models from the prior distribution, the set of plausible models, and test them against the data to generate the posterior distribution, the ensemble of models that fit the data with assigned uncertainties. While being computationally more expensive, such a probabilistic inversion provides a more complete picture of solution space and allows us to combine various data sets. The complex geology of the European Arctic, encompassing oceanic crust, continental shelf regions, rift basins and old cratonic crust, as well as the nonuniform coverage of the region by data with varying degrees of uncertainty, makes it a challenging setting for any imaging technique and, therefore, an ideal environment for demonstrating the practical advantages of a probabilistic approach. Maps of depth to basement and depth to Moho derived from the posterior distribution are in good agreement with previously published maps and interpretations of the regional tectonic setting. The predicted uncertainties, which are as important as the absolute values, correlate well with the variations in data coverage and quality in the region. A practical advantage of our probabilistic model is that it can provide estimates for the uncertainties of observables due to model uncertainties. 
We will demonstrate how this can be used for the formulation of earthquake location algorithms that take model uncertainties into account when estimating location uncertainties.
Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.
2014-04-05
In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without the implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods in improving the technical basis of HRA.
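A minimal sketch of the kind of conjugate Bayesian update involved: a generic Beta-binomial scheme with invented numbers. The article's actual methodology and the SPAR-H values may differ.

```python
# Hedged sketch of a conjugate Bayesian update for a human error probability
# (generic Beta-binomial scheme with invented numbers; the article's actual
# methodology and SPAR-H values may differ).
def update_hep(prior_hep, prior_strength, errors, trials):
    """Beta prior with mean prior_hep and pseudo-count prior_strength,
    updated with binomially distributed simulator error counts."""
    a = prior_hep * prior_strength
    b = (1.0 - prior_hep) * prior_strength
    a_post, b_post = a + errors, b + trials - errors
    return a_post / (a_post + b_post)       # posterior mean HEP

# A prior HEP of 0.01 with 2 errors observed in 50 simulator trials.
print(update_hep(0.01, prior_strength=100.0, errors=2, trials=50))  # prints 0.02
```

The `prior_strength` pseudo-count controls how much sparse simulator evidence moves the existing HRA assignment, which is the practical knob when data are as limited as the abstract describes.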
Stimulus discriminability may bias value-based probabilistic learning.
Schutte, Iris; Slagter, Heleen A; Collins, Anne G E; Frank, Michael J; Kenemans, J Leon
2017-01-01
Reinforcement learning tasks are often used to assess participants' tendency to learn more from the positive or more from the negative consequences of their actions. However, this assessment often requires comparison of learning performance across different task conditions, which may differ in the relative salience or discriminability of the stimuli associated with more and less rewarding outcomes, respectively. To address this issue, in a first set of studies, participants were subjected to two versions of a common probabilistic learning task. The two versions differed with respect to the stimulus (Hiragana) characters associated with reward probability. The assignment of character to reward probability was fixed within version but reversed between versions. We found that performance was highly influenced by task version, which could be explained by the relative perceptual discriminability of characters assigned to high or low reward probabilities, as assessed by a separate discrimination experiment. Participants were more reliable in selecting rewarding characters that were more discriminable, leading to differences in learning curves and their sensitivity to reward probability. This difference in experienced reinforcement history was accompanied by performance biases in a test phase assessing the ability to learn from positive vs. negative outcomes. In a subsequent large-scale web-based experiment, this impact of task version on learning and test measures was replicated and extended. Collectively, these findings imply a key role for perceptual factors in guiding reward learning and underscore the need to control stimulus discriminability when making inferences about individual differences in reinforcement learning.
Probabilistic resource allocation system with self-adaptive capability
NASA Technical Reports Server (NTRS)
Yufik, Yan M. (Inventor)
1996-01-01
A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and directed links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Reliability values are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves in time due to modification of network parameters and partitioning criteria based on the performance feedback.
Processing of probabilistic information in weight perception and motor prediction.
Trampenau, Leif; van Eimeren, Thilo; Kuhtz-Buschbeck, Johann
2017-02-01
We studied the effects of probabilistic cues, i.e., of information of limited certainty, in the context of an action task (GL: grip-lift) and of a perceptual task (WP: weight perception). Normal subjects (n = 22) saw four different probabilistic visual cues, each of which announced the likely weight of an object. In the GL task, the object was grasped and lifted with a pinch grip, and the peak force rates indicated that the grip and load forces were scaled predictively according to the probabilistic information. The WP task provided the expected heaviness related to each probabilistic cue; the participants gradually adjusted the object's weight until its heaviness matched the expected weight for a given cue. Subjects were randomly assigned to two groups: one started with the GL task and the other with the WP task. The four different probabilistic cues influenced weight adjustments in the WP task and peak force rates in the GL task in a similar manner. The interpretation and utilization of the probabilistic information was critically influenced by the initial task. Participants who started with the WP task classified the four probabilistic cues into four distinct categories and applied these categories to the subsequent GL task. On the other hand, participants who started with the GL task applied three distinct categories to the four cues and retained this classification in the following WP task. The initial strategy, once established, determined how the probabilistic information was interpreted and implemented.
NASA Astrophysics Data System (ADS)
Lowe, R.; Ballester, J.; Robine, J.; Herrmann, F. R.; Jupp, T. E.; Stephenson, D.; Rodó, X.
2013-12-01
Users of climate information often require probabilistic information on which to base their decisions. However, communicating information contained within a probabilistic forecast presents a challenge. In this paper we demonstrate a novel visualisation technique to display ternary probabilistic forecasts on a map in order to inform decision making. In this method, ternary probabilistic forecasts, which assign probabilities to a set of three outcomes (e.g. low, medium, and high risk), are considered as a point in a triangle of barycentric coordinates. This allows a unique colour to be assigned to each forecast from a continuum of colours defined on the triangle. Colour saturation increases with information gain relative to the reference forecast (i.e. the long-term average). This provides additional information to decision makers compared with conventional methods used in seasonal climate forecasting, where one colour is used to represent one forecast category on a forecast map (e.g. red = 'dry'). We use the tool to present climate-related mortality projections across Europe. Temperature and humidity are related to human mortality via location-specific transfer functions, calculated using historical data. Daily mortality data at the NUTS2 level for 16 countries in Europe were obtained for 1998-2005. Transfer functions were calculated for 54 aggregations in Europe, defined using criteria related to population and climatological similarities. Aggregations are restricted to fall within political boundaries to avoid problems related to varying adaptation policies between countries. A statistical model is fit to the cold and warm tails to estimate future mortality using forecast temperatures, in a Bayesian probabilistic framework. Using predefined categories of temperature-related mortality risk, we present maps of probabilistic projections for human mortality at seasonal to decadal time scales.
We demonstrate the information gained from using this technique compared to more traditional methods to display ternary probabilistic forecasts. This technique allows decision makers to identify areas where the model predicts with certainty area-specific heat waves or cold snaps, in order to effectively target resources to those areas most at risk, for a given season or year. It is hoped that this visualisation tool will facilitate the interpretation of the probabilistic forecasts not only for public health decision makers but also within a multi-sectoral climate service framework.
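The barycentric colour idea can be sketched directly. The corner colours, the test probabilities, and the use of KL divergence as the information-gain measure are illustrative choices here, not the paper's exact palette or definition.

```python
import math

# Illustrative barycentric colour mixing for a ternary forecast (corner
# colours and the reference are our own choices, not the paper's palette):
# each (p_low, p_med, p_high) point gets a unique colour, and information
# gain over climatology can drive the saturation.
CORNERS = {"low": (0, 0, 255), "medium": (128, 128, 128), "high": (255, 0, 0)}

def forecast_colour(p):
    """Barycentric mix of the corner colours; p maps category -> probability."""
    mixed = [sum(p[k] * CORNERS[k][i] for k in CORNERS) for i in range(3)]
    return tuple(round(c) for c in mixed)

def information_gain(p, ref=(1 / 3, 1 / 3, 1 / 3)):
    """Kullback-Leibler divergence from the climatological reference, in bits."""
    return sum(pi * math.log2(pi / ri) for pi, ri in zip(p, ref) if pi > 0)

colour = forecast_colour({"low": 0.1, "medium": 0.2, "high": 0.7})
```

A confident "high risk" forecast mixes towards the red corner, while the climatological forecast (1/3, 1/3, 1/3) has zero information gain and would be shown fully desaturated.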
Fast algorithm for probabilistic bone edge detection (FAPBED)
NASA Astrophysics Data System (ADS)
Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.
2005-04-01
The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean value of 0.38, indicating clear identification of surface points on average. The segmentation was also sufficiently crisp, with a full width at half maximum (FWHM) value of 1.51 voxels.
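The abstract above compounds several per-voxel feature probabilities into one bone-surface probability map. A minimal sketch of one plausible fusion rule follows; the noisy-OR combination and the example feature probabilities are illustrative assumptions of this sketch, not the paper's actual formulation.

```python
# Sketch: fusing per-feature bone-surface probabilities at one voxel with a
# noisy-OR rule (an assumed combination rule, not the paper's formulation).

def fuse_noisy_or(feature_probs):
    """Combine independent per-feature surface probabilities for one voxel."""
    p_not_surface = 1.0
    for p in feature_probs:
        p_not_surface *= (1.0 - p)
    return 1.0 - p_not_surface

# Example: three features (e.g., intensity gradient, Hounsfield threshold,
# local curvature) each weakly suggest a surface at this voxel.
p = fuse_noisy_or([0.6, 0.5, 0.3])   # 1 - 0.4*0.5*0.7 = 0.86
```

Applying the rule per voxel over a CT volume would yield a probability map of the kind described, with near-zero values where no feature responds.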
Impact of refining the assessment of dietary exposure to cadmium in the European adult population.
Ferrari, Pietro; Arcella, Davide; Heraud, Fanny; Cappé, Stefano; Fabiansson, Stefan
2013-01-01
Exposure assessment constitutes an important step in any risk assessment of potentially harmful substances present in food. The European Food Safety Authority (EFSA) first assessed dietary exposure to cadmium in Europe using a deterministic framework, resulting in mean values of exposure in the range of health-based guidance values. Since then, the characterisation of foods has been refined to better match occurrence and consumption data, and a new strategy to handle left-censoring in occurrence data was devised. A probabilistic assessment was performed and compared with deterministic estimates, using occurrence values at the European level and consumption data from 14 national dietary surveys. Mean estimates in the probabilistic assessment ranged from 1.38 (95% CI = 1.35-1.44) to 2.08 (1.99-2.23) µg kg⁻¹ bodyweight (bw) week⁻¹ across the different surveys, which were less than 10% lower than deterministic (middle bound) mean values that ranged from 1.50 to 2.20 µg kg⁻¹ bw week⁻¹. Probabilistic 95th percentile estimates of dietary exposure ranged from 2.65 (2.57-2.72) to 4.99 (4.62-5.38) µg kg⁻¹ bw week⁻¹, which were, with the exception of one survey, between 3% and 17% higher than middle-bound deterministic estimates. Overall, the proportion of subjects exceeding the tolerable weekly intake of 2.5 µg kg⁻¹ bw ranged from 14.8% (13.6-16.0%) to 31.2% (29.7-32.5%) according to the probabilistic assessment. The results of this work indicate that mean values of dietary exposure to cadmium in the European population were of similar magnitude using deterministic or probabilistic assessments. For higher exposure levels, probabilistic estimates were almost consistently larger than deterministic counterparts, thus reflecting the impact of using the full distribution of occurrence values to determine exposure levels. It is considered prudent to use probabilistic methodology when exposure estimates are close to or exceed health-based guidance values.
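The probabilistic assessment described above propagates the full distribution of occurrence values into exposure estimates rather than a single point value. A minimal Monte Carlo sketch follows; the food categories, uniform occurrence intervals and 70 kg body weight are invented for illustration and are not EFSA's occurrence or consumption data.

```python
import random

# Monte Carlo sketch of probabilistic weekly dietary exposure (illustrative
# numbers only, not EFSA data).

def weekly_exposure_samples(foods, bodyweight_kg, n=10000, seed=0):
    """foods: list of (weekly consumption in kg, (lo, hi) cadmium occurrence
    range in µg per kg food). Returns samples in µg per kg bw per week."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        total_ug = sum(cons * rng.uniform(lo, hi) for cons, (lo, hi) in foods)
        samples.append(total_ug / bodyweight_kg)
    return samples

foods = [(1.5, (10.0, 40.0)),    # cereal products
         (0.5, (5.0, 30.0)),     # vegetables
         (0.1, (20.0, 100.0))]   # high-occurrence specialty items
s = sorted(weekly_exposure_samples(foods, bodyweight_kg=70.0))
mean = sum(s) / len(s)
p95 = s[int(0.95 * len(s))]      # probabilistic 95th percentile estimate
```

The upper percentiles of the sampled distribution are where probabilistic and deterministic (middle-bound) estimates diverge most, matching the pattern the abstract reports.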
Streaming fragment assignment for real-time analysis of sequencing experiments
Roberts, Adam; Pachter, Lior
2013-01-01
We present eXpress, a software package for highly efficient probabilistic assignment of ambiguously mapping sequenced fragments. eXpress uses a streaming algorithm with linear run time and constant memory use. It can determine abundances of sequenced molecules in real time, and can be applied to ChIP-seq, metagenomics and other large-scale sequencing data. We demonstrate its use on RNA-seq data, showing greater efficiency than other quantification methods. PMID:23160280
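A streaming algorithm with constant memory, as eXpress uses, can be sketched as an online EM update in which each ambiguously mapping fragment distributes responsibility over its candidate targets in proportion to current abundance estimates. The fixed learning rate and update rule below are simplifying assumptions of this sketch, not the eXpress implementation.

```python
# Online-EM sketch of streaming fragment assignment: one pass, constant memory.
# The fixed learning rate `lr` is an illustrative simplification.

def streaming_abundances(fragments, n_targets, lr=0.05):
    """fragments: iterable of candidate-target index lists, one per fragment."""
    abund = [1.0 / n_targets] * n_targets
    for cands in fragments:
        weights = [abund[t] for t in cands]
        z = sum(weights)
        if z == 0.0:
            continue
        # E-step: responsibilities proportional to current abundances.
        resp = {t: w / z for t, w in zip(cands, weights)}
        # M-step (stochastic): nudge abundances toward this fragment's responsibilities.
        abund = [(1.0 - lr) * a + lr * resp.get(t, 0.0)
                 for t, a in enumerate(abund)]
    return abund

fragments = [[0]] * 50 + [[0, 1]] * 10   # 50 unique hits, then 10 ambiguous ones
abund = streaming_abundances(fragments, n_targets=3)
```

Because each update mixes the old abundance vector with a responsibility vector that sums to one, the estimates remain a probability distribution throughout the stream.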
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes the uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment
Karimzadehgan, Maryam; Zhai, ChengXiang
2011-01-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
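The committee review assignment objective, matching multiple topic aspects of a document under a reviewer quota, can be illustrated on a toy instance. The sketch below replaces the paper's integer linear program with exhaustive search over candidate teams, which finds the same optimum for tiny inputs; the reviewer names, aspect weights and expertise scores are invented.

```python
from itertools import combinations

# Toy illustration of multi-aspect committee assignment for one document:
# choose k reviewers maximizing weighted coverage of the document's aspects.
# Exhaustive search stands in for the paper's integer linear program.

def best_committee(doc_aspects, reviewer_aspects, k):
    best_team, best_score = None, float("-inf")
    for team in combinations(reviewer_aspects, k):
        # Each aspect is covered by the strongest expert on the team.
        score = sum(weight * max(reviewer_aspects[r].get(aspect, 0.0) for r in team)
                    for aspect, weight in doc_aspects.items())
        if score > best_score:
            best_team, best_score = team, score
    return best_team, best_score

doc = {"IR": 0.6, "ML": 0.4}
reviewers = {"alice": {"IR": 0.9}, "bob": {"ML": 0.8}, "carol": {"IR": 0.4, "ML": 0.3}}
team, score = best_committee(doc, reviewers, k=2)   # ("alice", "bob"), score 0.86
```

An ILP formulation expresses the same max-coverage objective with binary assignment variables and quota constraints, which is what makes the approach scale beyond toy instances.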
Probabilistic peak detection in CE-LIF for STR DNA typing.
Woldegebriel, Michael; van Asten, Arian; Kloosterman, Ate; Vivó-Truyols, Gabriel
2017-07-01
In this work, we present a novel probabilistic peak detection algorithm based on a Bayesian framework for forensic DNA analysis. The proposed method aims at an exhaustive use of raw electropherogram data from a laser-induced fluorescence multi-CE system. As the raw data are informative down to the level of a single data point, the conventional threshold-based approaches discard relevant forensic information early in the data analysis pipeline. Our proposed method assigns a posterior probability reflecting each data point's relevance with respect to the peak detection criteria. Low-intensity peaks generated from a truly existing allele can thus retain evidential value instead of being fully discarded and treated as a potential allele drop-out. This way of working utilizes the information available within each individual data point and thus avoids making early (binary) decisions in the data analysis that can lead to error propagation. The proposed method was tested and compared to the application of a set threshold, as is current practice in forensic STR DNA profiling. The new method was found to yield a significant improvement in the number of alleles identified, regardless of the peak heights and deviation from Gaussian shape. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
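The core idea above, assigning each data point a posterior probability of belonging to a peak rather than baseline, can be sketched with two Gaussian hypotheses and Bayes' rule. All distribution parameters below are illustrative placeholders, not the fitted values from the paper.

```python
import math

# Sketch of a per-data-point posterior P(peak | y) under two Gaussian
# hypotheses and a prior peak probability (illustrative parameters only).

def posterior_peak(y, mu_noise=0.0, sigma_noise=1.0,
                   mu_peak=5.0, sigma_peak=2.0, prior_peak=0.1):
    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    num = gauss(y, mu_peak, sigma_peak) * prior_peak
    den = num + gauss(y, mu_noise, sigma_noise) * (1.0 - prior_peak)
    return num / den
```

A low-intensity point yields an intermediate posterior instead of a hard zero, which is exactly how the method retains evidential value that a fixed threshold would discard.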
Probabilistic modeling of children's handwriting
NASA Astrophysics Data System (ADS)
Puri, Mukta; Srihari, Sargur N.; Hanson, Lisa
2013-12-01
Little work has been done on the analysis of children's handwriting, which can be useful in developing automatic evaluation systems and in quantifying handwriting individuality. We consider the statistical analysis of children's handwriting in early grades. Samples of handwriting of children in Grades 2-4 who were taught the Zaner-Bloser style were considered. The commonly occurring word "and", written in cursive style as well as hand-print, was extracted from extended writing. The samples were assigned feature values by human examiners using a truthing tool. The human examiners looked at how the children constructed letter formations in their writing, looking for similarities and differences from the instructions taught in the handwriting copy book. These similarities and differences were measured using a feature space distance measure. Results indicate that the handwriting develops towards more conformity with the class characteristics of the Zaner-Bloser copybook which, with practice, is the expected result. Bayesian networks were learnt from the data to enable answering various probabilistic queries, such as determining students who may continue to produce letter formations as taught during lessons in school, determining students who will develop different letter formations or variations of those formations, and the number of different types of letter formations.
NASA Astrophysics Data System (ADS)
Klügel, J.
2006-12-01
Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications besides the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis to be performed. A new method for probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information necessary to perform any meaningful seismic hazard analysis. The method follows the probabilistic risk analysis approach common for applications in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantage of the method in comparison with traditional PSHA consists in (1) its flexibility, allowing different probabilistic models for earthquake occurrence to be used and advanced physical models to be incorporated into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion to formulate different risk goals. The method was applied to the evaluation of the risk of production-interruption losses of a nuclear power plant during its residual lifetime.
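The risk aggregation implied by steps (2) and (4), per-class initiating-event frequencies combined with conditional exceedance probabilities, can be sketched as follows. The Poisson lifetime conversion and the example numbers are illustrative assumptions of this sketch, not values from the analysis.

```python
import math

# Sketch: aggregate per-class event frequencies (step 2) with conditional
# exceedance probabilities (step 4) into annual and lifetime risk figures.

def annual_risk(classes):
    """classes: list of (annual frequency of events in the size class,
    conditional probability that a critical design parameter is exceeded)."""
    return sum(freq * p_exceed for freq, p_exceed in classes)

def lifetime_risk(classes, years):
    """P(at least one exceedance over the structure's residual lifetime),
    assuming a Poisson process of exceedances (a simplifying assumption)."""
    return 1.0 - math.exp(-annual_risk(classes) * years)
```

For example, two size classes with frequencies 1e-3 and 1e-4 per year and exceedance probabilities 0.1 and 0.5 give an annual risk of 1.5e-4; over a 40-year residual lifetime the exceedance probability stays just below the naive product of rate and time.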
Li, Zhixi; Peck, Kyung K.; Brennan, Nicole P.; Jenabi, Mehrnaz; Hsu, Meier; Zhang, Zhigang; Holodny, Andrei I.; Young, Robert J.
2014-01-01
Purpose The purpose of this study was to compare the deterministic and probabilistic tracking methods of diffusion tensor white matter fiber tractography in patients with brain tumors. Materials and Methods We identified 29 patients with left brain tumors <2 cm from the arcuate fasciculus who underwent pre-operative language fMRI and DTI. The arcuate fasciculus was reconstructed using a deterministic Fiber Assignment by Continuous Tracking (FACT) algorithm and a probabilistic method based on an extended Monte Carlo Random Walk algorithm. Tracking was controlled using two ROIs corresponding to Broca’s and Wernicke’s areas. Tracts in tumor-affected hemispheres were examined for extension between Broca’s and Wernicke’s areas, anterior-posterior length and volume, and compared with the normal contralateral tracts. Results Probabilistic tracts displayed more complete anterior extension to Broca’s area than did FACT tracts on the tumor-affected and normal sides (p < 0.0001). The median length ratio for tumor:normal sides was greater for probabilistic tracts than FACT tracts (p < 0.0001). The median tract volume ratio for tumor:normal sides was also greater for probabilistic tracts than FACT tracts (p = 0.01). Conclusion Probabilistic tractography reconstructs the arcuate fasciculus more completely and performs better through areas of tumor and/or edema. The FACT algorithm tends to underestimate the anterior-most fibers of the arcuate fasciculus, which are crossed by primary motor fibers. PMID:25328583
Patel, Ameera X; Bullmore, Edward T
2016-11-15
Connectome mapping using techniques such as functional magnetic resonance imaging (fMRI) has become a focus of systems neuroscience. There remain many statistical challenges in analysis of functional connectivity and network architecture from BOLD fMRI multivariate time series. One key statistic for any time series is its (effective) degrees of freedom, df, which will generally be less than the number of time points (or nominal degrees of freedom, N). If we know the df, then probabilistic inference on other fMRI statistics, such as the correlation between two voxel or regional time series, is feasible. However, we currently lack good estimators of df in fMRI time series, especially after the degrees of freedom of the "raw" data have been modified substantially by denoising algorithms for head movement. Here, we used a wavelet-based method both to denoise fMRI data and to estimate the (effective) df of the denoised process. We show that seed voxel correlations corrected for locally variable df could be tested for false positive connectivity with better control over Type I error and greater specificity of anatomical mapping than probabilistic connectivity maps using the nominal degrees of freedom. We also show that wavelet despiked statistics can be used to estimate all pairwise correlations between a set of regional nodes, assign a P value to each edge, and then iteratively add edges to the graph in order of increasing P. These probabilistically thresholded graphs are likely more robust to regional variation in head movement effects than comparable graphs constructed by thresholding correlations. Finally, we show that time-windowed estimates of df can be used for probabilistic connectivity testing or dynamic network analysis so that apparent changes in the functional connectome are appropriately corrected for the effects of transient noise bursts. Wavelet despiking is both an algorithm for fMRI time series denoising and an estimator of the (effective) df of denoised fMRI time series. Accurate estimation of df offers many potential advantages for probabilistically thresholding functional connectivity and network statistics tested in the context of spatially variant and non-stationary noise. Code for wavelet despiking, seed correlational testing and probabilistic graph construction is freely available to download as part of the BrainWavelet Toolbox at www.brainwavelet.org. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
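The seed-correlation testing described above hinges on using the effective rather than nominal degrees of freedom. A minimal sketch using Fisher's z-transform follows; the normal approximation is an assumption of this sketch, not necessarily the toolbox's exact test.

```python
import math

# Sketch: two-sided p-value for a correlation, using the effective
# (wavelet-estimated) df instead of the nominal number of time points.
# Fisher z with a normal approximation is assumed here.

def corr_p_value(r, df_eff):
    """Two-sided p-value for correlation r given effective degrees of freedom."""
    z = math.atanh(r) * math.sqrt(df_eff - 3)
    return math.erfc(abs(z) / math.sqrt(2.0))
```

With r = 0.3, dropping from 200 nominal to 50 effective df inflates the p-value substantially, which is why correcting for locally variable df gives better control over false positive connectivity.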
Dopamine neurons learn relative chosen value from probabilistic rewards
Lak, Armin; Stauffer, William R; Schultz, Wolfram
2016-01-01
Economic theories posit reward probability as one of the factors defining reward value. Individuals learn the value of cues that predict probabilistic rewards from experienced reward frequencies. Building on the notion that responses of dopamine neurons increase with reward probability and expected value, we asked how dopamine neurons in monkeys acquire this value signal that may represent an economic decision variable. We found in a Pavlovian learning task that reward probability-dependent value signals arose from experienced reward frequencies. We then assessed neuronal response acquisition during choices among probabilistic rewards. Here, dopamine responses became sensitive to the value of both chosen and unchosen options. Both experiments also showed novelty responses of dopamine neurons that decreased as learning advanced. These results show that dopamine neurons acquire predictive value signals from the frequency of experienced rewards. This flexible and fast signal reflects a specific decision variable and could update neuronal decision mechanisms. DOI: http://dx.doi.org/10.7554/eLife.18044.001 PMID:27787196
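The acquisition of value from experienced reward frequencies can be sketched with a standard prediction-error (Rescorla-Wagner style) update. This is a textbook simplification offered for orientation, not the study's model of dopamine responses.

```python
# Textbook prediction-error sketch: a value estimate tracks experienced
# reward frequency (not the study's actual model of dopamine signaling).

def learn_value(rewards, alpha=0.1):
    v = 0.0
    for r in rewards:
        delta = r - v        # prediction error
        v += alpha * delta   # value update
    return v

# A cue rewarded on 3 of every 4 trials: value settles near p(reward) = 0.75.
v = learn_value([1.0, 1.0, 1.0, 0.0] * 200)
```

The same prediction-error term is what dopamine responses are commonly taken to report, which is why value estimates come to scale with reward probability.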
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...
NASA Astrophysics Data System (ADS)
Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun
2018-03-01
Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle which are affected by a high degree of uncertainty especially at a regional scale, resulting in unacceptable uncertainties of Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations which are integrated in a unique parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. Such testing results indicate that the new model can be operated in a highly efficient way and show more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can be potentially packaged into a forecasting system for shallow landslides providing technological support for the mitigation of these disasters at regional scale.
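The Monte Carlo linkage between soil-parameter uncertainty and landslide probability described above can be sketched with a simplified dry infinite-slope stability model. The parameter ranges and the omission of rainfall-driven pore pressure are assumptions of this sketch, not the paper's full model.

```python
import math
import random

# Monte Carlo sketch for one pixel: P(Fs < 1) under uncertain cohesion c and
# friction angle phi, using a simplified dry infinite-slope model.

def landslide_probability(c_range, phi_range, slope_deg, gamma=19.0,
                          depth=2.0, n=5000, seed=1):
    """c in kPa, phi in degrees, gamma = soil unit weight (kN/m^3), depth in m."""
    rng = random.Random(seed)
    beta = math.radians(slope_deg)
    fails = 0
    for _ in range(n):
        c = rng.uniform(*c_range)
        phi = math.radians(rng.uniform(*phi_range))
        resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
        driving = gamma * depth * math.sin(beta) * math.cos(beta)
        if resisting / driving < 1.0:   # Fs < 1 counts as a simulated failure
            fails += 1
    return fails / n

p_fail = landslide_probability((5.0, 15.0), (25.0, 35.0), slope_deg=35.0)
```

The fraction of simulations with Fs < 1 is the single parameter linking parameter uncertainty to landslide probability; gentle slopes yield values near zero while steep slopes can fail in a sizable share of draws.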
Prioritizing Project Risks Using AHP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thibadeau, Barbara M
2007-01-01
This essay introduces the Analytic Hierarchy Process (AHP) as a method by which to rank project risks, in terms of importance as well as likelihood. AHP is a way to handle quantifiable and/or intangible criteria in the decision making process. It is a multi-objective multi-criteria decision-making approach that is based on the idea of pair-wise comparisons of alternatives with respect to a given criterion (e.g., which alternative, A or B, is preferred and by how much more is it preferred) or with respect to an objective (e.g., which is more important, A or B, and by how much more is it important). This approach was pioneered by Thomas Saaty in the late 1970s. It has been suggested that a successful project is one that successfully manages risk and that project management is the management of uncertainty. Risk management relies on the quantification of uncertainty which, in turn, is predicated upon the accuracy of probabilistic approaches (in terms of likelihood as well as magnitude). In many cases, the appropriate probability distribution (or probability value) is unknown. Researchers have shown that probability estimates are not made very accurately, that the use of verbal expressions is not a suitable alternative, that there is great variability in the use and interpretation of these values and that there is a great reluctance to assign them in the first place. Data from an ongoing project are used to show that AHP can be used to obtain these values, thus overcoming some of the problems associated with the direct assignment of discrete probability values. A novel method by which to calculate the consistency of the data is introduced. The AHP approach is easily implemented and, typically, offers results that are consistent with the decision maker's intuition.
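AHP's numeric core can be sketched by extracting priority weights as the principal eigenvector of the pairwise-comparison matrix and computing Saaty's classical consistency index. The 3x3 example matrix is invented, and the essay's novel consistency method is not reproduced here.

```python
# Sketch of AHP: priority weights via power iteration on a pairwise-comparison
# matrix, plus Saaty's consistency index CI = (lambda_max - n) / (n - 1).

def ahp_weights(M, iters=100):
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_next = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_next)
        w = [x / total for x in w_next]
    # Estimate lambda_max from the converged eigenvector.
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return w, ci

# Reciprocal judgments: risk A moderately outranks B, strongly outranks C.
M = [[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]
weights, ci = ahp_weights(M)
```

The resulting weights give the relative importance ranking of the risks, and a small CI (conventionally CR < 0.1 after dividing by a random index) indicates the pairwise judgments are acceptably consistent.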
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.
Global/local methods for probabilistic structural analysis
NASA Astrophysics Data System (ADS)
Millwater, H. R.; Wu, Y.-T.
1993-04-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.
Social and monetary reward learning engage overlapping neural substrates.
Lin, Alice; Adolphs, Ralph; Rangel, Antonio
2012-03-01
Learning to make choices that yield rewarding outcomes requires the computation of three distinct signals: stimulus values that are used to guide choices at the time of decision making, experienced utility signals that are used to evaluate the outcomes of those decisions and prediction errors that are used to update the values assigned to stimuli during reward learning. Here we investigated whether monetary and social rewards involve overlapping neural substrates during these computations. Subjects engaged in two probabilistic reward learning tasks that were identical except that rewards were either social (pictures of smiling or angry people) or monetary (gaining or losing money). We found substantial overlap between the two types of rewards for all components of the learning process: a common area of ventromedial prefrontal cortex (vmPFC) correlated with stimulus value at the time of choice and another common area of vmPFC correlated with reward magnitude and common areas in the striatum correlated with prediction errors. Taken together, the findings support the hypothesis that shared anatomical substrates are involved in the computation of both monetary and social rewards. © The Author (2011). Published by Oxford University Press.
Elucidating the nutritional dynamics of fungi using stable isotopes.
Mayor, Jordan R; Schuur, Edward A G; Henkel, Terry W
2009-02-01
Mycorrhizal and saprotrophic (SAP) fungi are essential to terrestrial element cycling due to their uptake of mineral nutrients and decomposition of detritus. Linking these ecological roles to specific fungi is necessary to improve our understanding of global nutrient cycling, fungal ecophysiology, and forest ecology. Using discriminant analyses of nitrogen (delta(15)N) and carbon (delta(13)C) isotope values from 813 fungi across 23 sites, we verified collector-based categorizations as either ectomycorrhizal (ECM) or SAP in > 91% of the fungi, and provided probabilistic assignments for an additional 27 fungi of unknown ecological role. As sites ranged from boreal tundra to tropical rainforest, we were able to show that fungal delta(13)C (26 sites) and delta(15)N (32 sites) values could be predicted by climate or latitude as previously shown in plant and soil analyses. Fungal delta(13)C values are likely reflecting differences in C-source between ECM and SAP fungi, whereas (15)N enrichment of ECM fungi relative to SAP fungi suggests that ECM fungi are consistently delivering (15)N depleted N to host trees across a range of ecosystem types.
Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome
NASA Astrophysics Data System (ADS)
Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah
2017-06-01
Measuring students' ability and performance are important in assessing how well students have learned and mastered the statistical courses. Any improvement in learning will depend on the student's approaches to learning, which are relevant to several factors of learning, namely assessment methods consisting of quizzes, tests, assignments and a final examination. This study has attempted an alternative approach to measure students' ability in an undergraduate statistical course based on the Rasch probabilistic model. Firstly, this study aims to explore the learning outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class, but eventually they had a good understanding at the end of the 14-week class. In terms of students' performance in their final examination, their ability in understanding the topics varies at different probability values given the ability of the students and the difficulty of the questions. The majority found the probability and counting rules topic to be the most difficult to learn.
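The Rasch probabilistic outcome underlying the analysis is the logistic item response model, in which the probability of a correct response depends only on the difference between person ability and item difficulty:

```python
import math

# The Rasch model: P(correct) is a logistic function of the gap between
# person ability and item difficulty, both on the same logit scale.

def rasch_probability(ability, difficulty):
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))
```

A student whose ability equals an item's difficulty has a 50% success probability; harder topics (higher difficulty on the logit scale) lower that probability, which is how the analysis above separates student ability from question difficulty.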
Spectrum-to-Spectrum Searching Using a Proteome-wide Spectral Library*
Yen, Chia-Yu; Houel, Stephane; Ahn, Natalie G.; Old, William M.
2011-01-01
The unambiguous assignment of tandem mass spectra (MS/MS) to peptide sequences remains a key unsolved problem in proteomics. Spectral library search strategies have emerged as a promising alternative for peptide identification, in which MS/MS spectra are directly compared against a reference library of confidently assigned spectra. Two problems relate to library size. First, reference spectral libraries are limited to rediscovery of previously identified peptides and are not applicable to new peptides, because of their incomplete coverage of the human proteome. Second, problems arise when searching a spectral library the size of the entire human proteome. We observed that traditional dot product scoring methods do not scale well with spectral library size, showing reduction in sensitivity when library size is increased. We show that this problem can be addressed by optimizing scoring metrics for spectrum-to-spectrum searches with large spectral libraries. MS/MS spectra for the 1.3 million predicted tryptic peptides in the human proteome are simulated using a kinetic fragmentation model (MassAnalyzer version 2.1) to create a proteome-wide simulated spectral library. Searches of the simulated library increase MS/MS assignments by 24% compared with Mascot, when using probabilistic and rank based scoring methods. The proteome-wide coverage of the simulated library leads to 11% increase in unique peptide assignments, compared with parallel searches of a reference spectral library. Further improvement is attained when reference spectra and simulated spectra are combined into a hybrid spectral library, yielding 52% increased MS/MS assignments compared with Mascot searches. Our study demonstrates the advantages of using probabilistic and rank based scores to improve performance of spectrum-to-spectrum search strategies. PMID:21532008
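The traditional dot product score that the authors found scales poorly with library size is, in its basic normalized form, straightforward to state. The sketch below shows that baseline on sparse spectra; it is not the optimized probabilistic or rank-based score the paper proposes.

```python
import math

# Baseline normalized dot product between two sparse spectra, represented as
# {m/z: intensity} dicts. This is the traditional score, not the paper's
# optimized rank-based variant.

def spectral_dot(spec_a, spec_b):
    peaks = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(k, 0.0) * spec_b.get(k, 0.0) for k in peaks)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Identical spectra score 1.0 and spectra with no shared peaks score 0.0; the paper's contribution is replacing this raw-intensity cosine with scores whose sensitivity holds up as the library grows to proteome scale.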
An ontology-based nurse call management system (oNCS) with probabilistic priority assessment
2011-01-01
Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors, assigned to a patient. Methods The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. 
Moreover, the oNCS system significantly improves the assignment of nurses to calls. A nurse generally arrives at the location of a call faster, and the workload distribution amongst the nurses improves. PMID:21294860
PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budavári, Tamás; Basu, Amitabh, E-mail: budavari@jhu.edu, E-mail: basu.amitabh@jhu.edu
2016-10-01
One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to multiple candidate associations of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
Probabilistic Cross-identification in Crowded Fields as an Assignment Problem
NASA Astrophysics Data System (ADS)
Budavári, Tamás; Basu, Amitabh
2016-10-01
One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to multiple candidate associations of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
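The catalog matching described above can be pictured with a toy assignment problem. The sketch below uses a hypothetical cost matrix (e.g., negative log-likelihoods of candidate associations; the values are illustrative only) and finds the best one-to-one matching by brute-force enumeration; the Hungarian algorithm the paper employs solves the same problem in polynomial time.

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustively find the minimum-cost one-to-one assignment.
    cost[i][j] could hold, e.g., the negative log-likelihood of
    matching detection i in catalog A to detection j in catalog B.
    The Hungarian algorithm solves the same problem in O(n^3);
    brute force is shown here only for clarity."""
    n = len(cost)
    best_total, best_perm = None, None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best_total is None or total < best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

# Hypothetical 3x3 cost matrix (illustrative values only)
cost = [[1.0, 4.0, 5.0],
        [2.0, 1.5, 6.0],
        [3.0, 2.0, 0.5]]
total, perm = best_assignment(cost)
```

For production-scale catalogs, a polynomial-time solver such as `scipy.optimize.linear_sum_assignment` would replace the enumeration.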
Eddy, Sean R.
2008-01-01
Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
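The conjectured Gumbel tail with constant λ = log 2 makes E-value computation straightforward. A hedged sketch follows; the location parameter `mu` and the number of comparisons are placeholders, not Pfam-calibrated values.

```python
import math

LAMBDA = math.log(2)  # conjectured constant for probabilistic bit scores

def gumbel_pvalue(score, mu, lam=LAMBDA):
    """P(S > score) under a Gumbel distribution with location mu."""
    return 1.0 - math.exp(-math.exp(-lam * (score - mu)))

def evalue(score, mu, n_comparisons, lam=LAMBDA):
    """Expected number of hits scoring >= score in a database search
    of n_comparisons independent profile/sequence comparisons."""
    return n_comparisons * gumbel_pvalue(score, mu, lam)
```

With λ fixed at log 2, only the location parameter needs calibration, which is the practical payoff the abstract describes.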
Bayesian modeling of consumer behavior in the presence of anonymous visits
NASA Astrophysics Data System (ADS)
Novak, Julie Esther
Tailoring content to consumers has become a hallmark of marketing and digital media, particularly as it has become easier to identify customers across usage or purchase occasions. However, across a wide variety of contexts, companies find that customers do not consistently identify themselves, leaving a substantial fraction of anonymous visits. We develop a Bayesian hierarchical model that allows us to probabilistically assign anonymous sessions to users. These probabilistic assignments take into account a customer's demographic information, frequency of visitation, activities taken when visiting, and times of arrival. We present two studies, one with synthetic and one with real data, where we demonstrate improved performance over two popular practices (nearest-neighbor matching and deleting the anonymous visits) due to increased efficiency and reduced bias driven by the non-ignorability of which types of events are more likely to be anonymous. Using our proposed model, we avoid potential bias in understanding the effect of a firm's marketing on its customers, improve inference about the total number of customers in the dataset, and provide more precise targeted marketing to both previously observed and unobserved customers.
Kherfi, Mohammed Lamine; Ziou, Djemel
2006-04-01
In content-based image retrieval, understanding the user's needs is a challenging task that requires integrating the user in the process of retrieval. Relevance feedback (RF) has proven to be an effective tool for taking the user's judgement into account. In this paper, we present a new RF framework based on a feature selection algorithm that nicely combines the advantages of a probabilistic formulation with those of using both the positive example (PE) and the negative example (NE). Through interaction with the user, our algorithm learns the importance the user assigns to image features, and then applies the results obtained to define similarity measures that correspond better to the user's judgement. The use of the NE allows images undesired by the user to be discarded, thereby improving retrieval accuracy. As for the probabilistic formulation of the problem, it presents a multitude of advantages and opens the door to more modeling possibilities that achieve a good feature selection. It makes it possible to cluster the query data into classes, choose the probability law that best models each class, model missing data, and support queries with multiple PE and/or NE classes. The basic principle of our algorithm is to assign more importance to features with a high likelihood and those which distinguish well between PE classes and NE classes. The proposed algorithm was validated separately and in an image retrieval context, and the experiments show that it performs a good feature selection and contributes to improving retrieval effectiveness.
Adaptive role switching promotes fairness in networked ultimatum game.
Wu, Te; Fu, Feng; Zhang, Yanling; Wang, Long
2013-01-01
In recent years, mechanisms favoring a fair split in the ultimatum game have attracted growing interest because of their practical implications for international bargaining. In this game, two players are randomly assigned two different roles to split an offer: the proposer suggests how to split and the responder decides whether or not to accept it. Only when both agree is the offer successfully split; otherwise both get nothing. It is of importance and interest to break the symmetry in role assignment, especially when the game is repeatedly played in a heterogeneous population. Here we consider an adaptive role assignment: whenever the split fails, the two players switch their roles probabilistically. The results show that this simple feedback mechanism proves much more effective at promoting fairness than other alternatives (where, for example, the role assignment is based on the number of neighbors).
Calibration of Predictor Models Using Multiple Validation Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
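As a rough illustration of the interval-predictor idea, the sketch below wraps a least-squares line in the smallest symmetric band that contains every observation. This is only a simplified stand-in: the paper's IPMs come from an optimization that minimizes the spread directly, not from a least-squares center.

```python
def interval_predictor(xs, ys):
    """Crude IPM sketch: fit a least-squares line, then widen it
    symmetrically until it contains all observations.
    Returns (a, b, half_width) so that y is predicted to lie in
    a*x + b +/- half_width for every observed point."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    half_width = max(abs(y - (a * x + b)) for x, y in zip(xs, ys))
    return a, b, half_width
```

Because the band is widened to cover all data, the propagated prediction is an interval-valued function of the input, mirroring the set-propagation behavior the abstract describes.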
Integrated Campaign Probabilistic Cost, Schedule, Performance, and Value for Program Office Support
NASA Technical Reports Server (NTRS)
Cornelius, David; Sasamoto, Washito; Daugherty, Kevin; Deacon, Shaun
2012-01-01
This paper describes an integrated assessment tool developed at NASA Langley Research Center that incorporates probabilistic analysis of life cycle cost, schedule, launch performance, on-orbit performance, and value across a series of planned space-based missions, or campaign. Originally designed as an aid in planning the execution of missions to accomplish the National Research Council 2007 Earth Science Decadal Survey, it utilizes Monte Carlo simulation of a series of space missions for assessment of resource requirements and expected return on investment. Interactions between simulated missions are incorporated, such as competition for launch site manifest, to capture unexpected and non-linear system behaviors. A novel value model is utilized to provide an assessment of the probabilistic return on investment. A demonstration case is discussed to illustrate the tool utility.
Axelsson, Jan; Riklund, Katrine; Nyberg, Lars; Dayan, Peter; Bäckman, Lars
2017-01-01
Probabilistic reward learning is characterised by individual differences that become acute in aging. This may be due to age-related dopamine (DA) decline affecting neural processing in striatum, prefrontal cortex, or both. We examined this by administering a probabilistic reward learning task to younger and older adults, and combining computational modelling of behaviour, fMRI and PET measurements of DA D1 availability. We found that anticipatory value signals in ventromedial prefrontal cortex (vmPFC) were attenuated in older adults. The strength of this signal predicted performance beyond age and was modulated by D1 availability in nucleus accumbens. These results uncover that a value-anticipation mechanism in vmPFC declines in aging, and that this mechanism is associated with DA D1 receptor availability. PMID:28870286
Quantum Tasks with Non-maximally Quantum Channels via Positive Operator-Valued Measurement
NASA Astrophysics Data System (ADS)
Peng, Jia-Yin; Luo, Ming-Xing; Mo, Zhi-Wen
2013-01-01
By using a proper positive operator-valued measure (POVM), we present two new schemes for probabilistic transmission with non-maximally entangled four-particle cluster states. In the first scheme, we demonstrate that two non-maximally entangled four-particle cluster states can be used to probabilistically share an unknown three-particle GHZ-type state with either distant agent. In the second protocol, we demonstrate that a non-maximally entangled four-particle cluster state can be used to teleport an arbitrary unknown multi-particle state in a probabilistic manner with appropriate unitary operations and POVM. Moreover, the total success probabilities of these two schemes are also worked out.
Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.
Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong
2015-11-01
Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.
Process for computing geometric perturbations for probabilistic analysis
Fitch, Simeon H. K. [Charlottesville, VA]; Riha, David S. [San Antonio, TX]; Thacker, Ben H. [San Antonio, TX]
2012-04-10
A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
Probabilistic Structural Analysis of SSME Turbopump Blades: Probabilistic Geometry Effects
NASA Technical Reports Server (NTRS)
Nagpal, V. K.
1985-01-01
A probabilistic study was initiated to evaluate the effects of geometric and material property tolerances on the structural response of turbopump blades. To complete this study, a number of important probabilistic variables were identified that are believed to affect the structural response of the blade. In addition, a methodology was developed to statistically quantify the influence of these probabilistic variables in an optimized way. The identified variables include random geometric and material property perturbations, different loadings and a probabilistic combination of these loadings. The influences of these probabilistic variables are to be quantified by evaluating the blade structural response. Studies of the geometric perturbations were conducted for a flat plate geometry as well as for a space shuttle main engine blade geometry using a special-purpose code based on the finite element approach. Analyses indicate that the variances of the perturbations about given mean values have a significant influence on the response.
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
Relative Gains, Losses, and Reference Points in Probabilistic Choice in Rats
Marshall, Andrew T.; Kirkpatrick, Kimberly
2015-01-01
Theoretical reference points have been proposed to differentiate probabilistic gains from probabilistic losses in humans, but such a phenomenon in non-human animals has yet to be thoroughly elucidated. Three experiments evaluated the effect of reward magnitude on probabilistic choice in rats, seeking to determine reference point use by examining the effect of previous outcome magnitude(s) on subsequent choice behavior. Rats were trained to choose between an outcome that always delivered reward (low-uncertainty choice) and one that probabilistically delivered reward (high-uncertainty). The probability of high-uncertainty outcome receipt and the magnitudes of low-uncertainty and high-uncertainty outcomes were manipulated within and between experiments. Both the low- and high-uncertainty outcomes involved variable reward magnitudes, so that either a smaller or larger magnitude was probabilistically delivered, as well as reward omission following high-uncertainty choices. In Experiments 1 and 2, the between groups factor was the magnitude of the high-uncertainty-smaller (H-S) and high-uncertainty-larger (H-L) outcome, respectively. The H-S magnitude manipulation differentiated the groups, while the H-L magnitude manipulation did not. Experiment 3 showed that manipulating the probability of differential losses as well as the expected value of the low-uncertainty choice produced systematic effects on choice behavior. The results suggest that the reference point for probabilistic gains and losses was the expected value of the low-uncertainty choice. Current theories of probabilistic choice behavior have difficulty accounting for the present results, so an integrated theoretical framework is proposed. Overall, the present results have implications for understanding individual differences and corresponding underlying mechanisms of probabilistic choice behavior. PMID:25658448
Conjoint-measurement framework for the study of probabilistic information processing.
NASA Technical Reports Server (NTRS)
Wallsten, T. S.
1972-01-01
The theory of conjoint measurement described by Krantz et al. (1971) is shown to indicate how a descriptive model of human processing of probabilistic information built around Bayes' rule is to be tested and how it is to be used to obtain subjective scale values. Specific relationships concerning these scale values are shown to emerge, and the theoretical prospects resulting from this development are discussed.
NASA Technical Reports Server (NTRS)
Canfield, R. C.; Ricchiazzi, P. J.
1980-01-01
An approximate probabilistic radiative transfer equation and the statistical equilibrium equations are simultaneously solved for a model hydrogen atom consisting of three bound levels and ionization continuum. The transfer equation for L-alpha, L-beta, H-alpha, and the Lyman continuum is explicitly solved assuming complete redistribution. The accuracy of this approach is tested by comparing source functions and radiative loss rates to values obtained with a method that solves the exact transfer equation. Two recent model solar-flare chromospheres are used for this test. It is shown that for the test atmospheres the probabilistic method gives values of the radiative loss rate that are characteristically good to a factor of 2. The advantage of this probabilistic approach is that it retains a description of the dominant physical processes of radiative transfer in the complete redistribution case, yet it achieves a major reduction in computational requirements.
Identifying a Probabilistic Boolean Threshold Network From Samples.
Melkman, Avraham A; Cheng, Xiaoqing; Ching, Wai-Ki; Akutsu, Tatsuya
2018-04-01
This paper studies the problem of exactly identifying the structure of a probabilistic Boolean network (PBN) from a given set of samples, where PBNs are probabilistic extensions of Boolean networks. Cheng et al. studied the problem while focusing on PBNs consisting of pairs of AND/OR functions. This paper considers PBNs consisting of Boolean threshold functions while focusing on those threshold functions that have unit coefficients. The treatment of Boolean threshold functions, and triplets and -tuplets of such functions, necessitates a deepening of the theoretical analyses. It is shown that wide classes of PBNs with such threshold functions can be exactly identified from samples under reasonable constraints, which include: 1) PBNs in which any number of threshold functions can be assigned provided that all have the same number of input variables and 2) PBNs consisting of pairs of threshold functions with different numbers of input variables. It is also shown that the problem of deciding the equivalence of two Boolean threshold functions is solvable in pseudopolynomial time but remains co-NP complete.
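A unit-coefficient Boolean threshold function, and a brute-force equivalence check, can be sketched as follows. The exponential enumeration is only feasible for small n, which is consistent with the hardness result above; the paper's pseudopolynomial equivalence algorithm is not reproduced here.

```python
from itertools import product

def threshold_fn(inputs, theta):
    """Unit-coefficient Boolean threshold function over the given
    input indices: outputs 1 iff at least theta of them are 1."""
    return lambda x: int(sum(x[i] for i in inputs) >= theta)

def equivalent(f, g, n):
    """Brute-force equivalence test over all 2^n assignments.
    Exponential in n; shown only to make the decision problem concrete."""
    return all(f(x) == g(x) for x in product((0, 1), repeat=n))
```

For example, a threshold of 1 over two inputs behaves as OR regardless of the order in which the inputs are listed, while thresholds 1 and 2 over the same inputs differ.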
Application of a stochastic snowmelt model for probabilistic decisionmaking
NASA Technical Reports Server (NTRS)
Mccuen, R. H.
1983-01-01
A stochastic form of the snowmelt runoff model that can be used for probabilistic decision-making was developed. The use of probabilistic streamflow predictions instead of single valued deterministic predictions leads to greater accuracy in decisions. While the accuracy of the output function is important in decisionmaking, it is also important to understand the relative importance of the coefficients. Therefore, a sensitivity analysis was made for each of the coefficients.
NASA Technical Reports Server (NTRS)
Boyce, L.
1992-01-01
A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.
Almansa, Carmen; Martínez-Paz, José M
2011-03-01
Cost-benefit analysis is a standard methodological platform for public investment evaluation. In high environmental impact projects, with a long-term effect on future generations, the choice of discount rate and time horizon is of particular relevance, because it can lead to very different profitability assessments. This paper describes some recent approaches to environmental discounting and applies them, together with a number of classical procedures, to the economic evaluation of a plant for the desalination of irrigation return water from intensive farming, aimed at halting the degradation of an area of great ecological value, the Mar Menor, in South Eastern Spain. A Monte Carlo procedure is used in four CBA approaches and three time horizons to carry out a probabilistic sensitivity analysis designed to integrate the views of an international panel of experts in environmental discounting with the uncertainty affecting the market price of the project's main output, i.e., irrigation water for a water-deprived area. The results show which discounting scenarios most accurately estimate the socio-environmental profitability of the project while also considering the risk associated with these two key parameters. The analysis also provides some methodological findings regarding ways of assessing financial and environmental profitability in decisions concerning public investment in the environment.
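The Monte Carlo treatment of discount-rate uncertainty can be sketched as follows. The cash flows and the uniform rate interval below are illustrative assumptions, not the Mar Menor project figures.

```python
import random
import statistics

def npv(cash_flows, rate):
    """Net present value of a stream of yearly cash flows,
    with cash_flows[0] occurring at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def mc_npv(cash_flows, rate_low, rate_high, n=10000, seed=42):
    """Probabilistic sensitivity analysis: sample the discount rate
    from a uniform interval and summarize the resulting NPVs."""
    rng = random.Random(seed)
    samples = [npv(cash_flows, rng.uniform(rate_low, rate_high))
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)
```

In practice the rate distribution would encode the expert-panel views the abstract mentions, and separate draws would cover the uncertain water price.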
Impact of Missing Data for Body Mass Index in an Epidemiologic Study.
Razzaghi, Hilda; Tinker, Sarah C; Herring, Amy H; Howards, Penelope P; Waller, D Kim; Johnson, Candice Y
2016-07-01
Objective To assess the potential impact of missing data on body mass index (BMI) on the association between prepregnancy obesity and specific birth defects. Methods Data from the National Birth Defects Prevention Study (NBDPS) were analyzed. We assessed the factors associated with missing BMI data among mothers of infants without birth defects. Four analytic methods were then used to assess the impact of missing BMI data on the association between maternal prepregnancy obesity and three birth defects: spina bifida, gastroschisis, and cleft lip with/without cleft palate. The analytic methods were: (1) complete case analysis; (2) assignment of missing values to either obese or normal BMI; (3) multiple imputation; and (4) probabilistic sensitivity analysis. Logistic regression was used to estimate crude and adjusted odds ratios (aOR) and 95% confidence intervals (CI). Results Of the NBDPS control mothers, 4.6% were missing BMI data, and most of the missing values were attributable to missing height (~90%). Missing BMI data was associated with birth outside of the US (aOR 8.6; 95% CI 5.5, 13.4), interview in Spanish (aOR 2.4; 95% CI 1.8, 3.2), Hispanic ethnicity (aOR 2.0; 95% CI 1.2, 3.4), and <12 years education (aOR 2.3; 95% CI 1.7, 3.1). Overall, the results of the multiple imputation and probabilistic sensitivity analysis were similar to those of the complete case analysis. Conclusions Although in some scenarios missing BMI data can bias the magnitude of association, it does not appear likely to have impacted conclusions from a traditional complete case analysis of these data.
Sayers, Adrian; Ben-Shlomo, Yoav; Blom, Ashley W; Steele, Fiona
2016-01-01
Studies involving the use of probabilistic record linkage are becoming increasingly common. However, the methods underpinning probabilistic record linkage are not widely taught or understood, and therefore these studies can appear to be a ‘black box’ research tool. In this article, we aim to describe the process of probabilistic record linkage through a simple exemplar. We first introduce the concept of deterministic linkage and contrast this with probabilistic linkage. We illustrate each step of the process using a simple exemplar and describe the data structure required to perform a probabilistic linkage. We describe the process of calculating and interpreting match weights and how to convert match weights into posterior probabilities of a match using Bayes' theorem. We conclude this article with a brief discussion of some of the computational demands of record linkage, how you might assess the quality of your linkage algorithm, and how epidemiologists can maximize the value of their record-linked research using robust record linkage methods. PMID:26686842
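The match-weight and Bayes-posterior steps described above can be sketched in the classic Fellegi-Sunter form. The m- and u-probabilities and prior odds below are illustrative placeholders, not values from the article.

```python
import math

def match_weight(agree, m, u):
    """Fellegi-Sunter log2 weight for one comparison field.
    m = P(field agrees | records truly match),
    u = P(field agrees | records do not match)."""
    if agree:
        return math.log2(m / u)
    return math.log2((1 - m) / (1 - u))

def posterior_match_prob(total_weight, prior_odds):
    """Convert the summed log2 weight over all fields into
    P(match | observed agreements) via Bayes' theorem:
    posterior odds = prior odds * 2**weight."""
    odds = prior_odds * 2 ** total_weight
    return odds / (1 + odds)
```

Summing `match_weight` over all compared fields and feeding the total into `posterior_match_prob` reproduces, in miniature, the calculation the article walks through.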
Weighing costs and losses: A decision making game using probabilistic forecasts
NASA Astrophysics Data System (ADS)
Werner, Micha; Ramos, Maria-Helena; Wetterhall, Frederik; Cranston, Michael; van Andel, Schalk-Jan; Pappenberger, Florian; Verkade, Jan
2017-04-01
Probabilistic forecasts are increasingly recognised as an effective and reliable tool to communicate uncertainties. The economic value of probabilistic forecasts has been demonstrated by several authors, showing the benefit to using probabilistic forecasts over deterministic forecasts in several sectors, including flood and drought warning, hydropower, and agriculture. Probabilistic forecasting is also central to the emerging concept of risk-based decision making, and underlies emerging paradigms such as impact-based forecasting. Although the economic value of probabilistic forecasts is easily demonstrated in academic works, its evaluation in practice is more complex. The practical use of probabilistic forecasts requires decision makers to weigh the cost of an appropriate response to a probabilistic warning against the projected loss that would occur if the event forecast becomes reality. In this paper, we present the results of a simple game that aims to explore how decision makers are influenced by the costs required for taking a response and the potential losses they face in case the forecast flood event occurs. Participants play the role of one of three possible different shop owners. Each type of shop has losses of quite different magnitude, should a flood event occur. The shop owners are presented with several forecasts, each with a probability of a flood event occurring, which would inundate their shop and lead to those losses. In response, they have to decide if they want to do nothing, raise temporary defences, or relocate their inventory. Each action comes at a cost; and the different shop owners therefore have quite different cost/loss ratios. The game was played on four occasions. Players were attendees of the ensemble hydro-meteorological forecasting session of the 2016 EGU Assembly, professionals participating at two other conferences related to hydrometeorology, and a group of students. 
All audiences were familiar with the principles of forecasting and water-related risks, and one of the audiences comprised a group of experts in probabilistic forecasting. Results show that the different shop owners do take the costs of taking action and the potential losses into account in their decisions. Shop owners with a low cost/loss ratio were found to be more inclined to take actions based on the forecasts, though the absolute value of the losses also increased the willingness to take action. Little differentiation was found between the different groups of players.
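The shop owners' dilemma reduces to comparing expected expenses: each action's cost plus the flood probability times the loss remaining after that action. A minimal sketch with invented cost/loss figures (not the game's actual values):

```python
def best_action(p_flood, actions):
    """Choose the action minimizing expected expense:
    cost of acting plus flood probability times residual loss.
    actions: list of (name, cost, residual_loss_if_flood)."""
    return min(actions, key=lambda a: a[1] + p_flood * a[2])[0]

# Invented cost/loss figures for one hypothetical shop owner
shop = [("do nothing",           0.0, 1000.0),
        ("temporary defences", 100.0,  500.0),
        ("relocate inventory", 400.0,    0.0)]
```

With these figures, low forecast probabilities favor doing nothing and high probabilities favor relocating, which mirrors the cost/loss-ratio behavior observed among the players.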
Effects of delay and probability combinations on discounting in humans
Cox, David J.; Dallery, Jesse
2017-01-01
To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n = 212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n = 98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. PMID:27498073
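The 5-trial adjusting-delay procedure can be sketched as a short binary search against an idealized hyperbolic responder. The discount rate k and the search bounds below are assumptions for illustration, not parameters from the experiments.

```python
def find_half_value_delay(k, n_trials=5, lo=0.0, hi=100.0):
    """Titrate the delay D at which a delayed amount loses half its
    value. For a hyperbolic responder V = A / (1 + k*D), the true
    answer is D = 1/k; five bisection trials approximate it, which
    is the efficiency gain of the 5-trial task."""
    for _ in range(n_trials):
        mid = (lo + hi) / 2
        # responder still prefers the delayed amount while its
        # discounted value exceeds half the nominal amount
        if 1 / (1 + k * mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With k = 0.1 the true half-value delay is 10 (e.g., $1000 valued at $500 with a 10-year delay), and five trials already land close to it.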
A Discounting Framework for Choice With Delayed and Probabilistic Rewards
Green, Leonard; Myerson, Joel
2005-01-01
When choosing between delayed or uncertain outcomes, individuals discount the value of such outcomes on the basis of the expected time to or the likelihood of their occurrence. In an integrative review of the expanding experimental literature on discounting, the authors show that although the same form of hyperbola-like function describes discounting of both delayed and probabilistic outcomes, a variety of recent findings are inconsistent with a single-process account. The authors also review studies that compare discounting in different populations and discuss the theoretical and practical implications of the findings. The present effort illustrates the value of studying choice involving both delayed and probabilistic outcomes within a general discounting framework that uses similar experimental procedures and a common analytical approach. PMID:15367080
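The hyperbola-like functions reviewed above share a simple common form: V = A/(1 + kD) for a delayed amount, and V = A/(1 + hθ) for an uncertain amount, where θ = (1 − p)/p is the odds against receiving it. A sketch with illustrative parameter values:

```python
def delay_discounted_value(amount, delay, k):
    """Hyperbolic discounting of a delayed reward: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

def probability_discounted_value(amount, p, h):
    """Hyperbolic discounting of a probabilistic reward by the
    odds against its receipt: theta = (1 - p) / p."""
    theta = (1 - p) / p
    return amount / (1 + h * theta)
```

The formal parallel between the two functions is what makes the common analytical framework possible, even though, as the review argues, a single underlying process need not generate both.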
Probabilistic analysis of preload in the abutment screw of a dental implant complex.
Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R
2008-09-01
Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. 
The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
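The Monte Carlo branch of the preload analysis can be sketched with the standard torque-preload relation F = T / (K·d), where K is a nut factor lumping the friction effects, in place of the study's finite-element model. All distributions and dimensions below are illustrative assumptions, not the study's inputs:

```python
import random

def simulate_preload(n=10000, seed=1):
    """Monte Carlo sample of abutment-screw preload, F = T / (K * d).
    Torque T and nut factor K are random variables; K is lower (and the
    preload higher) for well-lubricated threads. Values are illustrative."""
    rng = random.Random(seed)
    d = 0.002  # nominal screw diameter, m (assumed)
    samples = []
    for _ in range(n):
        torque = rng.gauss(0.32, 0.03)      # applied torque, N*m (assumed)
        nut_factor = rng.gauss(0.26, 0.04)  # lubricated-environment K (assumed)
        samples.append(torque / (nut_factor * d))
    samples.sort()  # sorted samples give an empirical preload CDF
    return samples

preloads = simulate_preload()
median = preloads[len(preloads) // 2]
# Probability that the preload falls inside a hypothetical target range:
p_target = sum(550.0 <= f <= 750.0 for f in preloads) / len(preloads)
```

Reading probabilities off the sorted samples mirrors the study's cumulative distribution function of preload.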
Fully probabilistic control design in an adaptive critic framework.
Herzallah, Randa; Kárný, Miroslav
2011-12-01
An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem, in particular the very hard multivariate integration and the approximate interpolation of the multivariate functions involved. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
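The divergence minimized in fully probabilistic design can be illustrated for discrete closed-loop descriptions; the distributions below are hypothetical:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i): the quantity FPD minimizes
    between the actual closed-loop distribution p and the ideal one q.
    Terms with p_i = 0 contribute zero by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

actual = [0.5, 0.3, 0.2]  # closed-loop behaviour under a candidate controller
ideal = [0.6, 0.3, 0.1]   # desired closed-loop distribution
d = kl_divergence(actual, ideal)  # >= 0, and 0 only when the two coincide
```

In FPD the controller itself enters through the closed-loop distribution, so minimizing this divergence over admissible controllers yields the optimal randomized control law.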
Willingness-to-pay for a probabilistic flood forecast: a risk-based decision-making game
NASA Astrophysics Data System (ADS)
Arnal, Louise; Ramos, Maria-Helena; Coughlan de Perez, Erin; Cloke, Hannah Louise; Stephens, Elisabeth; Wetterhall, Fredrik; van Andel, Schalk Jan; Pappenberger, Florian
2016-08-01
Probabilistic hydro-meteorological forecasts have been used increasingly over the last decades to communicate forecast uncertainty. This uncertainty is double-edged: it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partially to the difficulty in transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called "How much are you prepared to pay for a forecast?". The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydro-meteorology. The aim of this game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants' willingness-to-pay for a forecast, the results of the game show that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performances as decision-makers.
NASA Astrophysics Data System (ADS)
Peeters, L. J.; Mallants, D.; Turnadge, C.
2017-12-01
Groundwater impact assessments are increasingly being undertaken in a probabilistic framework whereby various sources of uncertainty (model parameters, model structure, boundary conditions, and calibration data) are taken into account. This has resulted in groundwater impact metrics being presented as probability density functions and/or cumulative distribution functions, spatial maps displaying isolines of percentile values for specific metrics, etc. Groundwater management on the other hand typically uses single values (i.e., in a deterministic framework) to evaluate what decisions are required to protect groundwater resources. For instance, in New South Wales, Australia, a nominal drawdown value of two metres is specified by the NSW Aquifer Interference Policy as a trigger-level threshold. In many cases, when drawdowns induced by groundwater extraction exceed two metres, "make-good" provisions are enacted (such as the surrendering of extraction licenses). The information obtained from a quantitative uncertainty analysis can be used to guide decision making in several ways. Two examples are discussed here: the first of which would not require modification of existing "deterministic" trigger or guideline values, whereas the second example assumes that the regulatory criteria are also expressed in probabilistic terms. The first example is a straightforward interpretation of calculated percentile values for specific impact metrics. The second example goes a step further, as the previous deterministic thresholds do not currently allow for a probabilistic interpretation; e.g., there is no statement that "the probability of exceeding the threshold shall not be larger than 50%". It would indeed be sensible to have a set of thresholds with an associated acceptable probability of exceedance (or probability of not exceeding a threshold) that decreases as the impact increases.
We here illustrate how both the prediction uncertainty and management rules can be expressed in a probabilistic framework, using groundwater metrics derived for a highly stressed groundwater system.
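The probabilistic reformulation of a management rule — checking whether the probability of exceeding the 2 m drawdown trigger stays below an acceptable level — follows directly from predictive samples of the impact metric. The sample values and the 50% criterion below are illustrative:

```python
def exceedance_probability(samples, threshold):
    """Fraction of predictive samples that exceed a regulatory threshold."""
    return sum(s > threshold for s in samples) / len(samples)

# Hypothetical drawdown samples (metres) from an uncertainty analysis:
drawdown = [0.8, 1.2, 1.5, 1.9, 2.1, 2.4, 1.1, 0.9, 1.7, 2.6]
p_exceed = exceedance_probability(drawdown, threshold=2.0)
# e.g. a rule of the form "P(drawdown > 2 m) shall not exceed 50%":
compliant = p_exceed <= 0.5
```

The same one-liner evaluated at several thresholds yields the decreasing acceptable-probability schedule proposed in the abstract.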
Affective and cognitive factors influencing sensitivity to probabilistic information.
Tyszka, Tadeusz; Sawicki, Przemyslaw
2011-11-01
In study 1 different groups of female students were randomly assigned to one of four probabilistic information formats. Five different levels of probability of a genetic disease in an unborn child were presented to participants (within-subject factor). After the presentation of the probability level, participants were requested to indicate the acceptable level of pain they would tolerate to avoid the disease (in their unborn child), their subjective evaluation of the disease risk, and their subjective evaluation of being worried by this risk. The results of study 1 confirmed the hypothesis that an experience-based probability format decreases the subjective sense of worry about the disease, thus, presumably, weakening the tendency to overrate the probability of rare events. Study 2 showed that for the emotionally laden stimuli, the experience-based probability format resulted in higher sensitivity to probability variations than other formats of probabilistic information. These advantages of the experience-based probability format are interpreted in terms of two systems of information processing: the rational deliberative versus the affective experiential and the principle of stimulus-response compatibility. © 2011 Society for Risk Analysis.
Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association
Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You
2017-01-01
This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with a square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets’ state. With the measurements from all sensors processed, CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems. PMID:29113085
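One building block of the SRCKF mentioned above is the third-degree spherical-radial cubature rule, which replaces Jacobian-based linearization with a fixed set of 2n equally weighted sigma points. A sketch of the point generation (the filter's remaining steps — propagation, re-averaging, and square-root covariance updates — are omitted):

```python
import math

def cubature_points(mean, sqrt_cov):
    """Third-degree cubature points used by the cubature Kalman filter:
    x_i = mean +/- sqrt(n) * S * e_i for each unit vector e_i, where S is
    a square root of the covariance; each point has equal weight 1/(2n)."""
    n = len(mean)
    scale = math.sqrt(n)
    points = []
    for i in range(n):
        for sign in (+1.0, -1.0):
            points.append([mean[j] + sign * scale * sqrt_cov[j][i]
                           for j in range(n)])
    return points  # propagate these through the dynamics, then re-average

# 2-D state with identity covariance square root:
pts = cubature_points([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

Averaging the propagated points recovers the predicted mean without ever computing a Jacobian, which is the motivation cited in the abstract.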
White, Shannon L.; Miller, William L.; Dowell, Stephanie A.; Bartron, Meredith L.; Wagner, Tyler
2018-01-01
Due to increased anthropogenic pressures on many fish populations, supplementing wild populations with captive‐raised individuals has become an increasingly common management practice. Stocking programs can be controversial due to uncertainty about the long‐term fitness effects of genetic introgression on wild populations. In particular, introgression between hatchery and wild individuals can cause declines in wild population fitness, resiliency, and adaptive potential, and contribute to local population extirpation. However, low survival and fitness of captive‐raised individuals can minimize the long‐term genetic consequences of stocking in wild populations, and to date the prevalence of introgression in actively stocked ecosystems has not been rigorously evaluated. We quantified the extent of introgression in 30 populations of wild brook trout (Salvelinus fontinalis) in a Pennsylvania watershed, and examined the correlation between introgression and 11 environmental covariates. Genetic assignment tests were used to determine the origin (wild vs. captive‐raised) for 1742 wild‐caught and 300 hatchery brook trout. To avoid assignment biases, individuals were assigned to two simulated populations that represented the average allele frequencies in wild and hatchery groups. Fish with intermediate probabilities of wild ancestry were classified as introgressed, with threshold values determined through simulation. Even with reoccurring stocking at most sites, over 93% of wild‐caught individuals probabilistically assigned to wild origin, and only 5.6% of wild‐caught fish assigned to introgressed. Models examining environmental drivers of introgression explained less than 3% of the among‐population variability, and all estimated effects were highly uncertain. This was not surprising given overall low introgression observed in this study. 
Our results suggest that introgression of hatchery‐derived genotypes can occur at low rates, even in actively stocked ecosystems and across a range of habitats. However, a cautious approach to stocking may still be warranted, as the potential effects of stocking on wild population fitness and the mechanisms limiting introgression are not known.
PROBABILISTIC SAFETY ASSESSMENT OF OPERATIONAL ACCIDENTS AT THE WASTE ISOLATION PILOT PLANT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucker, D.F.
2000-09-01
This report presents a probabilistic safety assessment of radioactive doses as consequences from accident scenarios to complement the deterministic assessment presented in the Waste Isolation Pilot Plant (WIPP) Safety Analysis Report (SAR). The International Commission on Radiological Protection (ICRP) recommends that both assessments be conducted to ensure that "an adequate level of safety has been achieved and that no major contributors to risk are overlooked" (ICRP 1993). To that end, the probabilistic assessment for the WIPP accident scenarios addresses the wide range of assumptions, e.g. the range of values representing the radioactive source of an accident, that could possibly have been overlooked by the SAR. Routine releases of radionuclides from the WIPP repository to the environment during the waste emplacement operations are expected to be essentially zero. In contrast, potential accidental releases from postulated accident scenarios during waste handling and emplacement could be substantial, which necessitates the need for radiological air monitoring and confinement barriers (DOE 1999). The WIPP Safety Analysis Report (SAR) calculated doses from accidental releases to the on-site (at 100 m from the source) and off-site (at the Exclusive Use Boundary and Site Boundary) public by a deterministic approach. This approach, as demonstrated in the SAR, uses single-point values of key parameters to assess the 50-year, whole-body committed effective dose equivalent (CEDE). The basic assumptions used in the SAR to formulate the CEDE are retained for this report's probabilistic assessment. However, for the probabilistic assessment, single-point parameter values were replaced with probability density functions (PDF) and were sampled over an expected range. Monte Carlo simulations were run, in which 10,000 iterations were performed by randomly selecting one value for each parameter and calculating the dose.
Statistical information was then derived from the 10,000-iteration batch, which included the 5%, 50%, and 95% dose likelihoods, and the sensitivity of each assumption to the calculated doses. As one would intuitively expect, the doses from the probabilistic assessment for most scenarios were found to be much less than those from the deterministic assessment. The lower dose of the probabilistic assessment can be attributed to a "smearing" of values from the high and low end of the PDF spectrum of the various input parameters. The analysis also found a potential weakness in the deterministic analysis used in the SAR: a detail of drum loading was not taken into consideration. Waste emplacement operations thus far have handled drums from each shipment as a single unit, i.e. drums from each shipment are kept together. Shipments typically come from a single waste stream, and therefore the curie loading of each drum can be considered nearly identical to that of its neighbor. Calculations show that if there are large numbers of drums used in the accident scenario assessment, e.g. 28 drums in the waste hoist failure scenario (CH5), then the probabilistic dose assessment calculations will diverge from the deterministically determined doses. As it is currently calculated, the deterministic dose assessment assumes one drum loaded to the maximum allowable (80 PE-Ci), and the remaining are 10% of the maximum. The effective average of drum curie content is therefore less in the deterministic assessment than the probabilistic assessment for a large number of drums. EEG recommends that the WIPP SAR calculations be revisited and updated to include a probabilistic safety assessment.
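The Monte Carlo procedure described — replacing single-point parameter values with PDFs, running 10,000 iterations, and reading off the 5%, 50%, and 95% dose likelihoods — can be sketched as follows. The dose model and every distribution below are placeholders, not the SAR's actual equations or parameters:

```python
import random

def monte_carlo_dose(n=10000, seed=42):
    """Sample a toy dose model, dose = source * release_frac * dcf, and
    return the 5th, 50th, and 95th percentiles of the sampled doses."""
    rng = random.Random(seed)
    doses = []
    for _ in range(n):
        source = rng.uniform(8.0, 80.0)               # drum content, PE-Ci (assumed)
        release_frac = rng.lognormvariate(-7.0, 1.0)  # airborne release fraction (assumed)
        dcf = 2.5e3                                   # dose conversion factor (assumed)
        doses.append(source * release_frac * dcf)
    doses.sort()
    return doses[n // 20], doses[n // 2], doses[int(n * 0.95)]

d05, d50, d95 = monte_carlo_dose()
```

Sorting the batch and indexing at fixed fractions is the simplest way to extract the percentile likelihoods quoted in the report.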
Probabilistic exposure assessment to face and oral care cosmetic products by the French population.
Bernard, A; Dornic, N; Roudot, Ac; Ficheux, As
2018-01-01
Cosmetic exposure data for face and mouth are limited in Europe. The aim of the study was to assess the exposure to face cosmetics using recent French consumption data (Ficheux et al., 2016b, 2015). Exposure was assessed using a probabilistic method for thirty-one face products from four lines of products: cleanser, care, make-up and make-up remover products and two oral care products. Probabilistic exposure was assessed for different subpopulations according to sex and age in adults and children. Pregnant women were also studied. The levels of exposure to moisturizing cream, lip balm, mascara, eyeliner, cream foundation, toothpaste and mouthwash were higher than the values currently used by the Scientific Committee on Consumer Safety (SCCS). Exposure values found for eye shadow, lipstick, lotion and milk (make-up remover) were lower than SCCS values. These new French exposure values will be useful for safety assessors and for safety agencies in order to protect the general population and at-risk populations. Copyright © 2017. Published by Elsevier Ltd.
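Probabilistic exposure assessments of this kind typically propagate survey data through a simple dose equation: amount per use × frequency × retention factor ÷ body weight. A minimal sketch; every distribution below is an illustrative assumption, not the study's survey data:

```python
import random

def daily_exposure_samples(n=10000, seed=0):
    """Systemic exposure dose (mg/kg bw/day) =
    amount per use (g) * uses/day * retention factor * 1000 / body weight (kg).
    All distributions are illustrative placeholders."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        amount = rng.lognormvariate(0.0, 0.5)  # g per application (assumed)
        freq = rng.choice([1, 2, 3])           # applications per day (assumed)
        retention = 1.0                        # leave-on product retains fully
        weight = rng.gauss(67.0, 12.0)         # body weight, kg (assumed)
        out.append(amount * freq * retention * 1000.0 / weight)
    out.sort()
    return out

samples = daily_exposure_samples()
# A high percentile (e.g. the 95th) is what is typically compared
# against the SCCS default exposure values:
p95 = samples[int(len(samples) * 0.95)]
```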
Effects of delay and probability combinations on discounting in humans.
Cox, David J; Dallery, Jesse
2016-10-01
To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n=212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n=98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. Copyright © 2016 Elsevier B.V. All rights reserved.
Cascão, Angela Maria; Jorge, Maria Helena Prado de Mello; Costa, Antonio José Leal; Kale, Pauline Lorena
2016-01-01
Ill-defined causes of death are common among the elderly owing to the high frequency of comorbidities and, consequently, to the difficulty in defining the underlying cause of death. To analyze the validity and reliability of the "primary diagnosis" in hospitalization to recover the information on the underlying cause of death in natural deaths among the elderly whose deaths were originally assigned to "ill-defined cause" in their Death Certificate. The hospitalizations occurred in the state of Rio de Janeiro, in 2006. The databases obtained in the Information Systems on Mortality and Hospitalization were probabilistically linked. The following data were calculated for hospitalizations of the elderly that evolved into deaths with a natural cause: concordance percentages, Kappa coefficient, sensitivity, specificity, and the positive predictive value of the primary diagnosis. Deaths related to "ill-defined causes" were assigned to a new cause, which was defined based on the primary diagnosis. The reliability of the primary diagnosis was good, according to the total percentage of consistency (50.2%), and fair, according to the Kappa coefficient (k = 0.4; p < 0.0001). Diseases related to the circulatory system and neoplasia occurred with the highest frequency among the deaths and the hospitalizations and presented a higher consistency of positive predictive values per chapter and grouping of the International Classification of Diseases. The recovery of the information on the primary cause occurred in 22.6% of the deaths with ill-defined causes (n = 14). The methodology developed and applied for the recovery of the information on the natural cause of death among the elderly in this study had the advantage of effectiveness and the reduction of costs compared to an investigation of the death that is recommended in situations of non-linked and low positive predictive values. 
Monitoring the mortality profile by the cause of death is necessary to periodically update the predictive values.
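The agreement statistics reported above — percent concordance and the kappa coefficient between the hospitalization's primary diagnosis and the death certificate's cause — come from a square contingency table. A sketch for a simplified 2×2 table with hypothetical counts:

```python
def cohen_kappa(table):
    """Cohen's kappa from a square contingency table:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    (the diagonal) and p_e the agreement expected by chance from the
    row and column marginals."""
    total = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(len(table))) / total
    p_exp = sum(sum(table[i]) * sum(row[i] for row in table)
                for i in range(len(table))) / total ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 2x2 table (rows: primary diagnosis, cols: cause of death):
k = cohen_kappa([[40, 10], [10, 40]])  # 80% observed agreement -> kappa 0.6
```

A kappa around 0.4, as found in the study, is conventionally read as "fair to moderate" agreement beyond chance.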
Aldridge, Robert W; Shaji, Kunju; Hayward, Andrew C; Abubakar, Ibrahim
2015-01-01
The Enhanced Matching System (EMS) is a probabilistic record linkage program developed by the tuberculosis section at Public Health England to match data for individuals across two datasets. This paper outlines how EMS works and investigates its accuracy for linkage across public health datasets. EMS is a configurable Microsoft SQL Server database program. To examine the accuracy of EMS, two public health databases were matched using National Health Service (NHS) numbers as a gold standard unique identifier. Probabilistic linkage was then performed on the same two datasets without inclusion of NHS number. Sensitivity analyses were carried out to examine the effect of varying matching process parameters. Exact matching using NHS number between two datasets (containing 5931 and 1759 records) identified 1071 matched pairs. EMS probabilistic linkage identified 1068 record pairs. The sensitivity of probabilistic linkage was calculated as 99.5% (95%CI: 98.9, 99.8), specificity 100.0% (95%CI: 99.9, 100.0), positive predictive value 99.8% (95%CI: 99.3, 100.0), and negative predictive value 99.9% (95%CI: 99.8, 100.0). Probabilistic matching was most accurate when including address variables and using the automatically generated threshold for determining links with manual review. With the establishment of national electronic datasets across health and social care, EMS enables previously unanswerable research questions to be tackled with confidence in the accuracy of the linkage process. In scenarios where a small sample is being matched into a very large database (such as national records of hospital attendance) then, compared to results presented in this analysis, the positive predictive value or sensitivity may drop according to the prevalence of matches between databases. 
Despite this possible limitation, probabilistic linkage has great potential to be used where exact matching using a common identifier is not possible, including in low-income settings, and for vulnerable groups such as homeless populations, where the absence of unique identifiers and lower data quality has historically hindered the ability to identify individuals across datasets.
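The validation metrics reported for EMS follow directly from cross-tabulating probabilistic links against the gold-standard NHS-number matches. A sketch with counts of the same form as the study's (the specific numbers below are illustrative, not the published confusion matrix):

```python
def linkage_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV of a record-linkage run
    evaluated against a gold-standard set of true matched pairs."""
    return {
        "sensitivity": tp / (tp + fn),  # true links found / all true links
        "specificity": tn / (tn + fp),  # non-links kept apart
        "ppv": tp / (tp + fp),          # found links that are real
        "npv": tn / (tn + fn),          # rejected pairs that are truly non-links
    }

# e.g. 1066 true links found, 2 false links, 5 true links missed,
# and the remaining candidate pairs correctly not linked:
m = linkage_metrics(tp=1066, fp=2, fn=5, tn=100000)
```

As the abstract notes, with a fixed false-link rate the PPV degrades as the prevalence of true matches between the datasets falls.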
Comparison of the economic impact of different wind power forecast systems for producers
NASA Astrophysics Data System (ADS)
Alessandrini, S.; Davò, F.; Sperati, S.; Benini, M.; Delle Monache, L.
2014-05-01
Deterministic forecasts of wind production for the next 72 h at a single wind farm or at the regional level are among end-users' main requirements. However, for optimal management of wind power production and distribution it is important to provide, together with a deterministic prediction, a probabilistic one. A deterministic forecast consists of a single value for each time in the future for the variable to be predicted, while probabilistic forecasting informs on probabilities for potential future events. This means providing information about uncertainty (i.e. a forecast of the PDF of power) in addition to the commonly provided single-valued power prediction. A significant probabilistic application is related to the trading of energy in day-ahead electricity markets. It has been shown that, when trading future wind energy production, using probabilistic wind power predictions can lead to higher benefits than those obtained by using deterministic forecasts alone. In fact, by using probabilistic forecasting it is possible to solve economic model equations trying to optimize the revenue for the producer depending, for example, on the specific penalties for forecast errors valid in that market. In this work we have applied a probabilistic wind power forecast system based on the "analog ensemble" method for bidding wind energy during the day-ahead market in the case of a wind farm located in Italy. The actual hourly income for the plant is computed considering the actual selling energy prices and penalties proportional to the unbalancing, defined as the difference between the day-ahead offered energy and the actual production. The economic benefit of using a probabilistic approach for day-ahead energy bidding is evaluated, resulting in a 23% increase in the annual income for a wind farm owner in the case of knowing "a priori" the future energy prices.
The uncertainty on price forecasting partly reduces the economic benefit gained by using a probabilistic energy forecast system.
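The bidding problem described above has a newsvendor-style solution: with asymmetric penalties for producing less or more than the offered energy, the revenue-optimal offer is a quantile of the probabilistic power forecast. A sketch; the penalty structure and ensemble values below are illustrative, not the Italian market's actual rules:

```python
def optimal_bid(ensemble, penalty_short, penalty_long):
    """Offer the quantile of the predictive distribution at
    penalty_short / (penalty_short + penalty_long): the classic
    newsvendor solution for asymmetric imbalance penalties."""
    q = penalty_short / (penalty_short + penalty_long)
    values = sorted(ensemble)
    idx = min(int(q * len(values)), len(values) - 1)
    return values[idx]

# 10-member analog-ensemble power forecast (MWh) for one market hour:
members = [12.0, 15.5, 9.0, 14.2, 11.1, 16.8, 13.3, 10.4, 12.9, 14.9]
# If under-delivery is penalized 3x more than over-delivery,
# bid the 75th percentile of the ensemble:
bid = optimal_bid(members, penalty_short=30.0, penalty_long=10.0)
```

A deterministic forecast can only offer its single value; the ensemble lets the producer tilt the bid toward whichever imbalance direction is cheaper.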
Hunnicutt, Jacob N; Ulbricht, Christine M; Chrysanthopoulou, Stavroula A; Lapane, Kate L
2016-12-01
We systematically reviewed pharmacoepidemiologic and comparative effectiveness studies that use probabilistic bias analysis to quantify the effects of systematic error including confounding, misclassification, and selection bias on study results. We found articles published between 2010 and October 2015 through a citation search using Web of Science and Google Scholar and a keyword search using PubMed and Scopus. Eligibility of studies was assessed by one reviewer. Three reviewers independently abstracted data from eligible studies. Fifteen studies used probabilistic bias analysis and were eligible for data abstraction: nine simulated an unmeasured confounder and six simulated misclassification. The majority of studies simulating an unmeasured confounder did not specify the range of plausible estimates for the bias parameters. Studies simulating misclassification were in general clearer when reporting the plausible distribution of bias parameters. Regardless of the bias simulated, the probability distributions assigned to bias parameters, number of simulated iterations, sensitivity analyses, and diagnostics were not discussed in the majority of studies. Despite the prevalence and concern of bias in pharmacoepidemiologic and comparative effectiveness studies, probabilistic bias analysis to quantitatively model the effect of bias was not widely used. The quality of reporting and use of this technique varied and was often unclear. Further discussion and dissemination of the technique are warranted. Copyright © 2016 John Wiley & Sons, Ltd.
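Probabilistic bias analysis for an unmeasured confounder draws the bias parameters from explicit distributions (the very point the review faults many studies for leaving unspecified) and corrects the observed estimate in each iteration. A minimal external-adjustment sketch; all parameter distributions are illustrative assumptions:

```python
import random

def bias_analysis(rr_observed, n=10000, seed=7):
    """Simple simulation-based adjustment for one unmeasured confounder.
    Each iteration draws RR_cu (confounder-outcome risk ratio) and the
    confounder prevalences among exposed (p1) and unexposed (p0), then
    divides the observed RR by the implied bias factor."""
    rng = random.Random(seed)
    adjusted = []
    for _ in range(n):
        rr_cu = rng.lognormvariate(0.7, 0.2)  # confounder-outcome RR (assumed)
        p1 = rng.uniform(0.4, 0.6)            # prevalence in exposed (assumed)
        p0 = rng.uniform(0.1, 0.3)            # prevalence in unexposed (assumed)
        bias = (p1 * (rr_cu - 1) + 1) / (p0 * (rr_cu - 1) + 1)
        adjusted.append(rr_observed / bias)
    adjusted.sort()
    return adjusted[len(adjusted) // 2]  # median bias-adjusted estimate

rr_adj = bias_analysis(rr_observed=1.8)
```

Reporting the distributions, the iteration count, and percentiles of the adjusted estimates is exactly the documentation the review found missing in most studies.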
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
NASA Technical Reports Server (NTRS)
Abdi, Frank
1996-01-01
A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) Utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composites affordable; (2) Computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) Implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, and increasing convergence rates through high- and low-level processor assignment; (4) Creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed-workstation types of computers; and (5) Market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.
Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks
ERIC Educational Resources Information Center
Renkewitz, Frank; Jahn, Georg
2012-01-01
We validate an eye-tracking method applicable for studying memory processes in complex cognitive tasks. The method is tested with a task on probabilistic inferences from memory. It provides valuable data on the time course of processing, thus clarifying previous results on heuristic probabilistic inference. Participants learned cue values of…
Scientific assessment of accuracy, skill and reliability of ocean probabilistic forecast products.
NASA Astrophysics Data System (ADS)
Wei, M.; Rowley, C. D.; Barron, C. N.; Hogan, P. J.
2016-02-01
As ocean operational centers increasingly adopt and generate probabilistic forecast products that provide their customers with valuable forecast uncertainties, objectively assessing and measuring these complicated products is challenging. The first challenge is how to deal with the huge amount of data from the ensemble forecasts. The second is how to describe the scientific quality of probabilistic products. In fact, probabilistic forecast accuracy, skill, reliability, and resolution are different attributes of a forecast system. We briefly introduce some of the fundamental metrics such as the Reliability Diagram, Reliability, Resolution, Brier Score (BS), Brier Skill Score (BSS), Ranked Probability Score (RPS), Ranked Probability Skill Score (RPSS), Continuous Ranked Probability Score (CRPS), and Continuous Ranked Probability Skill Score (CRPSS). The values and significance of these metrics are demonstrated for forecasts from the US Navy's regional ensemble system with different numbers of ensemble members. The advantages and differences of these metrics are studied and clarified.
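Two of the listed metrics can be sketched directly: the Brier score for a set of probability forecasts of a binary event, and the corresponding skill score against a climatological reference. The forecast and outcome values below are illustrative:

```python
def brier_score(probs, outcomes):
    """BS = mean squared difference between the forecast probability and
    the binary outcome (0/1); lower is better, 0 is perfect."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """BSS = 1 - BS / BS_ref, using the climatological base rate as the
    reference forecast; positive values indicate skill over climatology."""
    base_rate = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([base_rate] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref

probs = [0.9, 0.8, 0.2, 0.1, 0.7]  # forecast event probabilities
outcomes = [1, 1, 0, 0, 1]         # observed occurrences
bs = brier_score(probs, outcomes)  # = 0.038 here
bss = brier_skill_score(probs, outcomes)
```

The ranked and continuous ranked probability scores generalize the same squared-error idea from a binary event to multi-category and continuous predictands, respectively.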
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
The $10 trillion value of better information about the transient climate response.
Hope, Chris
2015-11-13
How much is better information about climate change worth? Here, I use PAGE09, a probabilistic integrated assessment model, to find the optimal paths of CO2 emissions over time and to calculate the value of better information about one aspect of climate change, the transient climate response (TCR). Approximately halving the uncertainty range for TCR has a net present value of about $10.3 trillion (year 2005 US$) if accomplished in time for emissions to be adjusted in 2020, falling to $9.7 trillion if accomplished by 2030. Probabilistic integrated assessment modelling is the only method we have for making estimates like these for the value of better information about the science and impacts of climate change. © 2015 The Author(s).
Satellite-map position estimation for the Mars rover
NASA Technical Reports Server (NTRS)
Hayashi, Akira; Dean, Thomas
1989-01-01
A method for locating the Mars rover using an elevation map generated from satellite data is described. In exploring its environment, the rover is assumed to generate a local rover-centered elevation map that can be used to extract information about the relative position and orientation of landmarks corresponding to local maxima. These landmarks are integrated into a stochastic map which is then matched with the satellite map to obtain an estimate of the robot's current location. The landmarks are not explicitly represented in the satellite map. The results of the matching algorithm correspond to a probabilistic assessment of whether or not the robot is located within a given region of the satellite map. By assigning a probabilistic interpretation to the information stored in the satellite map, researchers are able to provide a precise characterization of the results computed by the matching algorithm.
Hanson, Stanley L.; Perkins, David M.
1995-01-01
The construction of a probabilistic ground-motion hazard map for a region follows a sequence of analyses beginning with the selection of an earthquake catalog and ending with the mapping of calculated probabilistic ground-motion values (Hanson and others, 1992). An integral part of this process is the creation of sources used for the calculation of earthquake recurrence rates and ground motions. These sources consist of areas and lines that are representative of geologic or tectonic features and faults. After the design of the sources, it is necessary to arrange the coordinate points in a particular order compatible with the input format for the SEISRISK-III program (Bender and Perkins, 1987). Source zones are usually modeled as a point-rupture source. Where applicable, linear rupture sources are modeled with articulated lines, representing known faults, or a field of parallel lines, representing a generalized distribution of hypothetical faults. Based on the distribution of earthquakes throughout the individual source zones (or a collection of several sources), earthquake recurrence rates are computed for each of the sources, and a minimum and maximum magnitude is assigned. Over a period of time from 1978 to 1980, several conferences were held by the USGS to solicit information on regions of the United States for the purpose of creating source zones for computation of probabilistic ground motions (Thenhaus, 1983). As a result of these regional meetings and previous work in the Pacific Northwest (Perkins and others, 1980), the California continental shelf (Thenhaus and others, 1980), and the Eastern outer continental shelf (Perkins and others, 1979), a consensus set of source zones was agreed upon and subsequently used to produce a national ground motion hazard map for the United States (Algermissen and others, 1982).
In this report and on the accompanying disk we provide a complete list of source areas and line sources as used for the 1982 and later 1990 seismic hazard maps for the conterminous U.S. and Alaska. These source zones are represented in the input form required for the hazard program SEISRISK-III, and they include the attenuation table and several other input parameter lines normally found at the beginning of an input data set for SEISRISK-III.
Probabilistic description of probable maximum precipitation
NASA Astrophysics Data System (ADS)
Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin
2017-04-01
Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method of moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
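The traditional deterministic moisture-maximization procedure that the study critiques can be sketched in a few lines: each storm's observed precipitation is scaled by the ratio of the maximum precipitable water on record to the precipitable water during that storm, and the PMP estimate is the largest scaled value. All numbers below are invented for illustration.

```python
# (precipitation in mm, storm precipitable water in mm) -- invented storms
storms = [
    (120.0, 30.0),
    (150.0, 40.0),
    (90.0, 25.0),
]
w_max = 55.0  # assumed maximum precipitable water on record, mm

# Scale each storm to its hypothetical maximum-moisture equivalent.
maximized = [p * (w_max / w) for p, w in storms]

# The deterministic method returns only this single value -- exactly the
# limitation (no uncertainty range) that the probabilistic model addresses.
pmp_estimate = max(maximized)
```

The sketch makes the paper's point visible: the output is one number with no uncertainty attached, and it silently assumes the maximizing storm could have occurred with the maximum observed moisture.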
How much are you prepared to PAY for a forecast?
NASA Astrophysics Data System (ADS)
Arnal, Louise; Coughlan, Erin; Ramos, Maria-Helena; Pappenberger, Florian; Wetterhall, Fredrik; Bachofen, Carina; van Andel, Schalk Jan
2015-04-01
Probabilistic hydro-meteorological forecasts are a crucial element of the decision-making chain in the field of flood prevention. The operational use of probabilistic forecasts is increasingly promoted through the development of novel state-of-the-art forecast methods, and numerical skill is continuously increasing. However, the value of such forecasts for flood early-warning systems is a topic of diverging opinions. Indeed, the word value, when applied to flood forecasting, is multifaceted. It refers not only to the raw cost of acquiring and maintaining a probabilistic forecasting system (in terms of human and financial resources, data volume, and computational time), but also, and perhaps most importantly, to the use of such products. This game aims to investigate this point. It is a willingness-to-pay game embedded in a risk-based decision-making experiment. Based on a Red Cross/Red Crescent Climate Centre game, it is a contribution to the international Hydrologic Ensemble Prediction Experiment (HEPEX). A limited number of probabilistic forecasts will be auctioned to the participants, the price of these forecasts being market driven. All participants (whether or not they have bought a forecast set) will then be taken through a decision-making process to issue warnings for extreme rainfall. This game will promote discussions around the topic of the value of forecasts for decision-making in the field of flood prevention.
A note on probabilistic models over strings: the linear algebra approach.
Bouchard-Côté, Alexandre
2013-12-01
Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our method of proof is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, and opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low-rank matrix approximation methods, to string-valued inference problems.
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf is presented, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control-input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
Combining Different Privacy-Preserving Record Linkage Methods for Hospital Admission Data.
Stausberg, Jürgen; Waldenburger, Andreas; Borgs, Christian; Schnell, Rainer
2017-01-01
Record linkage (RL) is the process of identifying pairs of records that correspond to the same entity, for example the same patient. The basic approach assigns to each pair of records a similarity weight and then determines a certain threshold, above which the two records are considered to be a match. Three different RL methods were applied under privacy-preserving conditions to hospital admission data: deterministic RL (DRL), probabilistic RL (PRL), and Bloom filters. Patient characteristics such as names were one-way encrypted (DRL, PRL) or transformed into a cryptographic long-term key (Bloom filters). Based on one year of hospital admissions, the data set was split randomly into 30 thousand new and 1.5 million known patients. With the combination of the three RL methods, a positive predictive value of 83% (95% confidence interval 65%-94%) was attained. Thus, the presented combination of RL methods seems to be suited for other applications of population-based research.
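The weight-and-threshold step described above can be sketched as follows. This is a deliberately simplified illustration: the field names and weights are invented, and real probabilistic RL derives agreement/disagreement weights from estimated m- and u-probabilities rather than using fixed constants.

```python
# Sum the weight of every field on which the two records agree.
def similarity_weight(rec_a, rec_b, field_weights):
    return sum(w for field, w in field_weights.items()
               if rec_a.get(field) == rec_b.get(field))

# Invented per-field weights and decision threshold.
field_weights = {"name_hash": 4.0, "birth_year": 2.0, "zip": 1.0}
threshold = 5.0

# Two (already one-way encrypted) records that agree on name and birth year.
a = {"name_hash": "x9f2", "birth_year": 1970, "zip": "45127"}
b = {"name_hash": "x9f2", "birth_year": 1970, "zip": "45128"}
is_match = similarity_weight(a, b, field_weights) >= threshold
```

Note that under privacy-preserving conditions the comparison operates on encrypted values or Bloom-filter encodings, so "agreement" means equality of encodings (or bit-level similarity for Bloom filters), never of plaintext names.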
[Early warning on measles through the neural networks].
Yu, Bin; Ding, Chun; Wei, Shan-bo; Chen, Bang-hua; Liu, Pu-lin; Luo, Tong-yong; Wang, Jia-gang; Pan, Zhi-wei; Lu, Jun-an
2011-01-01
To discuss the effects of neural networks on early warning of measles. Based on available monthly and weekly measles reports from January 1986 to August 2006 in Wuhan city, a model was developed using neural networks to predict and analyze the prevalence and incidence of measles. When the dynamic time-series model was established with back-propagation (BP) networks consisting of two layers, if p was assigned as 9, the convergence speed was acceptable and the correlation coefficient was equal to 0.85. Monthly forecasting of specific values was more acceptable with the BP networks, but weekly forecasting of the classification was better under probabilistic neural networks (PNN). When data were sufficient for the purpose, early warning using the two-layer BP networks seemed more feasible; when data were not sufficient, PNN could be used for prediction. This method seems feasible for use in an early-warning system.
Arons, Alexander M M; Krabbe, Paul F M
2013-02-01
Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the area of health evaluation. They increasingly use discrete choice models based on random utility theory to derive values for healthcare goods or services. Recent attempts have been made to use discrete choice models as an alternative method to derive values for health states. In this article, various probabilistic choice models are described according to their underlying theory. A historical overview traces their development and applications in diverse fields. The discussion highlights some theoretical and technical aspects of the choice models and their similarity and dissimilarity. The objective of the article is to elucidate the position of each model and their applications for health-state valuation.
POPPER, a simple programming language for probabilistic semantic inference in medicine.
Robson, Barry
2015-01-01
Our previous reports described the use of the Hyperbolic Dirac Net (HDN) as a method for probabilistic inference from medical data, and a proposed probabilistic medical Semantic Web (SW) language, Q-UEL, to provide that data. Rather like a traditional Bayes Net, the HDN provided estimates of joint and conditional probabilities, and was static, with no need for evolution due to "reasoning". Use of the SW will require, however, (a) at least the semantic triple, with more elaborate relations than conditional ones, as seen in the use of most verbs and prepositions, and (b) rules for logical, grammatical, and definitional manipulation that can generate changes in the inference net. Here the simple POPPER language for medical inference is described. It can be written automatically by Q-UEL, or by hand. Based on studies with our medical students, it is believed that a tool like this may help in medical education and that a physician unfamiliar with SW science can understand it. It is here used to explore the considerable challenges of assigning probabilities, and not least what the meaning and utility of inference net evolution would be for a physician. Copyright © 2014 Elsevier Ltd. All rights reserved.
Error Discounting in Probabilistic Category Learning
Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.
2011-01-01
Some current theories of probabilistic categorization assume that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report two probabilistic-categorization experiments that investigated error discounting by shifting feedback probabilities to new values after different amounts of training. In both experiments, responding gradually became less sensitive to errors, and learning was slowed for some time after the feedback shift. Both results are indicative of error discounting. Quantitative modeling of the data revealed that adding a mechanism for error discounting significantly improved the fits of an exemplar-based and a rule-based associative learning model, as well as of a recency-based model of categorization. We conclude that error discounting is an important component of probabilistic learning. PMID:21355666
Technical Report 1205: A Simple Probabilistic Combat Model
2016-07-08
1. INTRODUCTION The Lanchester combat model is a simple way to assess the effects of quantity and quality...model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned...the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at either a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yersak, Alexander S., E-mail: alexander.yersak@colorado.edu; Lee, Yung-Cheng
Pinhole defects in atomic layer deposition (ALD) coatings were measured in an area of 30 cm² in an ALD reactor, and these defects were represented by a probabilistic cluster model instead of a single defect-density value (number of defects per area). With the probabilistic cluster model, the pinhole defects were simulated over a manufacturing-scale surface area of ~1 m². Large-area pinhole defect simulations were used to develop an improved and enhanced design method for ALD-based devices. A flexible thermal ground plane (FTGP) device requiring ALD hermetic coatings was used as an example. Using a single defect-density value, it was determined that for an application with operation temperatures higher than 60 °C, the FTGP device would not be possible. The new probabilistic cluster model shows that up to 40.3% of the FTGP devices would be acceptable. With this new approach, the manufacturing yield of ALD-enabled or other thin-film-based devices with different design configurations can be determined. This is important for guiding process optimization and control, and design for manufacturability.
Ciffroy, Philippe; Charlatchka, Rayna; Ferreira, Daniel; Marang, Laura
2013-07-01
The biotic ligand model (BLM) theoretically enables the derivation of environmental quality standards that are based on true bioavailable fractions of metals. Several physicochemical variables (especially pH, major cations, dissolved organic carbon, and dissolved metal concentrations) must, however, be assigned to run the BLM, but they are highly variable in time and space in natural systems. This article describes probabilistic approaches for integrating such variability during the derivation of risk indexes. To describe each variable using a probability density function (PDF), several methods were combined to 1) treat censored data (i.e., data below the limit of detection), 2) incorporate the uncertainty of the solid-to-liquid partitioning of metals, and 3) detect outliers. From a probabilistic perspective, 2 alternative approaches that are based on log-normal and Γ distributions were tested to estimate the probability of the predicted environmental concentration (PEC) exceeding the predicted non-effect concentration (PNEC), i.e., p(PEC/PNEC>1). The probabilistic approach was tested on 4 real-case studies based on Cu-related data collected from stations on the Loire and Moselle rivers. The approach described in this article is based on BLM tools that are freely available for end-users (i.e., the Bio-Met software) and on accessible statistical data treatments. This approach could be used by stakeholders who are involved in risk assessments of metals for improving site-specific studies. Copyright © 2013 SETAC.
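The exceedance probability p(PEC/PNEC > 1) described above can be estimated with a small Monte Carlo simulation once distributions have been fitted. This is a hedged sketch of the log-normal variant only; the distribution parameters below are invented, not the fitted Cu values from the Loire and Moselle stations.

```python
import math
import random

random.seed(1)
n = 200_000

# Invented ln-space parameters: PEC ~ LogNormal, PNEC ~ LogNormal.
pec_mu, pec_sigma = math.log(2.0), 0.6
pnec_mu, pnec_sigma = math.log(4.0), 0.4

# Count draws where the predicted environmental concentration exceeds
# the predicted no-effect concentration, i.e. PEC/PNEC > 1.
exceedances = sum(
    1 for _ in range(n)
    if random.lognormvariate(pec_mu, pec_sigma)
       > random.lognormvariate(pnec_mu, pnec_sigma)
)
risk_index = exceedances / n
```

Because the ratio of two independent log-normals is itself log-normal, this particular case also has a closed form; the Monte Carlo version is shown because it carries over unchanged to the gamma-distribution alternative tested in the study.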
Probabilistic analysis of a materially nonlinear structure
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.; Fossum, A. F.
1990-01-01
A probabilistic finite element program is used to perform probabilistic analysis of a materially nonlinear structure. The program used in this study is NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), under development at Southwest Research Institute. The cumulative distribution function (CDF) of the radial stress of a thick-walled cylinder under internal pressure is computed and compared with the analytical solution. In addition, sensitivity factors showing the relative importance of the input random variables are calculated. Significant plasticity is present in this problem and has a pronounced effect on the probabilistic results. The random input variables are the material yield stress and internal pressure with Weibull and normal distributions, respectively. The results verify the ability of NESSUS to compute the CDF and sensitivity factors of a materially nonlinear structure. In addition, the ability of the Advanced Mean Value (AMV) procedure to assess the probabilistic behavior of structures which exhibit a highly nonlinear response is shown. Thus, the AMV procedure can be applied with confidence to other structures which exhibit nonlinear behavior.
Toward a Probabilistic Phenological Model for Wheat Growing Degree Days (GDD)
NASA Astrophysics Data System (ADS)
Rahmani, E.; Hense, A.
2017-12-01
Are there deterministic relations between phenological and climate parameters? The answer is surely 'No'. This answer motivated us to approach the problem through probabilistic theories. Thus, we developed a probabilistic phenological model, which has the advantage of giving additional information in terms of uncertainty. To that aim, we turned to a statistical method named survival analysis. Survival analysis deals with death in biological organisms and failure in mechanical systems. In the survival analysis literature, death or failure is considered an event. By event, in this research, we mean the ripening date of wheat; we assume only one event in this special case. By time, we mean the growing duration from sowing to ripening, the lifetime of wheat, which is a function of GDD. To be more precise, we try to produce a probabilistic forecast of wheat ripening, with probability values between 0 and 1. Here, the survivor function gives the probability that wheat which has not yet ripened survives longer than a specific time, or survives to the end of its lifetime as a ripened crop. The survival function at each station is determined by fitting a normal distribution to the GDD as a function of growth duration. Verification of the models obtained is done using the CRPS skill score (CRPSS). The positive values of CRPSS indicate the large superiority of the probabilistic phenological survival model over the deterministic models. These results demonstrate that considering uncertainties in modeling is beneficial, meaningful, and necessary. We believe that probabilistic phenological models have the potential to help reduce the vulnerability of agricultural production systems to climate change, thereby increasing food security.
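The survivor-function idea above can be sketched as follows: fit a normal distribution to the GDD totals at which wheat ripened at a station, then read off S(t) = 1 - CDF(t), the probability the crop is not yet ripe at t accumulated growing degree days. The GDD values below are invented, not station data from the study.

```python
import math
import statistics

# Invented ripening GDD totals for one station.
gdd_at_ripening = [1480, 1520, 1555, 1600, 1495, 1530, 1570, 1510]
mu = statistics.mean(gdd_at_ripening)
sigma = statistics.stdev(gdd_at_ripening)

def survivor(t):
    """P(ripening GDD > t) under the fitted normal model."""
    z = (t - mu) / sigma
    # Normal CDF via the error function; survivor is its complement.
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_not_ripe = survivor(mu)  # at the fitted mean, survival is 0.5 by symmetry
```

The forecast is then inherently probabilistic: early in the season the survivor probability is near 1, and it decays toward 0 as accumulated GDD passes the fitted mean.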
Localization of the lumbar discs using machine learning and exact probabilistic inference.
Oktay, Ayse Betul; Akgul, Yusuf Sinan
2011-01-01
We propose a novel fully automatic approach to localize the lumbar intervertebral discs in MR images with a PHOG-based SVM and a probabilistic graphical model. At the local level, our method assigns a score to each pixel in the target image that indicates whether it is a disc center or not. At the global level, we define a chain-like graphical model that represents the lumbar intervertebral discs, and we use an exact inference algorithm to localize the discs. Our main contributions are the employment of the SVM with the PHOG-based descriptor, which is robust against variations of the discs, and a graphical model that reflects the linear nature of the vertebral column. Our inference algorithm runs in polynomial time and produces globally optimal results. The developed system is validated on a real spine MRI dataset and the final localization results are favorable compared to the results reported in the literature.
A Hough Transform Global Probabilistic Approach to Multiple-Subject Diffusion MRI Tractography
Aganj, Iman; Lenglet, Christophe; Jahanshad, Neda; Yacoub, Essa; Harel, Noam; Thompson, Paul M.; Sapiro, Guillermo
2011-01-01
A global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced in this work. The proposed framework tests candidate 3D curves in the volume, assigning to each one a score computed from the diffusion images, and then selects the curves with the highest scores as the potential anatomical connections. The algorithm avoids local minima by performing an exhaustive search at the desired resolution. The technique is easily extended to multiple subjects, considering a single representative volume where the registered high-angular resolution diffusion images (HARDI) from all the subjects are non-linearly combined, thereby obtaining population-representative tracts. The tractography algorithm is run only once for the multiple subjects, and no tract alignment is necessary. We present experimental results on HARDI volumes, ranging from simulated and 1.5T physical phantoms to 7T and 4T human brain and 7T monkey brain datasets. PMID:21376655
Maritime Threat Detection Using Probabilistic Graphical Models
2012-01-01
CRF, unlike an HMM, can represent local features, and does not require feature concatenation. MLNs: For MLNs, we used Alchemy (Alchemy 2011), an...open source statistical relational learning and probabilistic inferencing package. Alchemy supports generative and discriminative weight learning, and...that Alchemy creates a new formula for every possible combination of the values for a1 and a2 that fit the type specified in their predicate
The purpose of this SOP is to describe the procedures undertaken to calculate the ingestion exposure using composite food chemical residue values from the day of direct measurements. The calculation is based on the probabilistic approach. This SOP uses data that have been proper...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moses, Alan M.; Chiang, Derek Y.; Pollard, Daniel A.
2004-10-28
We introduce a method (MONKEY) to identify conserved transcription-factor binding sites in multispecies alignments. MONKEY employs probabilistic models of factor specificity and binding site evolution, on which basis we compute the likelihood that putative sites are conserved and assign statistical significance to each hit. Using genomes from the genus Saccharomyces, we illustrate how the significance of real sites increases with evolutionary distance and explore the relationship between conservation and function.
NASA Astrophysics Data System (ADS)
Shi, Lei; Wei, Jia-Hua; Li, Yun-Xia; Ma, Li-Hua; Xue, Yang; Luo, Jun-Wen
2017-04-01
We propose a novel scheme to probabilistically transmit an arbitrary unknown two-qubit quantum state via Positive Operator-Valued Measurement with the help of two partially entangled states. In this scheme, teleportation with two senders and two receivers can be realized when the information about the non-maximally entangled states is available only to the senders. Furthermore, the concrete implementation processes of this proposal are presented, and the classical communication cost and the success probability of our scheme are calculated. Supported by the National Natural Science Foundation of China under Grant Nos. 60974037, 61134008, 11074307, and 61273202
Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis
NASA Astrophysics Data System (ADS)
Wang, Weiguang; Fu, Jianyu
2018-02-01
Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining societal development. To quantify the predictability of water availability more accurately, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of Wang-Tang's equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability is conditional on the climatic setting. In comparison with the simple univariate distribution, the bivariate one produces a lower interquartile range on the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing the joint distribution by taking into account the dependence structure of parameter ω and NDVI, which can provide a more accurate probabilistic evaluation of water availability.
Discounting of food, sex, and money.
Holt, Daniel D; Newquist, Matthew H; Smits, Rochelle R; Tiry, Andrew M
2014-06-01
Discounting is a useful framework for understanding choice involving a range of delayed and probabilistic outcomes (e.g., money, food, drugs), but relatively few studies have examined how people discount other commodities (e.g., entertainment, sex). Using a novel discounting task, where the length of a line represented the value of an outcome and was adjusted using a staircase procedure, we replicated previous findings showing that individuals discount delayed and probabilistic outcomes in a manner well described by a hyperbola-like function. In addition, we found strong positive correlations between discounting rates of delayed, but not probabilistic, outcomes. This suggests that discounting of delayed outcomes may be relatively predictable across outcome types but that discounting of probabilistic outcomes may depend more on specific contexts. The generality of delay discounting and potential context dependence of probability discounting may provide important information regarding factors contributing to choice behavior.
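The hyperbola-like function referred to above is commonly written as V = A / (1 + kD)^s (the Myerson-Green form, which reduces to Mazur's simple hyperbola when s = 1). The sketch below uses invented parameter values, not estimates from the study.

```python
def discounted_value(amount, delay, k, s=1.0):
    """Subjective value of `amount` received after `delay` time units.

    k scales how steeply value falls with delay; s allows the
    hyperbola-like (rather than strictly hyperbolic) shape.
    """
    return amount / (1.0 + k * delay) ** s

# Invented example: $100 now versus $100 in 52 weeks, k = 0.05 per week.
v_now = discounted_value(100.0, 0.0, k=0.05)   # no delay: full value
v_year = discounted_value(100.0, 52.0, k=0.05)
```

The same functional form is fitted to probability discounting by replacing delay D with the odds against the outcome, which is one reason delay and probability discounting are analyzed within a single framework.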
Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events
NASA Astrophysics Data System (ADS)
DeChant, C. M.; Moradkhani, H.
2014-12-01
Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomena. In these forecasts, the probability of the event, over some lead time, is estimated based on model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requirement for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and an unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and an unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided by the use of the Poisson-Binomial distribution.
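The Poisson-binomial idea can be sketched as follows: if issued forecast probabilities p_i are reliable, the number of events that actually occur across n independent forecasts follows a Poisson-binomial distribution, whose pmf is computable exactly by dynamic programming. This is an illustrative sketch with invented probabilities, not the study's implementation.

```python
def poisson_binomial_pmf(probs):
    """Return pmf where pmf[k] = P(exactly k of the forecast events occur),
    given independent events with occurrence probabilities `probs`."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1.0 - p)   # this event does not occur
            nxt[k + 1] += mass * p       # this event occurs
        pmf = nxt
    return pmf

# If these three forecast probabilities are reliable, how surprising
# would it be to observe all three events?
pmf = poisson_binomial_pmf([0.1, 0.4, 0.7])
```

A hypothesis test then compares the observed event count against this exact distribution, rather than against binned frequencies as a reliability diagram does, which is where the extra statistical power comes from.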
Probabilistic Metrology Attains Macroscopic Cloning of Quantum Clocks
NASA Astrophysics Data System (ADS)
Gendra, B.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.; Chiribella, G.
2014-12-01
It has recently been shown that probabilistic protocols based on postselection boost the performance of quantum clock replication and phase estimation. Here we demonstrate that the improvements in these two tasks must match exactly in the macroscopic limit where the number of clones grows to infinity, preserving the equivalence between asymptotic cloning and state estimation for arbitrary values of the success probability. Remarkably, the cloning fidelity depends critically on the number of rationally independent eigenvalues of the clock Hamiltonian. We also prove that probabilistic metrology can simulate cloning in the macroscopic limit for arbitrary sets of states when the performance of the simulation is measured by testing small groups of clones.
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1993-01-01
A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results show that uncertainties range from five to thirty percent for the most optimistic case, assuming one hundred percent (perfect performance) for no error.
Probabilistic simulation of the human factor in structural reliability
NASA Astrophysics Data System (ADS)
Chamis, Christos C.; Singhal, Surendra N.
1994-09-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
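The MFIE appears in other NASA reports as a product of factor terms raised to empirical exponents; the exact form, limits, and exponents below are assumptions for illustration only, as is the Monte Carlo sweep over the primitive variables:

```python
import random

def mfie_ratio(values, references, limits, exponents):
    """One commonly published form of the multifactor interaction equation
    (assumed here): ratio = prod_i ((A_lim_i - A_i) / (A_lim_i - A_ref_i)) ** a_i."""
    r = 1.0
    for a, a_ref, a_lim, e in zip(values, references, limits, exponents):
        r *= ((a_lim - a) / (a_lim - a_ref)) ** e
    return r

def uncertainty_range(references, limits, exponents, spread=0.1, n=5000, seed=0):
    """Monte Carlo over the primitive variables: perturb each factor around
    its reference value and report the min/max of the resulting effect ratio."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n):
        vals = [a * (1 + rng.uniform(-spread, spread)) for a in references]
        ratios.append(mfie_ratio(vals, references, limits, exponents))
    return min(ratios), max(ratios)
```

At the reference values the ratio is exactly 1; the spread of the sampled ratios is the kind of uncertainty band the parametric studies describe.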
Singer, D.A.
2006-01-01
A probabilistic neural network is employed to classify 1610 mineral deposits into 18 types using tonnage, average Cu, Mo, Ag, Au, Zn, and Pb grades, and six generalized rock types. The purpose is to examine whether neural networks might serve for integrating geoscience information available in large mineral databases to classify sites by deposit type. Successful classifications of 805 deposits not used in training - 87% with grouped porphyry copper deposits - and the nature of misclassifications demonstrate the power of probabilistic neural networks and the value of quantitative mineral-deposit models. The results also suggest that neural networks can classify deposits as well as experienced economic geologists. © International Association for Mathematical Geology 2006.
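A probabilistic neural network in Specht's sense is essentially a per-class Gaussian kernel density estimate. The sketch below is a generic PNN on toy 2-D features; the smoothing parameter and example points are assumptions and do not reproduce the paper's grade/tonnage inputs:

```python
import math

def pnn_classify(x, training, sigma=1.0):
    """Score each class by the mean Gaussian kernel over its training
    patterns, then return the highest-scoring class."""
    sums, counts = {}, {}
    for features, label in training:
        d2 = sum((a - b) ** 2 for a, b in zip(x, features))
        sums[label] = sums.get(label, 0.0) + math.exp(-d2 / (2.0 * sigma ** 2))
        counts[label] = counts.get(label, 0) + 1
    return max(sums, key=lambda c: sums[c] / counts[c])

# Toy example: two deposit "types" separated in a 2-D feature space.
train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
         ((3.0, 3.0), "B"), ((3.1, 2.8), "B")]
```

Because each training pattern contributes a kernel, the classifier needs no iterative training, which is part of why PNNs suit large deposit databases.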
Probabilistic Simulation of the Human Factor in Structural Reliability
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Singhal, Surendra N.
1994-01-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
Catastrophe loss modelling of storm-surge flood risk in eastern England.
Muir Wood, Robert; Drayton, Michael; Berger, Agnete; Burgess, Paul; Wright, Tom
2005-06-15
Probabilistic catastrophe loss modelling techniques, comprising a large stochastic set of potential storm-surge flood events, each assigned an annual rate of occurrence, have been employed for quantifying risk in the coastal flood plain of eastern England. Based on the tracks of the causative extratropical cyclones, historical storm-surge events are categorized into three classes, with distinct windfields and surge geographies. Extreme combinations of "tide with surge" are then generated from an extreme value distribution developed for each class. Fragility curves are used to determine the probability and magnitude of breaching relative to water levels and wave action for each section of sea defence. Based on the time-history of water levels in the surge, and the simulated configuration of breaching, flow is time-stepped through the defences and propagated into the flood plain using a 50 m horizontal-resolution digital elevation model. Based on the values and locations of the building stock in the flood plain, losses are calculated using vulnerability functions linking flood depth and flood velocity to measures of property loss. The outputs from this model for a UK insurance industry portfolio include "loss exceedence probabilities" as well as "average annualized losses", which can be employed for calculating coastal flood risk premiums in each postcode.
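The two headline outputs can be illustrated with a toy stochastic event set. The Poisson occurrence assumption in the exceedance calculation is a common modelling convention, not something stated in the abstract, and the rates and losses are invented:

```python
import math

def average_annual_loss(event_set):
    """event_set: list of (annual_rate, loss). AAL is the rate-weighted sum."""
    return sum(rate * loss for rate, loss in event_set)

def annual_exceedance_probability(event_set, threshold):
    """Annual probability of at least one event with loss >= threshold,
    assuming independent Poisson event occurrences."""
    total_rate = sum(rate for rate, loss in event_set if loss >= threshold)
    return 1.0 - math.exp(-total_rate)

# Hypothetical event set: (events per year, portfolio loss in euros)
events = [(0.01, 5e9), (0.05, 1e9), (0.2, 1e8)]
aal = average_annual_loss(events)
```

Sweeping the threshold traces out the loss exceedance curve from which risk premiums per postcode could be derived.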
NASA Astrophysics Data System (ADS)
Okolelova, Ella; Shibaeva, Marina; Shalnev, Oleg
2018-03-01
The article analyses risks in high-rise construction in terms of investment value, taking into account the maximum probable loss in case of a risk event. The authors scrutinized the risks of high-rise construction in regions with various geographic, climatic and socio-economic conditions that may influence the project environment. A risk classification is presented in general terms, including aggregated characteristics of risks common to many regions. Cluster analysis tools, which allow generalized groups of risk to be considered according to their qualitative and quantitative features, were used to model the influence of the risk factors on the implementation of the investment project. For convenience of further calculations, each type of risk is assigned a separate code with the number of the cluster and the subtype of risk. This approach and the coding of risk factors make it possible to build a risk matrix, which greatly facilitates the task of determining the degree of impact of risks. The authors clarified and expanded the concept of price risk, defined as the expected value of the risk event. This extends the capabilities of the model, allowing estimation of an interval for the probability of occurrence and the use of other probabilistic methods of calculation.
Fusar-Poli, P; Schultze-Lutter, F
2016-02-01
Prediction of psychosis in patients at clinical high risk (CHR) has become a mainstream focus of clinical and research interest worldwide. When using CHR instruments for clinical purposes, the predicted outcome is only a probability; consequently, any therapeutic action following the assessment is based on probabilistic prognostic reasoning. Yet probabilistic reasoning makes considerable demands on clinicians. We provide here a scholarly practical guide summarising the key concepts to support clinicians with probabilistic prognostic reasoning in the CHR state. We review risk or cumulative incidence of psychosis, person-time rate of psychosis, Kaplan-Meier estimates of psychosis risk, measures of prognostic accuracy, sensitivity and specificity in receiver operating characteristic curves, positive and negative predictive values, Bayes' theorem, likelihood ratios, and the potentials and limits of real-life applications of prognostic probabilistic reasoning in the CHR state. Understanding basic measures used for prognostic probabilistic reasoning is a prerequisite for successfully implementing the early detection and prevention of psychosis in clinical practice. Future refinement of these measures for CHR patients may actually influence risk management, especially as regards initiating or withholding treatment. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
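Several of the measures the guide covers, predictive values, likelihood ratios, and Bayes' theorem, reduce to a few lines of arithmetic. The sensitivity, specificity, and prevalence below are illustrative numbers, not figures from the paper:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value via Bayes' theorem."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

def likelihood_ratios(sensitivity, specificity):
    """LR+ and LR-: how a positive/negative result shifts the pre-test odds."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Illustrative CHR-style numbers: sens 0.8, spec 0.9, 20% transition risk.
ppv, npv = predictive_values(0.8, 0.9, 0.2)
```

The dependence of PPV on prevalence is exactly why a CHR instrument validated in a specialized clinic can mislead in a lower-risk setting.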
Diagnostic Validity of an Automated Probabilistic Tractography in Amnestic Mild Cognitive Impairment
Jung, Won Sang; Um, Yoo Hyun; Kang, Dong Woo; Lee, Chang Uk; Woo, Young Sup; Bahk, Won-Myong
2018-01-01
Objective Although several prior works showed white matter (WM) integrity changes in amnestic mild cognitive impairment (aMCI) and Alzheimer's disease (AD), the diagnostic accuracy of WM integrity measurements using diffusion tensor imaging (DTI) in discriminating aMCI from normal controls remains unclear. The aim of this study is to explore the diagnostic validity of whole-brain automated probabilistic tractography in discriminating aMCI from normal controls. Methods One hundred two subjects (50 aMCI and 52 normal controls) were included and underwent DTI scans. Whole-brain WM tracts were reconstructed with automated probabilistic tractography. Fractional anisotropy (FA) and mean diffusivity (MD) values of memory-related WM tracts were measured and compared between the aMCI and normal control groups. In addition, the diagnostic validities of these WM tracts were evaluated. Results Decreased FA and increased MD values of memory-related WM tracts were observed in the aMCI group compared with the control group. Among the FA and MD values of each tract, the FA value of the left cingulum angular bundle showed the highest area under the curve (AUC), 0.85, with a sensitivity of 88.2% and a specificity of 76.9% in differentiating aMCI patients from control subjects. Furthermore, the combination of FA values of memory-related WM tracts showed an AUC of 0.98, a sensitivity of 96%, and a specificity of 94.2%. Conclusion Our results, with good diagnostic validity of WM integrity measurements, suggest that DTI might be a promising neuroimaging tool for early detection of aMCI and AD patients. PMID:29739127
Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng
2017-05-10
Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification, or their classification results are unreliable. The unreliable results are due to limitations in the existing methods, which either lack solid probability-based criteria to evaluate the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as a proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum level based on the lowest common ancestors of multiple database hits for each query sequence, and classification reliabilities are further evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific to different taxonomic groups; instead, only a reference database is required for aligning to the query sequences, making our method easily applicable to different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools, and we provide probability-based confidence scores to evaluate the reliability of our taxonomic classification assignments based on multiple database matches to query sequences.
Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA .
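The core idea, letting each database hit vote with a similarity-derived weight and reporting per-rank confidence, can be sketched independently of the actual BLCA code; the data structures and toy taxa below are assumptions:

```python
def weighted_rank_vote(hits):
    """hits: list of (taxonomy, weight), where taxonomy is a tuple ordered
    from phylum down to species. At each rank, sum the weights per taxon and
    report the winner together with its share of total weight as a crude
    confidence score."""
    total = sum(w for _, w in hits)
    n_ranks = len(hits[0][0])
    calls = []
    for r in range(n_ranks):
        scores = {}
        for taxonomy, w in hits:
            scores[taxonomy[r]] = scores.get(taxonomy[r], 0.0) + w
        best = max(scores, key=scores.get)
        calls.append((best, scores[best] / total))
    return calls

# Hypothetical hits for one query, already weighted by alignment similarity.
hits = [(("Firmicutes", "Bacillus", "B. subtilis"), 0.6),
        (("Firmicutes", "Bacillus", "B. licheniformis"), 0.3),
        (("Firmicutes", "Listeria", "L. innocua"), 0.1)]
```

Higher ranks accumulate more of the weight, so confidence naturally rises from species toward phylum, mirroring the lowest-common-ancestor behaviour the abstract describes.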
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value-assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
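The quoted components (3.7% for CRM470 itself, <0.8% added by value transfer) combine approximately in root-sum-of-squares fashion, which a Monte Carlo simulation makes easy to verify. The normality assumption, parameter names, and sample size below are mine, not the authors':

```python
import math
import random
import statistics

def assigned_value_cv(ref_cv, transfer_cv, n=40000, seed=1):
    """Simulate n value assignments: draw the reference value, then apply a
    multiplicative value-transfer error; return the relative standard
    uncertainty (CV) of the assigned value."""
    rng = random.Random(seed)
    vals = [(1.0 + rng.gauss(0.0, ref_cv)) * (1.0 + rng.gauss(0.0, transfer_cv))
            for _ in range(n)]
    return statistics.stdev(vals) / statistics.mean(vals)

cv = assigned_value_cv(0.037, 0.008)
# For small CVs this should approach sqrt(0.037**2 + 0.008**2) ~ 0.038.
```

Replacing the two Gaussian draws with a full formalized process model is what turns this toy into the kind of simulation the abstract describes.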
NASA Astrophysics Data System (ADS)
Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.
2017-04-01
Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by roughness coefficient values into hydraulic models used in flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM, minimising input data uncertainty and improving the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble-count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data from an extreme historical flash flood event are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
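Latin Hypercube Sampling of a roughness distribution, as described, takes one draw from each equal-probability stratum. The one-dimensional uniform version below is a generic sketch; the study fitted theoretical distributions to field-derived roughness values rather than using a uniform range:

```python
import random

def latin_hypercube_uniform(low, high, n, seed=0):
    """One draw from each of n equal-probability strata of U(low, high),
    shuffled so successive samples are not ordered."""
    rng = random.Random(seed)
    width = (high - low) / n
    samples = [low + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(samples)
    return samples

# e.g. 10 Manning's n values spread over an assumed 0.03-0.10 range
manning_n = latin_hypercube_uniform(0.03, 0.10, 10)
```

Compared with plain random sampling, the stratification guarantees coverage of the roughness range with far fewer hydraulic model runs.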
B-value and slip rate sensitivity analysis for PGA value in Lembang fault and Cimandiri fault area
NASA Astrophysics Data System (ADS)
Pratama, Cecep; Ito, Takeo; Meilano, Irwan; Nugraha, Andri Dian
2017-07-01
We examine the contributions of slip rate and b-value to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi and Bandung using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate is the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity. Uncertainty and coefficient of variation from slip rate and b-value in the Lembang and Cimandiri fault areas have been calculated. We observe that seismic hazard estimates are sensitive to fault slip rate and b-value, with uncertainty results of 0.25 g and 0.1-0.2 g, respectively. For specific sites, we found seismic hazard estimates of 0.49 ± 0.13 g with COV 27% and 0.39 ± 0.05 g with COV 13% for Sukabumi and Bandung, respectively.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
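With extreme-value theory, the design limit load is a chosen quantile of the largest-load distribution. The sketch below uses the Gumbel (type I) extreme-value distribution, a common but here assumed choice, with illustrative location/scale parameters:

```python
import math

def gumbel_quantile(mu, beta, p):
    """Inverse CDF of the Gumbel distribution F(x) = exp(-exp(-(x - mu)/beta)):
    the load with non-exceedance probability p over a mission."""
    return mu - beta * math.log(-math.log(p))

# Design limit load as the 99th-percentile mission maximum (illustrative).
design_limit = gumbel_quantile(mu=100.0, beta=8.0, p=0.99)
```

The parameters mu and beta would come from fitting mission maxima extracted from time-domain or frequency-domain load simulations.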
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Wei, E-mail: zhoux123@umn.edu
2013-06-15
We consider the value function of a stochastic optimal control of degenerate diffusion processes in a domain D. We study the smoothness of the value function, under the assumption of the non-degeneracy of the diffusion term along the normal to the boundary and an interior condition weaker than the non-degeneracy of the diffusion term. When the diffusion term, drift term, discount factor, running payoff and terminal payoff are all in the class C^{1,1}(D̄), the value function turns out to be the unique solution in the class C^{1,1}_loc(D) ∩ C^{0,1}(D̄) to the associated degenerate Bellman equation with Dirichlet boundary data. Our approach is probabilistic.
Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions
NASA Astrophysics Data System (ADS)
Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.
2014-12-01
The biovolume of animals has functioned as an important benchmark for measuring evolution throughout geologic time. In our project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to generate possible evolutionary trees of ostracod size. Using stratigraphic ranges for ostracods compiled from over 750 genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each timestep in our model, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized our model to generate neutral and directional changes in ostracod size to compare with the observed data. New sizes were chosen via a normal distribution: the neutral model selected size differentials centered on zero, allowing an equal chance of larger or smaller ostracods at each speciation, while the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that ostracod evolution has followed a model that directionally pushes mean ostracod size down rather than a neutral model. Our model was able to match the magnitude of the size decrease, although it produced a constant linear decrease while the actual data show a much more rapid initial decline followed by a constant size. This nuance in the observed trends ultimately suggests a more complex mechanism of size evolution. In conclusion, probabilistic methods can provide valuable insight into possible evolutionary mechanisms determining size evolution in ostracods.
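A minimal version of the stochastic branching model described can be written in a few lines. The rates, step count, and noise level below are assumptions for illustration, not the Treatise-derived values the authors used:

```python
import random

def mean_final_size(drift, n_steps=20, spec_rate=0.3, ext_rate=0.05,
                    noise=0.5, n_runs=300, seed=7):
    """Average (over independent runs) final mean log-size of a clade whose
    lineages speciate, go extinct, and whose daughters are drawn from a
    normal centred on the parent's size plus a drift term
    (drift = 0 is the neutral model; drift < 0 is the directional model)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_runs):
        sizes = [0.0]
        for _ in range(n_steps):
            nxt = []
            for s in sizes:
                if rng.random() < ext_rate:
                    continue                      # lineage goes extinct
                nxt.append(s)                     # lineage persists
                if rng.random() < spec_rate:
                    nxt.append(rng.gauss(s + drift, noise))  # daughter lineage
            sizes = nxt or [0.0]                  # crude re-seeding if clade dies
        finals.append(sum(sizes) / len(sizes))
    return sum(finals) / len(finals)

neutral = mean_final_size(drift=0.0)
directional = mean_final_size(drift=-0.3)
```

Comparing the neutral and directional runs against the observed size trajectory is exactly the model-selection exercise the abstract describes.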
2014-01-01
Background To undertake an economic evaluation of rivaroxaban relative to the standard of care for stroke prevention in patients with non-valvular atrial fibrillation (AF) in Greece. Methods An existing Markov model, designed to reflect the natural progression of AF patients through different health states in the course of three-month cycles, was adapted to the Greek setting. The analysis was undertaken from a payer perspective. Baseline event rates and efficacy data were obtained from the ROCKET-AF trial for rivaroxaban and vitamin-K antagonists (VKAs). Utility values for events were based on the literature. A treatment-related disutility of 0.05 was applied to the VKA arm. Costs assigned to each health state reflect the year 2013. An incremental cost-effectiveness ratio (ICER) was calculated where the outcomes were quality-adjusted life-years (QALYs) and life-years gained. Probabilistic analysis was undertaken to deal with uncertainty. The horizon of analysis was patient lifetime, and both costs and outcomes were discounted at 3.5%. Results Based on safety-on-treatment data, rivaroxaban was associated with a 0.22 increment in QALYs compared to VKA. The average total lifetime cost of rivaroxaban-treated patients was €239 lower compared to VKA. Rivaroxaban was associated with an additional drug acquisition cost (€4,033) and reduced monitoring cost (-€3,929). Therefore, rivaroxaban was a dominant alternative over VKA. Probabilistic analysis revealed that there is a 100% probability of rivaroxaban being cost-effective versus VKA at a willingness-to-pay threshold of €30,000/QALY gained. Conclusion Rivaroxaban may represent for payers a dominant option for the prevention of thromboembolic events in moderate-to-high-risk AF patients in Greece. PMID:24512351
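The dominance conclusion follows directly from the incremental arithmetic reported (€239 lower cost, +0.22 QALYs). A sketch of the standard ICER decision rule, using the abstract's own numbers:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio. 'dominant' when the new therapy
    is both cheaper and more effective (no ratio is then reported);
    'dominated' in the opposite corner; otherwise cost per QALY gained."""
    if delta_cost <= 0 and delta_qaly > 0:
        return "dominant"
    if delta_cost > 0 and delta_qaly <= 0:
        return "dominated"
    return delta_cost / delta_qaly

# Rivaroxaban vs VKA, per the abstract: -239 euros, +0.22 QALYs.
verdict = icer(-239.0, 0.22)
```

When a ratio is produced, it is compared against the willingness-to-pay threshold (€30,000/QALY in this study).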
A Unified Probabilistic Framework for Dose-Response Assessment of Human Health Effects.
Chiu, Weihsueh A; Slob, Wout
2015-12-01
When chemical health hazards have been identified, probabilistic dose-response assessment ("hazard characterization") quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. We developed a unified framework for probabilistic dose-response assessment. We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose-response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, "effect metrics" can be specified to define "toxicologically equivalent" sizes for this underlying individual response; and d) dose-response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose-response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Probabilistically derived exposure limits are based on estimating a "target human dose" (HDMI), which requires risk management-informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HDMI values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%-10% effect sizes. Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions.
Ambiguity and Uncertainty in Probabilistic Inference.
1983-09-01
Only OCR fragments of this abstract survive (DTIC report AD-A133 418, Center for Decision Research, University of Chicago; H. J. Einhorn et al., September 1983): subjects judged the likelihood that the majority or minority position was true; a wide range of values of n and p was sampled; and effects of second-order uncertainty had been demonstrated experimentally (Becker & Brownson, 1964; Yates & Zukowski, 1976).
Dynamic shaping of dopamine signals during probabilistic Pavlovian conditioning.
Hart, Andrew S; Clark, Jeremy J; Phillips, Paul E M
2015-01-01
Cue- and reward-evoked phasic dopamine activity during Pavlovian and operant conditioning paradigms is well correlated with reward-prediction errors from formal reinforcement learning models, which feature teaching signals in the form of discrepancies between actual and expected reward outcomes. Additionally, in learning tasks where conditioned cues probabilistically predict rewards, dopamine neurons show sustained cue-evoked responses that are correlated with the variance of reward and are maximal to cues predicting rewards with a probability of 0.5. Therefore, it has been suggested that sustained dopamine activity after cue presentation encodes the uncertainty of impending reward delivery. In the current study we examined the acquisition and maintenance of these neural correlates using fast-scan cyclic voltammetry in rats implanted with carbon fiber electrodes in the nucleus accumbens core during probabilistic Pavlovian conditioning. The advantage of this technique is that we can sample from the same animal and recording location throughout learning with single trial resolution. We report that dopamine release in the nucleus accumbens core contains correlates of both expected value and variance. A quantitative analysis of these signals throughout learning, and during the ongoing updating process after learning in probabilistic conditions, demonstrates that these correlates are dynamically encoded during these phases. Peak CS-evoked responses are correlated with expected value and predominate during early learning while a variance-correlated sustained CS signal develops during the post-asymptotic updating phase. Copyright © 2014 Elsevier Inc. All rights reserved.
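The claim that the sustained cue signal tracks reward variance, maximal for cues predicting reward at p = 0.5, is simple Bernoulli arithmetic; a sketch with an assumed unit reward magnitude:

```python
def bernoulli_reward_stats(p, magnitude=1.0):
    """Expected value and variance of a reward of fixed magnitude
    delivered with probability p."""
    ev = p * magnitude
    var = p * (1.0 - p) * magnitude ** 2
    return ev, var
```

Expected value rises monotonically with p, while variance is an inverted parabola peaking at p = 0.5, which is why the two dopamine correlates described in the abstract are dissociable.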
Transactional problem content in cost discounting: parallel effects for probability and delay.
Jones, Stephen; Oaksford, Mike
2011-05-01
Four experiments investigated the effects of transactional content on temporal and probabilistic discounting of costs. Kusev, van Schaik, Ayton, Dent, and Chater (2009) have shown that content other than gambles can alter decision-making behavior even when associated value and probabilities are held constant. Transactions were hypothesized to lead to similar effects because the cost to a purchaser always has a linked gain, the purchased commodity. Gain amount has opposite effects on delay and probabilistic discounting (e.g., Benzion, Rapoport, & Yagil, 1989; Green, Myerson, & Ostaszewski, 1999), a finding that is not consistent with descriptive decision theory (Kahneman & Tversky, 1979; Loewenstein & Prelec, 1992). However, little or no effect on discounting has been observed for losses or costs. Experiment 1, using transactions, showed parallel effects for temporal and probabilistic discounting: Smaller amounts were discounted more than large amounts. As the cost rises, people value the commodity more, and they consequently discount less. Experiment 2 ruled out a possible methodological cause for this effect. Experiment 3 replicated Experiment 1. Experiment 4, using gambles, showed no effect for temporal discounting, because of the absence of the linked gain, but the same effect for probabilistic discounting, because prospects implicitly introduce a linked gain (Green et al., 1999; Prelec & Loewenstein, 1991). As found by Kusev et al. (2009), these findings are not consistent with decision theory and suggest closer attention should be paid to the effects of content on decision making.
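Temporal and probability discounting are usually modelled with hyperbolic forms (e.g., Green & Myerson's formulations); the standard equations are sketched below with illustrative parameter values, since the experiments report effects on discounting rates rather than a specific model fit:

```python
def delay_discounted_value(amount, k, delay):
    """Hyperbolic delay discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def probability_discounted_value(amount, h, p):
    """Probability discounting over the odds against receipt,
    theta = (1 - p) / p: V = A / (1 + h * theta)."""
    return amount / (1.0 + h * (1.0 - p) / p)
```

In this framing, the "parallel effects" finding corresponds to the discounting parameters k and h both decreasing as the transaction amount grows.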
Quantum formalism for classical statistics
NASA Astrophysics Data System (ADS)
Wetterich, C.
2018-06-01
In static classical statistical systems the problem of information transport from a boundary to the bulk finds a simple description in terms of wave functions or density matrices. While the transfer matrix formalism is a type of Heisenberg picture for this problem, we develop here the associated Schrödinger picture that keeps track of the local probabilistic information. The transport of the probabilistic information between neighboring hypersurfaces obeys a linear evolution equation, and therefore the superposition principle for the possible solutions. Operators are associated to local observables, with rules for the computation of expectation values similar to quantum mechanics. We discuss how non-commutativity naturally arises in this setting. Also other features characteristic of quantum mechanics, such as complex structure, change of basis or symmetry transformations, can be found in classical statistics once formulated in terms of wave functions or density matrices. We construct for every quantum system an equivalent classical statistical system, such that time in quantum mechanics corresponds to the location of hypersurfaces in the classical probabilistic ensemble. For suitable choices of local observables in the classical statistical system one can, in principle, compute all expectation values and correlations of observables in the quantum system from the local probabilistic information of the associated classical statistical system. Realizing a static memory material as a quantum simulator for a given quantum system is not a matter of principle, but rather of practical simplicity.
An improved probabilistic account of counterfactual reasoning.
Lucas, Christopher G; Kemp, Charles
2015-10-01
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present 6 new experiments and analyze data from 4 experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
A Hough transform global probabilistic approach to multiple-subject diffusion MRI tractography.
Aganj, Iman; Lenglet, Christophe; Jahanshad, Neda; Yacoub, Essa; Harel, Noam; Thompson, Paul M; Sapiro, Guillermo
2011-08-01
A global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced in this work. The proposed framework tests candidate 3D curves in the volume, assigning to each one a score computed from the diffusion images, and then selects the curves with the highest scores as the potential anatomical connections. The algorithm avoids local minima by performing an exhaustive search at the desired resolution. The technique is easily extended to multiple subjects, considering a single representative volume where the registered high-angular resolution diffusion images (HARDI) from all the subjects are non-linearly combined, thereby obtaining population-representative tracts. The tractography algorithm is run only once for the multiple subjects, and no tract alignment is necessary. We present experimental results on HARDI volumes, ranging from simulated and 1.5T physical phantoms to 7T and 4T human brain and 7T monkey brain datasets. Copyright © 2011 Elsevier B.V. All rights reserved.
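The score-and-select idea can be sketched in a few lines (a toy grid with invented alignment scores, not real HARDI data): every candidate curve accumulates a global score over its voxels, and an exhaustive search over the candidate family keeps the top scorer.

```python
# Toy sketch of Hough-style global tract selection (hypothetical scores,
# not real diffusion data): score each candidate 3D polyline against the
# volume and keep the best-supported one.
import itertools

# Hypothetical 4x4x4 "diffusion alignment" volume: higher = better aligned.
volume = {(x, y, z): 1.0 if y == x and z == 0 else 0.1
          for x, y, z in itertools.product(range(4), repeat=3)}

def score(curve):
    # Global score: sum of per-voxel evidence along the whole candidate.
    return sum(volume[v] for v in curve)

# Exhaustive search over a small family of candidate polylines.
candidates = [
    [(0, 0, 0), (1, 1, 0), (2, 2, 0), (3, 3, 0)],  # follows the bright ridge
    [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)],
    [(0, 3, 1), (1, 2, 1), (2, 1, 1), (3, 0, 1)],
]
best = max(candidates, key=score)
```

Because the score is accumulated globally, a curve crossing a few low-evidence voxels can still win, which is how an exhaustive voting search avoids the local minima that trap step-by-step trackers.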
RaptorX server: a resource for template-based protein structure modeling.
Källberg, Morten; Margaryan, Gohar; Wang, Sheng; Ma, Jianzhu; Xu, Jinbo
2014-01-01
Assigning functional properties to a newly discovered protein is a key challenge in modern biology. To this end, computational modeling of the three-dimensional atomic arrangement of the amino acid chain is often crucial in determining the role of the protein in biological processes. We present a community-wide web-based protocol, RaptorX server (http://raptorx.uchicago.edu), for automated protein secondary structure prediction, template-based tertiary structure modeling, and probabilistic alignment sampling. Given a target sequence, RaptorX server is able to detect even remotely related template sequences by means of a novel nonlinear context-specific alignment potential and probabilistic consistency algorithm. Using the protocol presented here it is thus possible to obtain high-quality structural models for many target protein sequences when only distantly related protein domains have experimentally solved structures. At present, RaptorX server can perform secondary and tertiary structure prediction of a 200 amino acid target sequence in approximately 30 min.
An advanced probabilistic structural analysis method for implicit performance functions
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
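A minimal sketch of the mean-value/AMV idea follows (toy response function and input distributions; the actual implementation for implicit finite-element responses is far more general): linearize the response at the input means to locate the most probable point for each probability level, then re-evaluate the exact response there rather than keeping the linear second-moment estimate.

```python
# Illustrative sketch (not the authors' code): first-order advanced mean
# value (AMV) estimate of the distribution of a response Z(X) with
# independent normal inputs.
from statistics import NormalDist
import math

def amv_cdf_points(Z, mu, sigma, probs, h=1e-6):
    """Return (p, z_p) pairs approximating the distribution of Z."""
    # Mean-value linearization: central-difference gradient at the means.
    grad = []
    for i in range(len(mu)):
        x_hi = list(mu); x_hi[i] += h
        x_lo = list(mu); x_lo[i] -= h
        grad.append((Z(x_hi) - Z(x_lo)) / (2 * h))
    sigma_z = math.sqrt(sum((g * s) ** 2 for g, s in zip(grad, sigma)))
    # Unit sensitivity vector (direction of the most probable point).
    alpha = [g * s / sigma_z for g, s in zip(grad, sigma)]
    points = []
    for p in probs:
        beta = NormalDist().inv_cdf(p)
        # AMV correction: re-evaluate the exact Z at the linearized MPP
        # instead of keeping the linear (second-moment) estimate.
        x_star = [m + beta * a * s for m, a, s in zip(mu, alpha, sigma)]
        points.append((p, Z(x_star)))
    return points

# Toy nonlinear response: Z = X1**2 + X2
pts = amv_cdf_points(lambda x: x[0] ** 2 + x[1],
                     mu=[3.0, 1.0], sigma=[0.3, 0.2],
                     probs=[0.05, 0.5, 0.95])
```

The re-evaluation step is what distinguishes AMV from the plain mean-based, second-moment method: on a non-linear response the recovered percentiles are no longer symmetric about the median.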
Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge.
Lau, Jey Han; Clark, Alexander; Lappin, Shalom
2017-07-01
The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property, has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgments are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic. Copyright © 2016 Cognitive Science Society, Inc.
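One simple normalization of this kind can be sketched as follows (in the spirit of the SLOR measure used in this line of work; the log probabilities below are invented): subtract the unigram log probability and divide by sentence length, so a sentence is not penalized merely for being long or containing rare words.

```python
# SLOR-style acceptability measure (illustrative numbers, not model
# output): (log p_model(sentence) - sum of unigram log probs) / length.
def slor(logp_model, unigram_logps):
    return (logp_model - sum(unigram_logps)) / len(unigram_logps)

# A short sentence of rare words vs. a longer sentence of frequent words:
# raw log probability prefers the short one (-14 > -20), but the
# normalized score reverses the ranking.
short_score = slor(logp_model=-14.0, unigram_logps=[-7.0, -6.0])
long_score = slor(logp_model=-20.0, unigram_logps=[-3.0] * 6)
```

The reversal illustrates why a normalized measure can correlate with human acceptability even when raw sentence probability does not.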
A novel probabilistic framework for event-based speech recognition
NASA Astrophysics Data System (ADS)
Juneja, Amit; Espy-Wilson, Carol
2003-10-01
One of the reasons for unsatisfactory performance of the state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR based on the idea of the representation of speech sounds by bundles of binary-valued articulatory phonetic features is proposed. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of the manner phonetic features (syllabic, sonorant, and continuant) and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.
Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.
Pecevski, Dejan; Maass, Wolfgang
2016-01-01
Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.
A Unified Probabilistic Framework for Dose–Response Assessment of Human Health Effects
Slob, Wout
2015-01-01
Background: When chemical health hazards have been identified, probabilistic dose–response assessment ("hazard characterization") quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. Objectives: We developed a unified framework for probabilistic dose–response assessment. Methods: We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose–response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, "effect metrics" can be specified to define "toxicologically equivalent" sizes for this underlying individual response; and d) dose–response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose–response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Results: Probabilistically derived exposure limits are based on estimating a "target human dose" (HDMI), which requires risk management–informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HDMI values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%–10% effect sizes. Conclusions: Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions.
Citation: Chiu WA, Slob W. 2015. A unified probabilistic framework for dose–response assessment of human health effects. Environ Health Perspect 123:1241–1254; http://dx.doi.org/10.1289/ehp.1409385. PMID: 26006063
Middlebrooks, E H; Tuna, I S; Grewal, S S; Almeida, L; Heckman, M G; Lesser, E R; Foote, K D; Okun, M S; Holanda, V M
2018-06-01
Although globus pallidus internus deep brain stimulation is a widely accepted treatment for Parkinson disease, there is persistent variability in outcomes that is not yet fully understood. In this pilot study, we aimed to investigate the potential role of globus pallidus internus segmentation using probabilistic tractography as a supplement to traditional targeting methods. Eleven patients undergoing globus pallidus internus deep brain stimulation were included in this retrospective analysis. Using multidirection diffusion-weighted MR imaging, we performed probabilistic tractography at all individual globus pallidus internus voxels. Each globus pallidus internus voxel was then assigned to the 1 ROI with the greatest number of propagated paths. On the basis of deep brain stimulation programming settings, the volume of tissue activated was generated for each patient using a finite element method solution. For each patient, the volume of tissue activated within each of the 10 segmented globus pallidus internus regions was calculated and examined for association with a change in the Unified Parkinson Disease Rating Scale, Part III score before and after treatment. Increasing volume of tissue activated was most strongly correlated with a change in the Unified Parkinson Disease Rating Scale, Part III score for the primary motor region (Spearman r = 0.74, P = .010), followed by the supplementary motor area/premotor cortex (Spearman r = 0.47, P = .15). In this pilot study, we assessed a novel method of segmentation of the globus pallidus internus based on probabilistic tractography as a supplement to traditional targeting methods. Our results suggest that our method may be an independent predictor of deep brain stimulation outcome, and evaluation of a larger cohort or prospective study is warranted to validate these findings. © 2018 by American Journal of Neuroradiology.
Accounting for Uncertainties in Strengths of SiC MEMS Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Evans, Laura; Beheim, Glen; Trapp, Mark; Jadaan, Osama; Sharpe, William N., Jr.
2007-01-01
A methodology has been devised for accounting for uncertainties in the strengths of silicon carbide structural components of microelectromechanical systems (MEMS). The methodology enables prediction of the probabilistic strengths of complexly shaped MEMS parts using data from tests of simple specimens. This methodology is intended to serve as a part of a rational basis for designing SiC MEMS, supplementing methodologies that have been borrowed from the art of designing macroscopic brittle material structures. The need for this or a similar methodology arises as a consequence of the fundamental nature of MEMS and the brittle silicon-based materials of which they are typically fabricated. When tested to fracture, MEMS and structural components thereof show wide part-to-part scatter in strength. The methodology involves the use of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) software in conjunction with the ANSYS Probabilistic Design System (PDS) software to simulate or predict the strength responses of brittle material components while simultaneously accounting for the effects of variability of geometrical features on the strength responses. As such, the methodology involves the use of an extended version of the ANSYS/CARES/PDS software system described in Probabilistic Prediction of Lifetimes of Ceramic Parts (LEW-17682-1/4-1), Software Tech Briefs supplement to NASA Tech Briefs, Vol. 30, No. 9 (September 2006), page 10. The ANSYS PDS software enables the ANSYS finite-element-analysis program to account for uncertainty in the design-and-analysis process. The ANSYS PDS software accounts for uncertainty in material properties, dimensions, and loading by assigning probabilistic distributions to user-specified model parameters and performing simulations using various sampling techniques.
An alternative to Rasch analysis using triadic comparisons and multi-dimensional scaling
NASA Astrophysics Data System (ADS)
Bradley, C.; Massof, R. W.
2016-11-01
Rasch analysis is a principled approach for estimating the magnitude of some shared property of a set of items when a group of people assign ordinal ratings to them. In the general case, Rasch analysis not only estimates person and item measures on the same invariant scale, but also estimates the average thresholds used by the population to define rating categories. However, Rasch analysis fails when there is insufficient variance in the observed responses because it assumes a probabilistic relationship between person measures, item measures and the rating assigned by a person to an item. When only a single person is rating all items, there may be cases where the person assigns the same rating to many items no matter how many times he rates them. We introduce an alternative to Rasch analysis for precisely these situations. Our approach leverages multi-dimensional scaling (MDS) and requires only rank orderings of items and rank orderings of pairs of distances between items to work. Simulations show one variant of this approach (triadic comparisons with non-metric MDS) provides highly accurate estimates of item measures in realistic situations.
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
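The generative step described above can be sketched as follows (illustrative mixer values and assignment probabilities, not fitted parameters): each input is a gaussian sample multiplied by a mixer drawn probabilistically from a small set, which produces the heavy-tailed statistics typical of filter responses to images.

```python
# Minimal generative sketch of a gaussian scale mixture with probabilistic
# mixer assignment (hypothetical parameters).
import random

random.seed(0)
mixer_values = [0.5, 2.0, 8.0]   # possible mixer magnitudes
mixer_probs = [0.6, 0.3, 0.1]    # probabilistic assignment distribution

def sample_input():
    m = random.choices(mixer_values, weights=mixer_probs)[0]
    g = random.gauss(0.0, 1.0)   # local gaussian component
    return m * g                 # generated filter response

samples = [sample_input() for _ in range(10000)]
m2 = sum(s * s for s in samples) / len(samples)
m4 = sum(s ** 4 for s in samples) / len(samples)
kurtosis = m4 / m2 ** 2          # equals 3.0 for a pure gaussian
```

A pure gaussian has kurtosis 3; mixing several scales pushes the kurtosis well above 3, reproducing the sparse, heavy-tailed marginals observed for natural-image filter outputs.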
Probabilistic forecasting of extreme weather events based on extreme value theory
NASA Astrophysics Data System (ADS)
Van De Vyver, Hans; Van Schaeybroeck, Bert
2016-04-01
Extreme events in weather and climate, such as high wind gusts, heavy precipitation or extreme temperatures, are commonly associated with high impacts on both environment and society. Forecasting extreme weather events is difficult, and very high-resolution models are needed to describe extreme weather phenomena explicitly. A prediction system for such events should therefore preferably be probabilistic in nature. Probabilistic forecasts and state estimations are nowadays common in the numerical weather prediction community. In this work, we develop a new probabilistic framework based on extreme value theory that aims to provide early warnings up to several days in advance. We consider pairs (X, Y) of extreme events, where X represents a deterministic forecast and Y the observation variable (for instance, wind speed). More specifically, two problems are addressed: (1) Given a high forecast X = x_0, what is the probability that Y > y? In other words, provide inference on the conditional probability Pr{Y > y | X = x_0}. (2) Given a probabilistic model for Problem 1, what is the impact on the verification analysis of extreme events? These problems can be solved with bivariate extremes (Coles, 2001) and the verification analysis of Ferro (2007). We apply the parametric model of Ramos and Ledford (2009) for bivariate tail estimation of the pair (X, Y). The model accommodates different types of extremal dependence and asymmetry within a parsimonious representation. Results are presented using the ensemble reforecast system of the European Centre for Medium-Range Weather Forecasts (Hagedorn, 2008).
References: Coles, S. (2001) An Introduction to Statistical Modelling of Extreme Values. Springer-Verlag. Ferro, C.A.T. (2007) A probability model for verifying deterministic forecasts of extreme events. Wea. Forecasting 22, 1089-1100. Hagedorn, R. (2008) Using the ECMWF reforecast dataset to calibrate EPS forecasts. ECMWF Newsletter 117, 8-13. Ramos, A., Ledford, A. (2009) A new class of models for bivariate joint tails. J. R. Statist. Soc. B 71, 219-241.
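Problem 1 can be illustrated empirically (synthetic forecast-observation pairs, not the paper's parametric bivariate tail model): estimate the conditional exceedance probability by conditioning on forecast exceedances and compare it with the marginal rate.

```python
# Empirical sketch of Pr(Y > y | X > x) for a forecast X and an
# observation Y (synthetic dependent pairs, invented parameters).
import random

random.seed(1)
pairs = []
for _ in range(50000):
    common = random.gauss(0.0, 1.0)        # shared signal -> dependence
    x = common + random.gauss(0.0, 0.5)    # deterministic forecast
    y = common + random.gauss(0.0, 0.5)    # observed variable
    pairs.append((x, y))

def cond_exceed(pairs, x_thr, y_thr):
    hits = [(x, y) for x, y in pairs if x > x_thr]
    return sum(y > y_thr for _, y in hits) / len(hits)

p_cond = cond_exceed(pairs, 1.5, 1.5)                     # conditional
p_marg = sum(y > 1.5 for _, y in pairs) / len(pairs)      # marginal
```

With dependent extremes the conditional exceedance probability sits far above the marginal rate, which is exactly the information an early-warning system needs; the parametric model in the paper replaces this naive empirical estimate in the data-sparse tail.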
Vander Zanden, Hannah B.; Tucker, Anton D.; Hart, Kristen M.; Lamont, Margaret M.; Fujisaki, Ikuko; Addison, David S.; Mansfield, Katherine L.; Phillips, Katrina F.; Wunder, Michael B.; Bowen, Gabriel J.; Pajuelo, Mariela; Bolten, Alan B.; Bjorndal, Karen A.
2015-01-01
Stable isotope analysis is a useful tool to track animal movements in both terrestrial and marine environments. These intrinsic markers are assimilated through the diet and may exhibit spatial gradients as a result of biogeochemical processes at the base of the food web. In the marine environment, maps to predict the spatial distribution of stable isotopes are limited, and thus determining geographic origin has been reliant upon integrating satellite telemetry and stable isotope data. Migratory sea turtles regularly move between foraging and reproductive areas. Whereas most nesting populations can be easily accessed and regularly monitored, little is known about the demographic trends in foraging populations. The purpose of the present study was to examine migration patterns of loggerhead nesting aggregations in the Gulf of Mexico (GoM), where sea turtles have been historically understudied. Two methods of geographic assignment using stable isotope values in known-origin samples from satellite telemetry were compared: 1) a nominal approach through discriminant analysis and 2) a novel continuous-surface approach using bivariate carbon and nitrogen isoscapes (isotopic landscapes) developed for this study. Tissue samples for stable isotope analysis were obtained from 60 satellite-tracked individuals at five nesting beaches within the GoM. Both methodological approaches for assignment resulted in high accuracy of foraging area determination, though each has advantages and disadvantages. The nominal approach is more appropriate when defined boundaries are necessary, but up to 42% of the individuals could not be considered in this approach. All individuals can be included in the continuous-surface approach, and individual results can be aggregated to identify geographic hotspots of foraging area use, though the accuracy rate was lower than nominal assignment. 
The methodological validation provides a foundation for future sea turtle studies in the region to inexpensively determine geographic origin for large numbers of untracked individuals. Regular monitoring of sea turtle nesting aggregations with stable isotope sampling can be used to fill critical data gaps regarding habitat use and migration patterns. Probabilistic assignment to origin with isoscapes has not been previously used in the marine environment, but the methods presented here could also be applied to other migratory marine species.
Peak Dose Assessment for Proposed DOE-PPPO Authorized Limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maldonado, Delis
2012-06-01
The Oak Ridge Institute for Science and Education (ORISE), a U.S. Department of Energy (DOE) prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct a peak dose assessment in support of the Authorized Limits Request for Solid Waste Disposal at Landfill C-746-U at the Paducah Gaseous Diffusion Plant (DOE-PPPO 2011a). The peak doses were calculated based on the DOE-PPPO Proposed Single Radionuclides Soil Guidelines and the DOE-PPPO Proposed Authorized Limits (AL) Volumetric Concentrations available in DOE-PPPO 2011a. This work is provided as an appendix to the Dose Modeling Evaluations and Technical Support Document for the Authorized Limits Request for the C-746-U Landfill at the Paducah Gaseous Diffusion Plant, Paducah, Kentucky (ORISE 2012). The receptors evaluated in ORISE 2012 were selected by the DOE-PPPO for the additional peak dose evaluations. These receptors included a Landfill Worker, Trespasser, Resident Farmer (onsite), Resident Gardener, Recreational User, Outdoor Worker and an Offsite Resident Farmer. The RESRAD (Version 6.5) and RESRAD-OFFSITE (Version 2.5) computer codes were used for the peak dose assessments. Deterministic peak dose assessments were performed for all the receptors and a probabilistic dose assessment was performed only for the Offsite Resident Farmer at the request of the DOE-PPPO. In a deterministic analysis, a single input value results in a single output value. In other words, a deterministic analysis uses single parameter values for every variable in the code. By contrast, a probabilistic approach assigns parameter ranges to certain variables, and the code randomly selects the values for each variable from the parameter range each time it calculates the dose (NRC 2006). The receptor scenarios, computer codes and parameter input files were previously used in ORISE 2012. A few modifications were made to the parameter input files as appropriate for this effort.
Some of these changes included increasing the time horizon beyond 1,050 years (yr), and using the radionuclide concentrations provided by the DOE-PPPO as inputs into the codes. The deterministic peak doses were evaluated within time horizons of 70 yr (for the Landfill Worker and Trespasser), 1,050 yr, 10,000 yr and 100,000 yr (for the Resident Farmer [onsite], Resident Gardener, Recreational User, Outdoor Worker and Offsite Resident Farmer) at the request of the DOE-PPPO. The time horizons of 10,000 yr and 100,000 yr were used at the request of the DOE-PPPO for informational purposes only. The probabilistic peak of the mean dose assessment was performed for the Offsite Resident Farmer using Technetium-99 (Tc-99) and a time horizon of 1,050 yr. The results of the deterministic analyses indicate that among all receptors and time horizons evaluated, the highest projected dose, 2,700 mrem/yr, occurred for the Resident Farmer (onsite) at 12,773 yr. The exposure pathways contributing to the peak dose are ingestion of plants, external gamma, and ingestion of milk, meat and soil. However, this receptor is considered an implausible receptor. The only receptors considered plausible are the Landfill Worker, Recreational User, Outdoor Worker and the Offsite Resident Farmer. The maximum projected dose among the plausible receptors is 220 mrem/yr for the Outdoor Worker and it occurs at 19,045 yr. The exposure pathways contributing to the dose for this receptor are external gamma and soil ingestion. The results of the probabilistic peak of the mean dose analysis for the Offsite Resident Farmer indicate that the average (arithmetic mean) of the peak of the mean doses for this receptor is 0.98 mrem/yr and it occurs at 1,050 yr. This dose corresponds to Tc-99 within the time horizon of 1,050 yr.
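The deterministic-versus-probabilistic contrast described in this record can be sketched generically (a made-up linear dose model and parameter ranges, not the RESRAD inputs): a deterministic run uses one value per parameter and returns one dose, while a probabilistic run samples each parameter from its assigned range and returns a distribution of doses.

```python
# Sketch of deterministic vs. probabilistic dose analysis with a
# hypothetical linear dose model (mrem/yr); parameters are invented.
import random

def dose(concentration, intake_rate, dose_factor):
    return concentration * intake_rate * dose_factor

# Deterministic analysis: single inputs -> single output.
d_det = dose(2.0, 0.5, 1.2)

# Probabilistic analysis: uniform parameter ranges -> output distribution.
random.seed(42)
doses = [dose(random.uniform(1.0, 3.0),
              random.uniform(0.3, 0.7),
              random.uniform(0.8, 1.6))
         for _ in range(5000)]
mean_dose = sum(doses) / len(doses)
```

The probabilistic run's spread (its minimum, mean, and peak-of-the-mean statistics) is what lets an assessment report a dose with quantified parameter uncertainty instead of a single point value.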
Mangado, Nerea; Pons-Prats, Jordi; Coma, Martí; Mistrík, Pavel; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Á
2018-01-01
Cochlear implantation (CI) is a complex surgical procedure that restores hearing in patients with severe deafness. The successful outcome of the implanted device relies on a group of factors, some of them unpredictable or difficult to control. Uncertainties on the electrode array position and the electrical properties of the bone make it difficult to accurately compute the current propagation delivered by the implant and the resulting neural activation. In this context, we use uncertainty quantification methods to explore how these uncertainties propagate through all the stages of CI computational simulations. To this end, we employ an automatic framework that spans from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation. To estimate the confidence intervals of the simulated neural response, we propose two approaches. First, we encode the variability of the cochlear morphology among the population through a statistical shape model. This allows us to generate a population of virtual patients using Monte Carlo sampling and to assign to each of them a set of parameter values according to a statistical distribution. The framework is implemented and parallelized in a High Throughput Computing environment that enables us to maximize the available computing resources. Second, we perform a patient-specific study to evaluate the computed neural response to seek the optimal post-implantation stimulus levels. Considering a single cochlear morphology, the uncertainty in tissue electrical resistivity and surgical insertion parameters is propagated using the Probabilistic Collocation method, which reduces the number of samples to evaluate. Results show that bone resistivity has the highest influence on CI outcomes. In conjunction with the variability of the cochlear length, worst outcomes are obtained for small cochleae with high resistivity values.
However, the effect of the surgical insertion length on the CI outcomes could not be clearly observed, since its impact may be concealed by the other considered parameters. Whereas the Monte Carlo approach implies a high computational cost, Probabilistic Collocation presents a suitable trade-off between precision and computational time. Results suggest that the proposed framework has great potential to help in both surgical planning decisions and in the audiological setting process.
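The virtual-patient Monte Carlo step described above can be sketched in a few lines. This is a minimal illustration, not the authors' framework: the surrogate `neural_response` function and both parameter distributions are invented stand-ins for the full finite element CI simulation, chosen only to mirror the reported trend of worse outcomes for high bone resistivity.

```python
import random
import statistics

def neural_response(bone_resistivity, cochlear_length):
    # Hypothetical surrogate for the finite element CI simulation:
    # lower response for higher bone resistivity, per the abstract's trend.
    return 100.0 / (bone_resistivity * 0.01 + 1.0) + cochlear_length

random.seed(0)
samples = []
for _ in range(10_000):  # population of virtual patients (Monte Carlo sampling)
    resistivity = random.gauss(10_000.0, 2_000.0)  # ohm*cm, assumed distribution
    length = random.gauss(33.0, 2.0)               # mm, assumed distribution
    samples.append(neural_response(resistivity, length))

samples.sort()
lo = samples[int(0.025 * len(samples))]  # empirical 95% confidence interval
hi = samples[int(0.975 * len(samples))]
mean = statistics.mean(samples)
```

With real simulations in place of the surrogate, each draw would be one High Throughput Computing job, and the interval (`lo`, `hi`) would bound the simulated neural response.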
Nadal, Martí; Kumar, Vikas; Schuhmacher, Marta; Domingo, José L
2008-04-01
Recently, we developed a GIS-Integrated Integral Risk Index (IRI) to assess human health risks in areas with presence of environmental pollutants. Contaminants were previously ranked by applying a self-organizing map (SOM) to their characteristics of persistence, bioaccumulation, and toxicity in order to obtain the Hazard Index (HI). In the present study, the original IRI was substantially improved by allowing the entrance of probabilistic data. A neuroprobabilistic HI was developed by combining SOM and Monte Carlo analysis. In general terms, the deterministic and probabilistic HIs followed a similar pattern: polychlorinated biphenyls (PCBs) and light polycyclic aromatic hydrocarbons (PAHs) were the pollutants showing the highest and lowest values of HI, respectively. However, the bioaccumulation value of heavy metals notably increased after considering a probability density function to describe the bioaccumulation factor. To check its applicability, a case study was investigated. The probabilistic integral risk was calculated in the chemical/petrochemical industrial area of Tarragona (Catalonia, Spain), where an environmental program has been carried out since 2002. The risk change between 2002 and 2005 was evaluated on the basis of probabilistic data on the levels of various pollutants in soils. The results indicated that the risks of the chemicals under study did not follow a homogeneous tendency. However, the current levels of pollution do not represent a relevant source of health risks for the local population. Moreover, the neuroprobabilistic HI seems to be an adequate tool for risk assessment processes.
A human reliability based usability evaluation method for safety-critical software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, R. L.; Tran, T. Q.; Gertman, D. I.
2006-07-01
Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with that of the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis to heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability. (authors)
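The UEP idea, multiplying a nominal error probability by per-heuristic modifiers in the style of SPAR-H performance shaping factors, can be sketched as follows. The heuristic names, multiplier values, and nominal HEP below are hypothetical, not values from the method.

```python
NOMINAL_HEP = 0.01  # assumed baseline human error probability

# Hypothetical heuristic findings mapped to SPAR-H-style multipliers
# (values > 1 degrade performance; 1.0 is nominal).
findings = {
    "visibility of system status": 2.0,
    "error prevention": 5.0,
    "consistency and standards": 1.0,
}

def usability_error_probability(nominal, multipliers):
    # Multiply the nominal HEP by every heuristic's modifier, capped at 1.0.
    uep = nominal
    for m in multipliers:
        uep *= m
    return min(uep, 1.0)

uep = usability_error_probability(NOMINAL_HEP, findings.values())
# Larger multipliers flag the issues needing the most immediate fixes.
priority = sorted(findings, key=findings.get, reverse=True)
```

Ranking by multiplier reproduces the prioritization behavior the abstract describes: the resulting UEP is not a literal error probability, but it orders issues quantitatively.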
Bryan, Brett Anthony; Raymond, Christopher Mark; Crossman, Neville David; King, Darran
2011-02-01
Consideration of the social values people assign to relatively undisturbed native ecosystems is critical for the success of science-based conservation plans. We used an interview process to identify and map social values assigned to 31 ecosystem services provided by natural areas in an agricultural landscape in southern Australia. We then modeled the spatial distribution of 12 components of ecological value commonly used in setting spatial conservation priorities. We used the analytical hierarchy process to weight these components and used multiattribute utility theory to combine them into a single spatial layer of ecological value. Social values assigned to natural areas were negatively correlated with ecological values overall, but were positively correlated with some components of ecological value. In terms of the spatial distribution of values, people valued protected areas, whereas those natural areas underrepresented in the reserve system were of higher ecological value. The habitats of threatened animal species were assigned both high ecological value and high social value. Only small areas were assigned both high ecological value and high social value in the study area, whereas large areas of high ecological value were of low social value, and vice versa. We used the assigned ecological and social values to identify different conservation strategies (e.g., information sharing, community engagement, incentive payments) that may be effective for specific areas. We suggest that consideration of both ecological and social values in selection of conservation strategies can enhance the success of science-based conservation planning. ©2010 Society for Conservation Biology.
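The weighting-and-combination step (AHP-derived weights feeding an additive multiattribute utility) can be illustrated with a toy example for one natural area. The component names, scores, and weights below are assumed for illustration, not taken from the study.

```python
# Hypothetical component scores for one natural area, each normalized to [0, 1].
components = {"rarity": 0.8, "connectivity": 0.5, "threatened_habitat": 0.9}
# AHP-derived weights (assumed here); they must sum to 1.
weights = {"rarity": 0.5, "connectivity": 0.2, "threatened_habitat": 0.3}

def ecological_value(scores, w):
    # Additive multiattribute utility: weighted sum of normalized components.
    assert abs(sum(w.values()) - 1.0) < 1e-9
    return sum(scores[k] * w[k] for k in scores)

value = ecological_value(components, weights)
```

Applied per map cell, this collapses the twelve component layers into the single spatial layer of ecological value that is then compared against the mapped social values.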
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
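The posterior-weighted μ-map construction can be sketched for a single voxel: an atlas prior over tissue classes is combined with an MR intensity likelihood via Bayes' rule, and the posterior probabilities weight the class attenuation coefficients. The priors, likelihoods, and coefficient values below are illustrative assumptions, not the paper's numbers.

```python
# Assumed class-specific linear attenuation coefficients (1/cm) at 511 keV.
MU = {"air": 0.0, "soft": 0.096, "bone": 0.151}

def posterior_weights(prior, likelihood):
    # Bayes' rule per voxel: posterior(class) ∝ prior(class) * likelihood(class).
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

def continuous_mu(prior, likelihood):
    # Continuous-valued mu: posterior-probability-weighted mix of class mus.
    post = posterior_weights(prior, likelihood)
    return sum(post[c] * MU[c] for c in MU)

# One hypothetical voxel: the atlas favors bone, the MR intensity favors soft tissue.
prior = {"air": 0.05, "soft": 0.35, "bone": 0.60}
likelihood = {"air": 0.01, "soft": 0.70, "bone": 0.29}
mu = continuous_mu(prior, likelihood)
```

The resulting μ falls between the pure soft-tissue and bone values, which is exactly what makes the map continuous-valued rather than a hard three-class segmentation.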
NASA Astrophysics Data System (ADS)
Ji, S.; Yuan, X.
2016-06-01
A generic probabilistic model, built on Bayes' rule and the Markov assumption, is introduced to integrate the process of mobile platform localization with optical sensors. From it, three relatively independent solutions, bundle adjustment, Kalman filtering, and particle filtering, are deduced under different additional restrictions. We aim to show, first, that Kalman filtering may be a better initial-value supplier for bundle adjustment than traditional relative orientation in irregular strips and networks or when tie-point extraction fails. Second, in highly noisy conditions, particle filtering can act as a bridge for gap binding when a large number of gross errors would cause Kalman filtering or bundle adjustment to fail. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, safeguard a global and static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation lies in the integrated processing of stochastic errors and gross errors in sensor observations, and in the integration of the three most widely used solutions, bundle adjustment, Kalman filtering, and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to support these claims.
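As a concrete instance of the Bayes/Markov recursion these filters share, here is a minimal one-dimensional Kalman filter under a random-walk state model; the process and measurement noise values are assumed for illustration and are unrelated to the sensor configurations in the paper.

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.01, r=0.25):
    # Predict/update recursion: the Markov assumption gives the predict step,
    # Bayes' rule gives the measurement update.
    x, p = x0, p0
    for z in measurements:
        p = p + q              # predict: state transition adds process noise q
        k = p / (p + r)        # Kalman gain from prior vs. measurement noise r
        x = x + k * (z - x)    # update the state estimate with measurement z
        p = (1.0 - k) * p      # update the estimate variance
    return x, p

x, p = kalman_1d([1.0, 1.1, 0.9, 1.05])
```

Each update tightens the variance `p`, which is why a filter of this kind can supply well-conditioned initial values to a subsequent bundle adjustment.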
NASA Astrophysics Data System (ADS)
Gözükırmızı, Coşar; Kırkın, Melike Ebru
2017-01-01
Probabilistic evolution theory (PREVTH) provides a powerful framework for the solution of initial value problems of explicit ordinary differential equation sets with second degree multinomial right hand side functions. The use of the recursion between squarified telescope matrices provides the opportunity to obtain accurate results without much effort. Convergence may be considered one of the drawbacks of PREVTH. It is related to many factors: the initial values and the coefficients in the right hand side functions are the most apparent ones. If a space extension is utilized before PREVTH, the convergence of PREVTH may also be affected by how the space extension is performed. There are earlier works about implementations related to probabilistic evolution and how to improve convergence by methods such as analytic continuation; these were written before squarification was introduced. Since the recursion between squarified telescope matrices makes it possible to obtain results at relatively high truncation levels, it is important to obtain and analyze results for specific problems in different areas of engineering. This manuscript is part of a series of papers and conference proceedings serving that purpose.
NASA Technical Reports Server (NTRS)
Sobel, Larry; Buttitta, Claudio; Suarez, James
1993-01-01
Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched, 1M6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Wing, Kam Liu
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings, with the yield stress modeled as a random field.
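The perturbation idea can be made concrete with a one-element example: propagating uncertainty in Young's modulus through the tip displacement u = FL/(EA) of an axially loaded bar, using derivatives at the mean. All numbers are illustrative; a real PFEM analysis carries these expansions through the full stiffness matrix.

```python
# Tip displacement u = F*L/(E*A) with Young's modulus E as the random input.
F, L, A = 1_000.0, 2.0, 1e-3        # load (N), length (m), cross-section (m^2)
E_mean, E_var = 200e9, (10e9) ** 2  # mean (Pa) and variance (Pa^2) of E

def u(E):
    return F * L / (E * A)

du_dE = -F * L / (E_mean ** 2 * A)         # first derivative at the mean
d2u_dE2 = 2.0 * F * L / (E_mean ** 3 * A)  # second derivative at the mean

u_mean = u(E_mean) + 0.5 * d2u_dE2 * E_var  # second-order mean of the response
u_var = du_dE ** 2 * E_var                  # first-order variance of the response
```

The output is exactly what the abstract promises the designer: a mean response plus a variance, rather than a single deterministic curve.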
Vanderveldt, Ariana; Green, Leonard; Myerson, Joel
2014-01-01
The value of an outcome is affected both by the delay until its receipt (delay discounting) and by the likelihood of its receipt (probability discounting). Despite being well-described by the same hyperboloid function, delay and probability discounting involve fundamentally different processes, as revealed, for example, by the differential effects of reward amount. Previous research has focused on the discounting of delayed and probabilistic rewards separately, with little research examining more complex situations in which rewards are both delayed and probabilistic. In two experiments, participants made choices between smaller rewards that were both immediate and certain and larger rewards that were both delayed and probabilistic. Analyses revealed significant interactions between delay and probability factors inconsistent with an additive model. In contrast, a hyperboloid discounting model in which delay and probability were combined multiplicatively provided an excellent fit to the data. These results suggest that the hyperboloid is a good descriptor of decision making in complicated monetary choice situations like those people encounter in everyday life. PMID:24933696
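The multiplicative hyperboloid model can be written directly: the delay factor and the probability factor (expressed through the odds against receipt) combine by multiplication rather than addition. The parameter values k, h, and s below are arbitrary illustrations, not fitted estimates from the experiments.

```python
def discounted_value(amount, delay, prob, k=0.05, h=1.0, s=1.0):
    # Multiplicative hyperboloid for rewards both delayed and probabilistic:
    # theta is the odds against receiving the reward, (1 - p) / p.
    theta = (1.0 - prob) / prob
    return amount / (((1.0 + k * delay) ** s) * ((1.0 + h * theta) ** s))

# $100 in 12 months with an 80% chance of receipt, under the assumed parameters:
risky = discounted_value(100.0, delay=12, prob=0.8)
```

A purely additive model would instead subtract separate delay and probability discounts, which is the form the reported interactions ruled out.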
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft s power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn s internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
Monetizing Leakage Risk of Geologic CO2 Storage using Wellbore Permeability Frequency Distributions
NASA Astrophysics Data System (ADS)
Bielicki, Jeffrey; Fitts, Jeffrey; Peters, Catherine; Wilson, Elizabeth
2013-04-01
Carbon dioxide (CO2) may be captured from large point sources (e.g., coal-fired power plants, oil refineries, cement manufacturers) and injected into deep sedimentary basins for storage, or sequestration, from the atmosphere. This technology—CO2 Capture and Storage (CCS)—may be a significant component of the portfolio of technologies deployed to mitigate climate change. But injected CO2, or the brine it displaces, may leak from the storage reservoir through a variety of natural and manmade pathways, including existing wells and wellbores. Such leakage will incur costs to a variety of stakeholders, which may affect the desirability of potential CO2 injection locations as well as the feasibility of the CCS approach writ large. Consequently, analyzing and monetizing leakage risk is necessary to develop CCS as a viable technological option to mitigate climate change. Risk is the product of the probability of an outcome and the impact of that outcome. Assessment of leakage risk from geologic CO2 storage reservoirs requires an analysis of the probabilities and magnitudes of leakage, identification of the outcomes that may result from leakage, and an assessment of the expected economic costs of those outcomes. One critical uncertainty regarding the rate and magnitude of leakage is the leakiness of the well leakage pathway, which is characterized by a leakage permeability; recent work has sought to determine frequency distributions for the leakage permeabilities of wells and wellbores. We conduct a probabilistic analysis of leakage and monetized leakage risk for CO2 injection locations in the Michigan Sedimentary Basin (USA) using empirically derived frequency distributions for wellbore leakage permeabilities. To conduct this probabilistic risk analysis, we apply the RISCS (Risk Interference of Subsurface CO2 Storage) model (Bielicki et al, 2013a, 2012b) to injection into the Mt. Simon Sandstone.
RISCS monetizes leakage risk by combining 3D geospatial data with fluid-flow simulations from the ELSA (Estimating Leakage Semi-Analytically) model (e.g., Celia and Nordbotten, 2006) and the Leakage Impact Valuation (LIV) method (Pollak et al, 2013; Bielicki et al, 2013). We extend RISCS to iterate ELSA semi-analytic modeling simulations by drawing values from the frequency distribution of leakage permeabilities. The iterations assign these values to existing wells in the basin, and the probabilistic risk analysis thus incorporates the uncertainty of the extent of leakage. We show that monetized leakage risk can vary significantly over tens of kilometers, and we identify "hot spots" favorable to CO2 injection based on the monetized leakage risk for each potential location in the basin.
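The iterate-and-monetize scheme (draw a wellbore permeability from the frequency distribution, simulate leakage, price the outcome) can be sketched as follows. The surrogate leakage function, the permeability distribution parameters, and the cost figure are invented stand-ins for ELSA, the empirical distributions, and the LIV method respectively.

```python
import random

random.seed(1)

COST_PER_TONNE = 20.0  # assumed monetized impact of leaked CO2, $/tonne

def sample_log10_perm():
    # Stand-in for an empirical frequency distribution of wellbore
    # leakage permeability (log10 of m^2); parameters are invented.
    return random.gauss(-17.0, 1.0)

def leakage_tonnes(log10_perm):
    # Stand-in for an ELSA-style flow simulation: higher permeability
    # leaks more CO2 over the project life (assumed scaling).
    return 10.0 ** (log10_perm + 16.0)

# Iterate: draw a permeability, simulate leakage, monetize the outcome.
risks = [leakage_tonnes(sample_log10_perm()) * COST_PER_TONNE
         for _ in range(5_000)]
expected_risk = sum(risks) / len(risks)  # expected monetized leakage risk, $
```

Repeating this for each candidate injection location, with that location's existing wells, is what produces the basin-wide map of monetized risk and its "hot spots".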
Preliminary Earthquake Hazard Map of Afghanistan
Boyd, Oliver S.; Mueller, Charles S.; Rukstales, Kenneth S.
2007-01-01
Introduction Earthquakes represent a serious threat to the people and institutions of Afghanistan. As part of a United States Agency for International Development (USAID) effort to assess the resource potential and seismic hazards of Afghanistan, the Seismic Hazard Mapping group of the United States Geological Survey (USGS) has prepared a series of probabilistic seismic hazard maps that help quantify the expected frequency and strength of ground shaking nationwide. To construct the maps, we do a complete hazard analysis for each of ~35,000 sites in the study area. We use a probabilistic methodology that accounts for all potential seismic sources and their rates of earthquake activity, and we incorporate modeling uncertainty by using logic trees for source and ground-motion parameters. See the Appendix for an explanation of probabilistic seismic hazard analysis and discussion of seismic risk. Afghanistan occupies a southward-projecting, relatively stable promontory of the Eurasian tectonic plate (Ambraseys and Bilham, 2003; Wheeler and others, 2005). Active plate boundaries, however, surround Afghanistan on the west, south, and east. To the west, the Arabian plate moves northward relative to Eurasia at about 3 cm/yr. The active plate boundary trends northwestward through the Zagros region of southwestern Iran. Deformation is accommodated throughout the territory of Iran; major structures include several north-south-trending, right-lateral strike-slip fault systems in the east and, farther to the north, a series of east-west-trending reverse- and strike-slip faults. This deformation apparently does not cross the border into relatively stable western Afghanistan. In the east, the Indian plate moves northward relative to Eurasia at a rate of about 4 cm/yr. A broad, transpressional plate-boundary zone extends into eastern Afghanistan, trending southwestward from the Hindu Kush in northeast Afghanistan, through Kabul, and along the Afghanistan-Pakistan border. 
Deformation here is expressed as a belt of major, north-northeast-trending, left-lateral strike-slip faults and abundant seismicity. The seismicity intensifies farther to the northeast and includes a prominent zone of deep earthquakes associated with northward subduction of the Indian plate beneath Eurasia that extends beneath the Hindu Kush and Pamir Mountains. Production of the seismic hazard maps is challenging because the geological and seismological data required to produce a seismic hazard model are limited. The data that are available for this project include historical seismicity and poorly constrained slip rates on only a few of the many active faults in the country. Much of the hazard is derived from a new catalog of historical earthquakes: from 1964 to the present, with magnitude equal to or greater than about 4.5, and with depth between 0 and 250 kilometers. We also include four specific faults in the model: the Chaman fault with an assigned slip rate of 10 mm/yr, the Central Badakhshan fault with an assigned slip rate of 12 mm/yr, the Darvaz fault with an assigned slip rate of 7 mm/yr, and the Hari Rud fault with an assigned slip rate of 2 mm/yr. For these faults and for shallow seismicity less than 50 km deep, we incorporate published ground-motion estimates from tectonically active regions of western North America, Europe, and the Middle East. Ground-motion estimates for deeper seismicity are derived from data in subduction environments. We apply estimates derived for tectonic regions where subduction is the main tectonic process for intermediate-depth seismicity between 50- and 250-km depth. Within the framework of these limitations, we have developed a preliminary probabilistic seismic-hazard assessment of Afghanistan, the type of analysis that underpins the seismic components of modern building codes in the United States. The assessment includes maps of estimated peak ground acceleration (PGA), 0.2-second spectral acceleration (SA), and 1.0-second SA.
Probabilistic resource allocation system with self-adaptive capability
NASA Technical Reports Server (NTRS)
Yufik, Yan M. (Inventor)
1998-01-01
A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and weighted links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Weights are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves in time due to modification of network parameters and partitioning criteria based on the performance feedback.
A Markov chain model for reliability growth and decay
NASA Technical Reports Server (NTRS)
Siegrist, K.
1982-01-01
A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial, and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions, the successive states form a Markov chain and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure', and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.
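For the two-state special case, the steady-state distribution mentioned above follows directly from the state transition matrix. The transition probabilities below are assumed for illustration; in the model they would come from marginalizing over the three trial outcomes.

```python
# Two states: 0 = 'unrepaired', 1 = 'repaired'. Assumed transition
# probabilities after summing over the three trial outcomes.
P = [[0.6, 0.4],   # from unrepaired: stay 0.6, become repaired 0.4
     [0.1, 0.9]]   # from repaired: regress 0.1, stay 0.9

def stationary(P, iters=1_000):
    # Power iteration on the row distribution: pi <- pi P.
    pi = [1.0, 0.0]
    for _ in range(iters):
        pi = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in (0, 1)]
    return pi

pi = stationary(P)  # long-run fraction of trials spent in each state
```

For these numbers the chain settles at 20% unrepaired and 80% repaired, the kind of steady-state quantity from which a reliability function and repair policy can be evaluated.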
Assigning value to energy storage systems at multiple points in an electrical grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balducci, Patrick J.; Alam, M. Jan E.; Hardy, Trevor D.
2018-01-01
This article presents a taxonomy for assigning benefits to the services provided by energy storage systems (ESSs), defines approaches for monetizing the value associated with these services, assigns values to major ESS applications by region based on a review of an extensive set of literature, and summarizes and evaluates the capabilities of several tools currently used to estimate value for specific ESS deployments.
NASA Astrophysics Data System (ADS)
Faybishenko, B.; Flach, G. P.
2012-12-01
The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter uncertainty) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning the probability distributions of input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Bair-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate PET, ET, and I distributions that are well within the range of field measurements. We also show that a selection of a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.
Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler
NASA Astrophysics Data System (ADS)
Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham
2018-04-01
We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
Single-tier city logistics model for single product
NASA Astrophysics Data System (ADS)
Saragih, N. I.; Nur Bahagia, S.; Suprayogi; Syabri, I.
2017-11-01
This research develops a single-tier city logistics model consisting of suppliers, urban consolidation centres (UCCs), and retailers. The problems addressed are how to determine the locations of UCCs, allocate retailers to opened UCCs, assign suppliers to opened UCCs, control inventory across the three entities involved, and determine vehicle routes from opened UCCs to retailers. Such an integrated model has not been developed before. All decisions are optimized simultaneously. Demand is probabilistic, following a normal distribution, and a single product is considered.
In the United States, regional-scale air quality models are being used to identify emissions reductions needed to comply with the ozone National Ambient Air Quality Standard. Previous work has demonstrated that ozone extreme values (i.e., 4th highest ozone or Design Value) are c...
Haines, Seth S.
2015-07-13
The quantities of water and hydraulic fracturing proppant required for producing petroleum (oil, gas, and natural gas liquids) from continuous accumulations, and the quantities of water extracted during petroleum production, can be quantitatively assessed using a probabilistic approach. The water and proppant assessment methodology builds on the U.S. Geological Survey methodology for quantitative assessment of undiscovered technically recoverable petroleum resources in continuous accumulations. The U.S. Geological Survey assessment methodology for continuous petroleum accumulations includes fundamental concepts such as geologically defined assessment units, and probabilistic input values including well-drainage area, sweet- and non-sweet-spot areas, and success ratio within the untested area of each assessment unit. In addition to petroleum-related information, required inputs for the water and proppant assessment methodology include probabilistic estimates of per-well water usage for drilling, cementing, and hydraulic-fracture stimulation; the ratio of proppant to water for hydraulic fracturing; the percentage of hydraulic fracturing water that returns to the surface as flowback; and the ratio of produced water to petroleum over the productive life of each well. Water and proppant assessments combine information from recent or current petroleum assessments with water- and proppant-related input values for the assessment unit being studied, using Monte Carlo simulation, to yield probabilistic estimates of the volume of water for drilling, cementing, and hydraulic fracture stimulation; the quantity of proppant for hydraulic fracture stimulation; and the volumes of water produced as flowback shortly after well completion, and produced over the life of the well.
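The Monte Carlo combination of probabilistic inputs described above can be sketched as follows; all distributions and their bounds are hypothetical placeholders, not USGS assessment inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000  # Monte Carlo iterations

# Hypothetical probabilistic inputs (triangular distributions: min, mode, max)
wells          = rng.triangular(500, 1500, 4000, n)    # untested wells in the AU
water_per_well = rng.triangular(10e3, 20e3, 40e3, n)   # m3 of water per well
proppant_ratio = rng.triangular(0.08, 0.10, 0.13, n)   # t proppant per m3 water
flowback_frac  = rng.triangular(0.05, 0.20, 0.40, n)   # fraction returning early

water_total    = wells * water_per_well        # m3 for drilling/fracturing
proppant_total = water_total * proppant_ratio  # tonnes of proppant
flowback_total = water_total * flowback_frac   # m3 of early produced water

for name, x in [("water (m3)", water_total),
                ("proppant (t)", proppant_total),
                ("flowback (m3)", flowback_total)]:
    p5, p50, p95 = np.percentile(x, [5, 50, 95])
    print(f"{name}: F95={p5:.3g}  F50={p50:.3g}  F5={p95:.3g}")
```

The fractile convention (F95 = value exceeded with 95% probability) mirrors how such resource assessments are usually reported.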
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
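A model-based false discovery rate implied by posterior probabilities of PSMs can be estimated as the average incorrectness among accepted matches; a toy sketch (not MSblender itself; the posterior values are simulated):

```python
import numpy as np

def fdr_at_threshold(post, t):
    """Model-based FDR among PSMs accepted at posterior-probability threshold t."""
    accepted = post[post >= t]
    if accepted.size == 0:
        return 0.0
    return float((1.0 - accepted).sum() / accepted.size)

# Toy posterior probabilities: a mix of confident and dubious PSMs
rng = np.random.default_rng(1)
post = np.concatenate([rng.uniform(0.9, 1.0, 900),   # mostly correct matches
                       rng.uniform(0.0, 0.5, 300)])  # mostly incorrect matches

for t in (0.5, 0.9, 0.99):
    print(f"threshold {t}: {np.sum(post >= t)} PSMs, "
          f"est. FDR {fdr_at_threshold(post, t):.4f}")
```

Raising the probability threshold trades identifications for a lower estimated error rate, which is the axis along which integrated search engines gain sensitivity.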
Bellebaum, Christian; Brodmann, Katja; Thoma, Patrizia
2014-01-01
Autism spectrum disorders (ASDs) are characterised by disturbances in social behaviour. A prevailing hypothesis suggests that these problems are related to deficits in assigning rewarding value to social stimuli. The present study aimed to examine monetary reward processing in adults with ASDs by means of event-related potentials (ERPs). Ten individuals with mild ASDs (Asperger's syndrome and high-functioning autism) and 12 healthy control subjects performed an active and an observational probabilistic reward-learning task. Both groups showed similar overall learning performance. With respect to reward processing, subjects with ASDs exhibited a general reduction in feedback-related negativity (FRN) amplitude, irrespective of feedback valence and type of learning (active or observational). Individuals with ASDs showed lower scores for cognitive empathy, while affective empathy did not differ between groups. Correlation analyses revealed that higher empathy (both cognitive and affective) negatively affected performance in observational learning in controls and in active learning in ASDs (only cognitive empathy). No relationships were seen between empathy and ERPs. Reduced FRN amplitudes are discussed in terms of a deficit in fast reward processing in ASDs, which may indicate altered reward system functioning.
Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage
Lin, Ping-Chang; Reiter, David A.; Spencer, Richard G.
2010-01-01
MRI is increasingly used to evaluate cartilage in tissue constructs, explants, and animal and patient studies. However, while mean values of MR parameters, including T1, T2, magnetization transfer rate km, apparent diffusion coefficient ADC, and the dGEMRIC-derived fixed charge density, correlate with tissue status, the ability to classify tissue according to these parameters has not been explored. Therefore, the sensitivity and specificity with which each of these parameters was able to distinguish between normal and trypsin-degraded, and between normal and collagenase-degraded, cartilage explants were determined. Initial analysis was performed using a training set to determine simple group means to which parameters obtained from a validation set were compared. T1 and ADC showed the greatest ability to discriminate between normal and degraded cartilage. Further analysis with k-means clustering, which eliminates the need for a priori identification of sample status, generally performed comparably. Use of fuzzy c-means (FCM) clustering to define centroids likewise did not result in improvement in discrimination. Finally, a FCM clustering approach in which validation samples were assigned in a probabilistic fashion to control and degraded groups was implemented, reflecting the range of tissue characteristics seen with cartilage degradation. PMID:19705467
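The "training-set group means" classification and its sensitivity/specificity can be sketched as below, with invented T1-like values standing in for the measured MR parameters:

```python
import numpy as np

def nearest_mean_classify(train_ctrl, train_deg, samples):
    """Assign each validation sample to the nearer of the two training-group means."""
    m_ctrl, m_deg = np.mean(train_ctrl), np.mean(train_deg)
    return np.where(np.abs(samples - m_deg) < np.abs(samples - m_ctrl), 1, 0)

rng = np.random.default_rng(7)
# Hypothetical T1 values (ms): degradation tends to lengthen T1
ctrl_train, deg_train = rng.normal(900, 40, 20), rng.normal(1100, 60, 20)
ctrl_val,   deg_val   = rng.normal(900, 40, 50), rng.normal(1100, 60, 50)

pred_ctrl = nearest_mean_classify(ctrl_train, deg_train, ctrl_val)
pred_deg  = nearest_mean_classify(ctrl_train, deg_train, deg_val)

sensitivity = pred_deg.mean()          # degraded samples correctly flagged
specificity = 1.0 - pred_ctrl.mean()   # control samples correctly passed
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

The probabilistic FCM variant replaces the hard nearest-mean assignment with membership grades, which is what lets borderline samples carry partial weight in both groups.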
Three-dimensional desirability spaces for quality-by-design-based HPLC development.
Mokhtar, Hatem I; Abdel-Salam, Randa A; Hadad, Ghada M
2015-04-01
In this study, three-dimensional desirability spaces were introduced as a graphical representation method of design space. This was illustrated in the context of applying quality-by-design concepts to the development of a stability-indicating gradient reversed-phase high-performance liquid chromatography method for the determination of vinpocetine and α-tocopheryl acetate in a capsule dosage form. A mechanistic retention model to optimize gradient time, initial organic solvent concentration and ternary solvent ratio was constructed for each compound from six experimental runs. Then, the desirability function of each optimized criterion and subsequently the global desirability function were calculated throughout the knowledge space. The three-dimensional desirability spaces were plotted as zones exceeding a threshold value of the desirability index in a space defined by the three optimized method parameters. Probabilistic mapping of the desirability index aided selection of the design space within the potential desirability subspaces. Three-dimensional desirability spaces offered better visualization and potential design spaces for the method as a function of three method parameters, with the ability to assign priorities to the critical quality attributes, as compared with the corresponding resolution spaces.
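A global desirability function of the Derringer-Suich type, as commonly used in such quality-by-design work, is the geometric mean of per-criterion desirabilities; the criteria, bounds, and values below are hypothetical, not the paper's chromatographic data:

```python
import numpy as np

def desirability_larger_is_better(y, low, high, s=1.0):
    """Derringer-Suich one-sided desirability: 0 below `low`, 1 above `high`."""
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

# Hypothetical criteria at one point of the knowledge space
d_resolution = desirability_larger_is_better(2.4, low=1.5, high=3.0)        # peak resolution
d_runtime    = desirability_larger_is_better(-22.0, low=-30.0, high=-15.0)  # minimize time via negation
d_symmetry   = desirability_larger_is_better(0.92, low=0.8, high=1.0)       # peak symmetry

# Global desirability: geometric mean, so any zero criterion vetoes the point
D = (d_resolution * d_runtime * d_symmetry) ** (1.0 / 3.0)
print(f"D = {D:.3f}")
```

Evaluating D over a grid of the three method parameters and keeping the region where D exceeds a threshold is exactly what produces a three-dimensional desirability space.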
A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.
Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan
2015-01-01
Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework is not only capable of detecting and identifying faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with traditional methods.
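The final fusion step uses Dempster's rule of combination; a minimal sketch with invented BPAs over three fault classes (not the paper's experimental values):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for BPAs given as {frozenset_of_hypotheses: mass}."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb            # mass on contradictory pairs
    k = 1.0 - conflict                          # normalization constant
    return {h: v / k for h, v in combined.items()}

# Hypothetical BPAs from two PSVM classifiers over fault classes {normal, f1, f2}
frame = frozenset({"normal", "f1", "f2"})
m1 = {frozenset({"f1"}): 0.7, frozenset({"f2"}): 0.1, frame: 0.2}
m2 = {frozenset({"f1"}): 0.6, frozenset({"normal"}): 0.1, frame: 0.3}

fused = dempster_combine(m1, m2)
best = max(fused, key=fused.get)
print("diagnosis:", set(best))
```

Because both classifiers lean towards fault f1, the fused mass on f1 (about 0.87) exceeds either individual belief, which is the intended effect of the D-S fusion stage.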
Burgos-Paz, William; Cerón-Muñoz, Mario; Solarte-Portilla, Carlos
2011-10-01
The aim was to establish the genetic diversity and population structure of three guinea pig lines, from seven production zones located in Nariño, southwest Colombia. A total of 384 individuals were genotyped with six microsatellite markers. The measurement of intrapopulation diversity revealed allelic richness ranging from 3.0 to 6.56, and observed heterozygosity (Ho) from 0.33 to 0.60, with a deficit in heterozygous individuals. Although statistically significant (p < 0.05), genetic differentiation between population pairs was found to be low. Genetic distance, as well as clustering of guinea-pig lines and populations, coincided with the historical and geographical distribution of the populations. Likewise, high genetic identity between improved and native lines was established. An analysis of group probabilistic assignment revealed that each line should not be considered as a genetically homogeneous group. The findings corroborate the absorption of native genetic material into the improved line introduced into Colombia from Peru. It is necessary to establish conservation programs for native-line individuals in Nariño, and control genealogical and production records in order to reduce the inbreeding values in the populations.
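Observed heterozygosity, one of the diversity measures reported, is simply the fraction of heterozygous individuals at a locus; a toy example with invented microsatellite genotypes:

```python
def observed_heterozygosity(genotypes):
    """Ho = fraction of individuals carrying two different alleles at a locus."""
    het = sum(1 for a, b in genotypes if a != b)
    return het / len(genotypes)

# Hypothetical microsatellite genotypes (allele sizes in bp) at one locus
locus = [(120, 124), (120, 120), (124, 128), (128, 128), (120, 128),
         (124, 124), (120, 124), (124, 128), (120, 120), (124, 128)]
print(f"Ho = {observed_heterozygosity(locus):.2f}")
```

Comparing Ho against the heterozygosity expected under Hardy-Weinberg proportions is what reveals the heterozygote deficit mentioned in the abstract.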
What is the Value Added to Adaptation Planning by Probabilistic Projections of Climate Change?
NASA Astrophysics Data System (ADS)
Wilby, R. L.
2008-12-01
Probabilistic projections of climate change offer new sources of risk information to support regional impacts assessment and adaptation options appraisal. However, questions continue to surround how best to apply these scenarios in a practical context, and whether the added complexity and computational burden lead to more robust decision-making. This paper provides an overview of recent efforts in the UK to 'bench-test' frameworks for employing probabilistic projections ahead of the release of the next-generation UKCIP08 projections (in November 2008). This involves close collaboration among government agencies, research and stakeholder communities. Three examples will be cited to illustrate how probabilistic projections are already informing decisions about future flood risk management in London, water resource planning in trial river basins, and assessments of risks from rising water temperatures to Atlantic salmon stocks in southern England. When compared with conventional deterministic scenarios, ensemble projections allow exploration of a wider range of management options and highlight timescales for implementing adaptation measures. Users of probabilistic scenarios must keep in mind that other uncertainties (e.g., due to impacts model structure and parameterisation) should be handled in an equally rigorous way to those arising from climate models and emission scenarios. Finally, it is noted that a commitment to long-term monitoring is also critical for tracking environmental change, testing model projections, and evaluating the success (or not) of any scenario-led interventions.
NASA Astrophysics Data System (ADS)
Ershov, Egor; Karnaukhov, Victor; Mozerov, Mikhail
2016-02-01
Two consecutive frames of a lateral navigation camera video sequence can be considered an appropriate approximation to epipolar stereo. To overcome edge-aware inaccuracy caused by occlusion, we propose a model that matches the current frame to the next and to the previous ones. The positive disparity of matching to the previous frame has its symmetric negative disparity to the next frame. The proposed algorithm performs a probabilistic choice for each matched pixel between the positive disparity cost and its symmetric disparity cost. A disparity map obtained by optimization over the cost volume composed of the proposed probabilistic choice is more accurate than the traditional left-to-right and right-to-left disparity-map cross-check. Also, our algorithm needs half as many computational operations per pixel as the cross-check technique. The effectiveness of our approach is demonstrated on synthetic data and real video sequences with ground-truth values.
Classification of Company Performance using Weighted Probabilistic Neural Network
NASA Astrophysics Data System (ADS)
Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi
2018-05-01
Company performance can be classified by examining financial status, whether in a good or bad state. Classification of company performance can be achieved by parametric or non-parametric approaches; the neural network is one of the non-parametric methods. One of the Artificial Neural Network (ANN) models is the Probabilistic Neural Network (PNN). A PNN consists of four layers: input layer, pattern layer, addition layer, and output layer. The distance function used is the Euclidean distance, and each class shares the same weight values. In this study, the PNN is modified in the weighting process between the pattern layer and the addition layer by incorporating the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling company performance with the WPNN model achieves a very high accuracy, reaching 100%.
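The WPNN idea (a PNN whose pattern-layer distance is Mahalanobis rather than Euclidean) can be sketched as follows; a Gaussian kernel and simulated financial-ratio data are stand-ins for the paper's actual weighting scheme and dataset:

```python
import numpy as np

def wpnn_classify(x, classes, sigma=1.0):
    """PNN pattern layer using the Mahalanobis distance (the WPNN modification)."""
    scores = []
    for patterns in classes:            # one array of training patterns per class
        cov = np.cov(patterns.T) + 1e-6 * np.eye(patterns.shape[1])
        inv = np.linalg.inv(cov)        # per-class inverse covariance
        d2 = np.array([(x - p) @ inv @ (x - p) for p in patterns])
        # summation ("addition") layer: average kernel response per class
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(3)
good = rng.normal([2.0, 0.5], [0.3, 0.1], (40, 2))  # hypothetical financial ratios
bad  = rng.normal([0.5, 1.5], [0.3, 0.2], (40, 2))

label, _ = wpnn_classify(np.array([1.9, 0.6]), [good, bad])
print("class:", "good" if label == 0 else "bad")
```

Using the class covariance rescales each feature by its spread and correlation, so features with very different variances no longer dominate the kernel distance.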
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne
2007-01-01
A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, achieve significant improvement in performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
Probabilistic analysis of bladed turbine disks and the effect of mistuning
NASA Technical Reports Server (NTRS)
Shah, A. R.; Nagpal, V. K.; Chamis, Christos C.
1990-01-01
Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping are included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first-order analysis is used to compute the CDF. The sensitivities of the different random variables are identified. The effect of the number of rotor blades on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have a significant effect on the response distribution of the bladed rotor.
Faith, Daniel P
2008-12-01
New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species. 
Probabilistic PD provides a framework for single-species assessment that is well-integrated with a broader measurement of impacts on PD owing to climate change and other factors.
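Expected PD under independent extinction probabilities is the sum, over branches, of branch length times the probability that at least one descendant species survives; a worked toy tree:

```python
def expected_pd(branches, p_ext):
    """E[PD] = sum over branches of length x P(at least one descendant survives).

    branches: list of (length, set_of_descendant_species).
    p_ext: extinction probability per species (independence assumed).
    """
    total = 0.0
    for length, species in branches:
        p_all_lost = 1.0
        for s in species:
            p_all_lost *= p_ext[s]        # all descendants go extinct
        total += length * (1.0 - p_all_lost)
    return total

# Hypothetical 3-taxon tree ((A,B),C) with branch lengths
branches = [(1.0, {"A"}), (1.0, {"B"}), (2.0, {"A", "B"}), (3.0, {"C"})]
p_ext = {"A": 0.8, "B": 0.8, "C": 0.1}

base = expected_pd(branches, p_ext)
# Conserving A (extinction probability reduced to 0.1) also secures the shared deep branch
saved_A = expected_pd(branches, p_ext | {"A": 0.1})
print(f"E[PD] = {base:.2f}, after conserving A = {saved_A:.2f}")
```

This is why the framework avoids static scores: the gain from protecting A depends on B's current extinction probability through their shared branch, and must be re-evaluated whenever threat statuses change.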
2009-11-17
set of chains, the step adds scheduled methods that have an a priori likelihood of a failure outcome (Lines 3-5). It identifies the max eul value of the... activity meeting its objective, as well as its expected contribution to the schedule. By explicitly calculating these values, PADS is able to summarize the... variables. One of the main difficulties of this model is convolving the probability density functions and value functions while solving the model; this
Willingness-to-pay for a probabilistic flood forecast: a risk-based decision-making game
NASA Astrophysics Data System (ADS)
Arnal, Louise; Ramos, Maria-Helena; Coughlan, Erin; Cloke, Hannah L.; Stephens, Elisabeth; Wetterhall, Fredrik; van Andel, Schalk-Jan; Pappenberger, Florian
2016-04-01
Forecast uncertainty is a twofold issue, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic forecasts over deterministic forecasts for a diversity of activities in the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partly to the difficulty of transforming the probability of occurrence of an event into a binary decision. The setup and the results of a risk-based decision-making experiment, designed as a game on the topic of flood protection mitigation, called "How much are you prepared to pay for a forecast?", will be presented. The game was played at several workshops in 2015, including during this session at the EGU conference in 2015, and a total of 129 worksheets were collected and analysed. The aim of this experiment was to contribute to the understanding of the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants' willingness-to-pay for a forecast, the results of the game showed that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers. Balancing avoided costs and the cost (or the benefit) of having forecasts available for making decisions is not straightforward, even in a simplified game situation, and is a topic that deserves more attention from the hydrological forecasting community in the future.
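The economic value of a probabilistic forecast in such a game can be framed with the classic cost-loss model: protect whenever the forecast probability exceeds the cost/loss ratio. The sketch below compares expected expenses with and without a forecast, assuming hypothetical costs and idealized, perfectly reliable forecast probabilities:

```python
import numpy as np

def expected_expense(p_flood, protect_cost, loss, forecast_prob=None):
    """Expected expense per event under the classic cost-loss model."""
    if forecast_prob is None:           # no forecast: protect always or never
        return min(protect_cost, p_flood * loss)
    # with a reliable probabilistic forecast: protect only when p >= C/L
    protect = forecast_prob >= protect_cost / loss
    return float(np.where(protect, protect_cost, forecast_prob * loss).mean())

C, L = 10.0, 100.0      # hypothetical protection cost and flood loss
climatology = 0.15      # long-run flood frequency

rng = np.random.default_rng(0)
# Forecast probabilities varying event to event, mean close to climatology
p = rng.beta(0.5, 2.8, 10_000)

print("no forecast  :", expected_expense(climatology, C, L))
print("with forecast:", expected_expense(climatology, C, L, forecast_prob=p))
```

The gap between the two expenses is an upper bound on a rational player's willingness-to-pay for the forecast; imperfect reliability, and the perception effects reported above, shrink it.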
Joseph Buongiorno
2001-01-01
Faustmann's formula gives the land value, or the forest value of land with trees, under deterministic assumptions regarding future stand growth and prices, over an infinite horizon. Markov decision process (MDP) models generalize Faustmann's approach by recognizing that future stand states and prices are known only as probabilistic distributions. The...
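Faustmann's deterministic formula values bare land as the discounted net harvest revenue summed over infinitely many identical rotations; the sketch below uses one common variant (regeneration cost incurred at harvest) with an invented growth curve and prices:

```python
import math

def faustmann_lev(price, volume_fn, cost, r, T):
    """Land expectation value for an infinite series of rotations of length T."""
    net = price * volume_fn(T) - cost          # net revenue at each harvest
    return net * math.exp(-r * T) / (1.0 - math.exp(-r * T))

# Hypothetical stand: logistic volume growth (m3/ha), cost in $/ha, 3% rate
vol = lambda t: 600.0 / (1.0 + math.exp(-(t - 40.0) / 8.0))

best_T = max(range(20, 81), key=lambda T: faustmann_lev(50.0, vol, 1500.0, 0.03, T))
print("optimal rotation:", best_T, "years, LEV =",
      round(faustmann_lev(50.0, vol, 1500.0, 0.03, best_T), 1))
```

The MDP generalization replaces the deterministic volume and price with transition probabilities over stand states and prices, and the fixed rotation with a state-dependent harvest policy.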
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
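The inverse-S probability weighting discussed above is often parameterized with the Tversky-Kahneman (1992) function; the sketch below applies it to a toy Bayesian update (the prior, likelihood ratio, and parameter values are illustrative, not fitted to the study's data):

```python
def tk_weight(p, gamma):
    """Tversky-Kahneman (1992) inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def distorted_posterior(prior, likelihood_ratio, gamma):
    """Bayesian belief revision applied to a subjectively distorted prior."""
    w = tk_weight(prior, gamma)
    odds = (w / (1 - w)) * likelihood_ratio    # posterior odds from distorted prior
    return odds / (1 + odds)

for gamma in (1.0, 0.6):   # gamma = 1 is undistorted; gamma < 1 gives the inverse-S
    post = distorted_posterior(prior=0.2, likelihood_ratio=3.0, gamma=gamma)
    print(f"gamma={gamma}: posterior={post:.3f}")
```

With gamma < 1 the low prior is overweighted, pulling the posterior up; letting gamma vary per participant and condition, drawn from weakly informative priors, is the hierarchical move the abstract describes.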
Bayesian networks improve causal environmental ...
Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, sources of uncertainty in conventional weight of evidence approaches are ignored that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improve the ability of models to incorporate strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on value
Malekpour, Shirin; Langeveld, Jeroen; Letema, Sammy; Clemens, François; van Lier, Jules B
2013-03-30
This paper introduces the probabilistic evaluation framework, to enable transparent and objective decision-making in technology selection for sanitation solutions in low-income countries. The probabilistic framework recognizes the often poor quality of the available data for evaluations. Within this framework, evaluations are based on the probabilities that the expected outcomes occur in practice, considering the uncertainties in evaluation parameters. Consequently, the outcomes of evaluations are not single point estimates but ranges of possible outcomes. A first trial application of this framework for the evaluation of sanitation options in the Nyalenda settlement in Kisumu, Kenya, showed how the range of values that an evaluation parameter may take in practice influences the evaluation outcomes. In addition, as the probabilistic evaluation requires various site-specific data, a sensitivity analysis was performed to determine the influence of the quality of each data set on the evaluation outcomes. Based on that, data collection activities could be (re)directed, in a trade-off between the required investments in those activities and the resolution of the decisions that are to be made. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sujadi, Imam; Kurniasih, Rini; Subanti, Sri
2017-05-01
In the era of 21st century learning, technology is needed as a learning medium. Using Edmodo as a learning medium is one option to complement the learning process. This research focuses on the effectiveness of learning material delivered through Edmodo. The aim of this research is to determine whether the level of probabilistic thinking of students who use learning material with Edmodo is higher than that of students taught with the existing learning materials (books) in grade 8. This is quasi-experimental research using a control-group pretest-posttest design. The population of this study was grade 8 students of SMPN 12 Surakarta, and the sample was selected by random sampling. Two independent samples were compared using the Kolmogorov-Smirnov test. The obtained value of the test statistic, M = 0.38, exceeds the tabled critical one-tailed value M0.05 = 0.011. The result of the research is that the learning material with Edmodo enhances the level of probabilistic thinking of learners more effectively than the existing learning materials (books). Therefore, learning material using Edmodo can be used in the learning process. It can also be developed into other learning materials through Edmodo.
Fragment assignment in the cloud with eXpress-D
2013-01-01
Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available, and it can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
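The batch EM iteration that such fragment-assignment methods distribute can be sketched in a few lines. This toy version assumes equal transcript lengths and uses invented compatibility data; the real implementations add length normalization and distribute the E-step across a cluster:

```python
def em_fragment_assignment(compat, n_iters=200):
    """Batch EM for probabilistic fragment assignment.
    compat[f] lists the transcripts fragment f is compatible with
    (ambiguous fragments list several)."""
    transcripts = sorted({t for ts in compat for t in ts})
    rho = {t: 1.0 / len(transcripts) for t in transcripts}  # abundances
    for _ in range(n_iters):
        counts = {t: 0.0 for t in transcripts}
        for ts in compat:
            z = sum(rho[t] for t in ts)
            for t in ts:
                counts[t] += rho[t] / z   # E-step: fractional assignment
        total = sum(counts.values())
        rho = {t: c / total for t, c in counts.items()}  # M-step
    return rho

# Three fragments: one unique to A, one unique to B, one ambiguous.
rho = em_fragment_assignment([["A"], ["B"], ["A", "B"]])   # A = B = 0.5
```

Each iteration touches every fragment, which is why batch EM stops scaling on large datasets and motivates the distributed Spark formulation.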
Preference pulses and the win-stay, fix-and-sample model of choice.
Hachiga, Yosuke; Sakagami, Takayuki; Silberberg, Alan
2015-11-01
Two groups of six rats each were trained to respond to two levers for a food reinforcer. One group was trained on concurrent variable-ratio 20 extinction schedules of reinforcement. The second group was trained on a concurrent variable-interval 27-s extinction schedule. In both groups, lever-schedule assignments changed randomly following reinforcement; a light cued the lever providing the next reinforcer. In the next condition, the light cue was removed and reinforcer assignment strictly alternated between levers. The next two conditions redetermined, in order, the first two conditions. Preference pulses, defined as a tendency for relative response rate to decline to the just-reinforced alternative with time since reinforcement, only appeared during the extinction schedule. Although the pulse's functional form was well described by a reinforcer-induction equation, there was a large residual between actual data and a pulse-as-artifact simulation (McLean, Grace, Pitts, & Hughes, 2014) used to discern reinforcer-dependent contributions to pulsing. However, if that simulation was modified to include a win-stay tendency (a propensity to stay on the just-reinforced alternative), the residual was greatly reduced. Additional modifications of the parameter values of the pulse-as-artifact simulation enabled it to accommodate the present results as well as those it originally accommodated. In its revised form, this simulation was used to create a model that describes response runs to the preferred alternative as terminating probabilistically, and runs to the unpreferred alternative as punctate with occasional perseverative response runs. After reinforcement, choices are modeled as returning briefly to the lever location that had been just reinforced. This win-stay propensity is hypothesized as due to reinforcer induction. © Society for the Experimental Analysis of Behavior.
Wang, Shi-Heng; Chen, Wei J; Tsai, Yu-Chin; Huang, Yung-Hsiang; Hwu, Hai-Gwo; Hsiao, Chuhsing K
2013-01-01
The copy number variation (CNV) is a type of genetic variation in the genome. It is measured based on signal intensity measures and can be assessed repeatedly to reduce the uncertainty in PCR-based typing. Studies have shown that CNVs may lead to phenotypic variation and modification of disease expression. Various challenges exist, however, in the exploration of CNV-disease association. Here we construct latent variables to infer the discrete CNV values and to estimate the probability of mutations. In addition, we propose to pool rare variants to increase the statistical power and we conduct family studies to mitigate the computational burden in determining the composition of CNVs on each chromosome. To explore in a stochastic sense the association between the collapsing CNV variants and disease status, we utilize a Bayesian hierarchical model incorporating the mutation parameters. This model assigns integers in a probabilistic sense to the quantitatively measured copy numbers, and is able to test simultaneously the association for all variants of interest in a regression framework. This integrative model can account for the uncertainty in copy number assignment and differentiate if the variation was de novo or inherited on the basis of posterior probabilities. For family studies, this model can accommodate the dependence within family members and among repeated CNV data. Moreover, the Mendelian rule can be assumed under this model and yet the genetic variation, including de novo and inherited variation, can still be included and quantified directly for each individual. Finally, simulation studies show that this model has high true positive and low false positive rates in the detection of de novo mutation.
Probabilistically modeling lava flows with MOLASSES
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Connor, C.; Gallant, E.
2017-12-01
Modeling lava flows through Cellular Automata methods enables a computationally inexpensive means to quickly forecast lava flow paths and ultimate areal extents. We have developed a lava flow simulator, MOLASSES, that forecasts lava flow inundation over an elevation model from a point source eruption. This modular code can be implemented in a deterministic fashion with given user inputs that will produce a single lava flow simulation. MOLASSES can also be implemented in a probabilistic fashion, where given user inputs define parameter distributions that are randomly sampled to create many lava flow simulations. This probabilistic approach enables uncertainty in input data to be expressed in the model results, and MOLASSES outputs a probability map of inundation instead of a determined lava flow extent. Since the code is comparatively fast, we use it probabilistically to investigate where potential vents are located that may impact specific sites and areas, as well as the unconditional probability of lava flow inundation of sites or areas from any vent. We have validated the MOLASSES code against community-defined benchmark tests and against the real-world lava flows at Tolbachik (2012-2013) and Pico do Fogo (2014-2015). To determine the efficacy of the MOLASSES simulator at accurately and precisely mimicking the inundation area of real flows, we report goodness of fit using both model sensitivity and the positive predictive value, the latter of which is a Bayesian posterior statistic. Model sensitivity is often used in evaluating lava flow simulators, as it describes how much of the lava flow was successfully modeled by the simulation. We argue that the positive predictive value is equally important in determining how good a simulator is, as it describes the percentage of the simulation space that was actually inundated by lava.
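The two goodness-of-fit statistics named above reduce to simple ratios over matched observed and simulated inundation grids. A minimal sketch with invented boolean grids:

```python
def fit_statistics(observed, simulated):
    """Model sensitivity and positive predictive value for inundation
    grids; observed and simulated are same-shaped nested bool lists."""
    tp = fp = fn = 0
    for obs_row, sim_row in zip(observed, simulated):
        for obs, sim in zip(obs_row, sim_row):
            if obs and sim:
                tp += 1          # correctly inundated cell
            elif sim and not obs:
                fp += 1          # over-prediction
            elif obs and not sim:
                fn += 1          # missed inundation
    sensitivity = tp / (tp + fn)  # fraction of the real flow reproduced
    ppv = tp / (tp + fp)          # fraction of the simulation that was real
    return sensitivity, ppv

obs = [[True, True, False],
       [True, False, False]]
sim = [[True, True, True],
       [False, False, False]]
sens, ppv = fit_statistics(obs, sim)   # sens = 2/3, ppv = 2/3
```

A simulator that paints the whole map scores perfect sensitivity but poor PPV, which is exactly the argument the abstract makes for reporting both.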
NASA Astrophysics Data System (ADS)
Addor, N.; Jaun, S.; Fundel, F.; Zappa, M.
2012-04-01
The Sihl River flows through Zurich, Switzerland's most populated city, for which it represents the largest flood threat. To anticipate extreme discharge events and provide decision support in case of flood risk, a hydrometeorological ensemble prediction system (HEPS) was launched operationally in 2008. This model chain relies on deterministic (COSMO-7) and probabilistic (COSMO-LEPS) atmospheric forecasts, which are used to force a semi-distributed hydrological model (PREVAH) coupled to a hydraulic model (FLORIS). The resulting hydrological forecasts are eventually communicated to the stakeholders involved in the Sihl discharge management. This fully operational setting provides a real framework with which we assessed the potential of deterministic and probabilistic discharge forecasts for flood mitigation. To study the suitability of HEPS for small-scale basins and to quantify the added value conveyed by the probability information, a 31-month reforecast was produced for the Sihl catchment (336 km²). Several metrics support the conclusion that the performance gain is of up to 2 days of lead time for the catchment considered. Brier skill scores show that probabilistic hydrological forecasts outperform their deterministic counterparts for all the lead times and event intensities considered. The small size of the Sihl catchment does not prevent skillful discharge forecasts, but makes them particularly dependent on correct precipitation forecasts. Our evaluation stresses that the capacity of the model to provide confident and reliable mid-term probability forecasts for high discharges is limited. We finally highlight challenges for making decisions on the basis of hydrological predictions, and discuss the need for a tool to be used in addition to forecasts to compare the different mitigation actions possible in the Sihl catchment.
Bridges for Pedestrians with Random Parameters using the Stochastic Finite Elements Analysis
NASA Astrophysics Data System (ADS)
Szafran, J.; Kamiński, M.
2017-02-01
The main aim of this paper is to present a Stochastic Finite Element Method analysis with reference to principal design parameters of bridges for pedestrians: the eigenfrequency and the deflection of the bridge span. They are considered with respect to the random thickness of plates in the boxed-section bridge platform, the Young modulus of structural steel, and the static load resulting from a crowd of pedestrians. The influence of the quality of the numerical model in the context of traditional FEM is also shown on the example of a simple steel shield. Steel structures with random parameters are discretized in exactly the same way as for the needs of the traditional Finite Element Method. Its probabilistic version is provided thanks to the Response Function Method, where several numerical tests with random parameter values varying around their mean values enable the determination of the structural response and, thanks to the Least Squares Method, its final probabilistic moments.
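The Response Function Method can be illustrated compactly: run a handful of deterministic solves with the random parameter varied around its mean, fit a least-squares response function, and take probabilistic moments of the fitted response. In this sketch the "FEM run" is a stand-in closed-form deflection formula and all constants are illustrative:

```python
import random
import statistics

def fem_deflection(E):
    """Stand-in for one deterministic FEM run: midspan deflection of a
    beam scales as 1/E (the constant lumps load and geometry)."""
    return 5.0e12 / E

def response_function(mean_E, spread=0.1, n_runs=9):
    """Response Function Method: fit a least-squares line w(E) = a + b*E
    to a few deterministic runs with E varied around its mean value."""
    xs = [mean_E * (1.0 + spread * (2.0 * i / (n_runs - 1) - 1.0))
          for i in range(n_runs)]
    ys = [fem_deflection(x) for x in xs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda E: a + b * E

def response_moments(resp, mean_E, cov=0.05, n=50_000, seed=1):
    """First two probabilistic moments of the response for Gaussian E."""
    rng = random.Random(seed)
    samples = [resp(rng.gauss(mean_E, cov * mean_E)) for _ in range(n)]
    return statistics.fmean(samples), statistics.stdev(samples)

resp = response_function(210.0e9)             # steel: mean E = 210 GPa
mean_w, std_w = response_moments(resp, 210.0e9)
```

The payoff is that only a few expensive deterministic solves are needed; all the probabilistic work happens on the cheap fitted response.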
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pichara, Karim; Protopapas, Pavlos
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.
NASA Astrophysics Data System (ADS)
Baddari, Kamel; Bellalem, Fouzi; Baddari, Ibtihel; Makdeche, Said
2016-10-01
Statistical tests have been used to adjust the Zemmouri seismic data using a distribution function. The Pareto law has been used and the probabilities of various expected earthquakes were computed. A mathematical expression giving the quantiles was established. The extreme values limiting law confirmed the accuracy of the adjustment method. Using the moment magnitude scale, a probabilistic model was made to predict the occurrences of strong earthquakes. The seismic structure has been characterized by the slope of the recurrence plot γ, the fractal dimension D, the concentration parameter Ksr, and the Hurst exponents Hr and Ht. The values of D, γ, Ksr, Hr, and Ht diminished many months before the principal seismic shock (M = 6.9) of the studied seismoactive zone occurred. Three stages of the deformation of the geophysical medium are manifested in the variation of the coefficient G% of the clustering of minor seismic events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. Schroeder; R. W. Youngblood
The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature) and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
Analytical probabilistic modeling of RBE-weighted dose for ion therapy.
Wieser, H P; Hennig, P; Wahl, N; Bangert, M
2017-11-10
Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm, 2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively.
We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.
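The core APM idea, closed-form moments of a nonlinear function of Gaussian inputs, can be shown at toy scale with the linear-quadratic effect ε = αd + βd² for a Gaussian dose d ~ N(μ, σ²). Using Cov(d, d²) = 2μσ² and Var(d²) = 4μ²σ² + 2σ⁴, both E[ε] and Var[ε] come out exactly, which a Monte Carlo run can verify. This is a one-voxel illustration of the principle, not the paper's pencil-beam algorithm, and all parameter values are invented:

```python
import random

def lq_moments(alpha, beta, mu, sigma):
    """Exact mean and variance of eps = alpha*d + beta*d**2 for
    d ~ N(mu, sigma**2), via the Gaussian moment identities."""
    mean = alpha * mu + beta * (mu ** 2 + sigma ** 2)
    var = (alpha ** 2 * sigma ** 2
           + beta ** 2 * (4.0 * mu ** 2 * sigma ** 2 + 2.0 * sigma ** 4)
           + 4.0 * alpha * beta * mu * sigma ** 2)
    return mean, var

def lq_moments_mc(alpha, beta, mu, sigma, n=100_000, seed=7):
    """Monte Carlo check of the closed form."""
    rng = random.Random(seed)
    eps = [alpha * d + beta * d * d
           for d in (rng.gauss(mu, sigma) for _ in range(n))]
    m = sum(eps) / n
    v = sum((e - m) ** 2 for e in eps) / (n - 1)
    return m, v

exact = lq_moments(0.1, 0.05, 2.0, 0.3)    # mean = 0.4045 exactly
mc = lq_moments_mc(0.1, 0.05, 2.0, 0.3)
```

The closed form costs a handful of multiplications where the sampling check costs 100,000 draws, which is the scaling advantage APM exploits voxel by voxel.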
NASA Astrophysics Data System (ADS)
Wellons, Sarah; Torrey, Paul
2017-06-01
Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
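The probabilistic linking described above amounts to a weighted average: each candidate galaxy receives a probability from the number-density distribution function, and those probabilities weight its physical properties. A minimal sketch assuming a Gaussian spread in log cumulative number density, with invented widths and data:

```python
import math

def progenitor_weights(log_n_candidates, log_n_expected, scatter=0.3):
    """Normalized weights from a Gaussian PDF in log10 cumulative comoving
    number density, centred on the expected (median-evolved) density."""
    w = [math.exp(-0.5 * ((ln - log_n_expected) / scatter) ** 2)
         for ln in log_n_candidates]
    total = sum(w)
    return [x / total for x in w]

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights))

# Candidate descendants: log10 number densities and stellar masses.
log_n = [-3.1, -3.4, -3.5, -4.0]
log_mass = [10.2, 10.6, 10.7, 11.1]
w = progenitor_weights(log_n, log_n_expected=-3.5)
mean_log_mass = weighted_mean(log_mass, w)
```

Fixed-bin selection would weight the same candidates 0 or 1; the smooth weights capture the dispersion the paper shows the fixed bin misses.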
Assignment of functional activations to probabilistic cytoarchitectonic areas revisited.
Eickhoff, Simon B; Paus, Tomas; Caspers, Svenja; Grosbras, Marie-Helene; Evans, Alan C; Zilles, Karl; Amunts, Katrin
2007-07-01
Probabilistic cytoarchitectonic maps in standard reference space provide a powerful tool for the analysis of structure-function relationships in the human brain. While these microstructurally defined maps have already been successfully used in the analysis of somatosensory, motor or language functions, several conceptual issues in the analysis of structure-function relationships still demand further clarification. In this paper, we demonstrate the principal approaches for anatomical localisation of functional activations based on probabilistic cytoarchitectonic maps by exemplary analysis of an anterior parietal activation evoked by visual presentation of hand gestures. After consideration of the conceptual basis and implementation of volume or local maxima labelling, we comment on some potential interpretational difficulties, limitations and caveats that could be encountered. Extending and supplementing these methods, we then propose an additional approach for quantification of structure-function correspondences based on distribution analysis. This approach relates the cytoarchitectonic probabilities observed at a particular functionally defined location to the areal specific null distribution of probabilities across the whole brain (i.e., the full probability map). Importantly, this method avoids the need for a unique classification of voxels to a single cortical area and may increase the comparability between results obtained for different areas. Moreover, as distribution-based labelling quantifies the "central tendency" of an activation with respect to anatomical areas, it will, in combination with the established methods, allow an advanced characterisation of the anatomical substrates of functional activations. Finally, the advantages and disadvantages of the various methods are discussed, focussing on the question of which approach is most appropriate for a particular situation.
NASA Astrophysics Data System (ADS)
Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko
2018-03-01
A major task of climate science is to provide reliable projections of climate change for the future. To enable more solid statements and to decrease the range of uncertainty, global general circulation models and regional climate models are evaluated based on a 2 × 2 contingency table approach to generate model weights. These weights are compared among different methodologies, and their impact on probabilistic projections of temperature and precipitation changes is investigated. Simulated seasonal precipitation and temperature for both 50-year trends and climatological means are assessed at two spatial scales: in seven study regions around the globe and in eight sub-regions of the Mediterranean area. Overall, 24 models of phase 3 and 38 models of phase 5 of the Coupled Model Intercomparison Project, altogether 159 transient simulations of precipitation and 119 of temperature from four emissions scenarios, are evaluated against the ERA-20C reanalysis over the 20th century. The results show high conformity with previous model evaluation studies. The metrics reveal that the mean of precipitation and both the mean and trend of temperature agree well with the reference dataset and indicate improvement for the more recent ensemble mean, especially for temperature. The method is highly transferable to a variety of further applications in climate science. Overall, there are regional differences in simulation quality; however, these are less pronounced than those between the results for 50-year mean and trend. The trend results are suitable for assigning weighting factors to climate models. Yet, the implications for probabilistic climate projections are strictly dependent on the region and season.
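A 2 × 2 contingency-table weight of this kind can be sketched as follows: flag each region or season by whether model and reference agree on an event (say, a positive 50-year trend), then turn the table into a skill score used as the model weight. The Heidke-style score below is one common choice, not necessarily the paper's exact metric, and the flags are invented:

```python
def contingency_table(model_flags, ref_flags):
    """2x2 table from boolean event flags (e.g. positive 50-year trend)."""
    a = sum(m and r for m, r in zip(model_flags, ref_flags))          # hits
    b = sum(m and not r for m, r in zip(model_flags, ref_flags))      # false alarms
    c = sum(not m and r for m, r in zip(model_flags, ref_flags))      # misses
    d = sum(not m and not r for m, r in zip(model_flags, ref_flags))  # correct negatives
    return a, b, c, d

def heidke_skill(a, b, c, d):
    """Heidke skill score: accuracy relative to chance, at most 1."""
    n = a + b + c + d
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    return (a + d - expected) / (n - expected)

model = [True, True, False, False, True, False]
ref = [True, False, False, False, True, True]
hss = heidke_skill(*contingency_table(model, ref))   # 1/3
weight = max(hss, 0.0)   # no negative weight for worse-than-chance models
```

Normalizing the clipped scores across the ensemble then yields the per-model weights that enter the probabilistic projection.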
Yoon, Hyun Jin; Cheon, Sang Myung; Jeong, Young Jin; Kang, Do-Young
2012-02-01
We assign the anatomical names of functional activation regions in the brain, based on the probabilistic cytoarchitectonic atlas by Anatomy 1.7, from an analysis of correlations between regional cerebral blood flow (rCBF) and clinical parameters of non-demented Parkinson's disease (PD) patients by SPM8. We evaluated Anatomy 1.7 of the SPM toolbox compared to 'Talairach Daemon' (TD) Client 2.4.2 software. One hundred and thirty-six patients (mean age 60.0 ± 9.09 years; 73 women and 63 men) with non-demented PD were selected. Tc-99m-HMPAO brain single-photon emission computed tomography (SPECT) scans were performed on the patients using a two-head gamma-camera. We analyzed the brain images of PD patients by SPM8 and found the anatomical names of regions whose rCBF perfusion correlated with the clinical parameters using TD Client 2.4.2 and Anatomy 1.7. SPM8 provided a correlation coefficient between clinical parameters and cerebral hypoperfusion by a simple regression method. The clinical parameters were age, duration of disease, education period, Hoehn and Yahr (H&Y) stage, and Korean mini-mental state examination (K-MMSE) score. Age was correlated with cerebral perfusion in the Brodmann area (BA) 6 and BA 3b assigned by Anatomy 1.7 and in BA 6 and the pyramis in gray matter by TD Client 2.4.2 with p < 0.001 uncorrected. Significantly correlated regions were also assigned in the left and right lobules VI (Hem) with duration of disease, in left and right lobules VIIa crus I (Hem) with education, in the left insula (Ig2) and left and right lobules VI (Hem) with H&Y, and in BA 4a and 6 with K-MMSE score, with p < 0.05 uncorrected, by Anatomy 1.7, respectively. Most areas of correlation were overlapped by the two different anatomical labeling methods, but some correlation areas were found with different names. Age was the clinical parameter most significantly correlated with rCBF.
TD Client found the exact anatomical name by the peak intensity position of the cluster while Anatomy 1.7 of SPM8 toolbox, using the cyto-architectonic probability maps, assigned the anatomical name by percentage value of the probability.
The values jury to aid natural resource decisions
Thomas C. Brown; George L. Peterson; Bruce E. Tonn
1995-01-01
Congressional legislation emphasizes that public resource allocation should reflect the values citizens assign to those resources. Yet, information about assigned values and preferences of members of the public, including economic measures of value, required by decision makers is often incomplete or unavailable. Existing sources of information about the public's...
Dornic, N; Ficheux, A S; Bernard, A; Roudot, A C
2017-08-01
The notes of guidance for the testing of cosmetic ingredients and their safety evaluation by the Scientific Committee on Consumer Safety (SCCS) is a document dedicated to ensuring the safety of European consumers. It contains useful data for risk assessment, such as default values for Skin Surface Area (SSA). A more in-depth study of anthropometric data across Europe reveals considerable variations. The default SSA value was derived from a study on the Dutch population, which is known to be one of the tallest nations in the world. This value could be inadequate for shorter populations of Europe. Data were collected in a survey on cosmetic consumption in France. Probabilistic treatment of these data and analysis of the case of methylisothiazolinone, a sensitizer recently evaluated by a deterministic approach submitted to the SCCS, suggest that the default value for SSA used in the quantitative risk assessment might not be relevant for a significant share of the French female population. Other female populations of Southern Europe may also be excluded. This is of importance given that some studies show an increasing risk of developing skin sensitization among women. The disparities in anthropometric data across Europe should be taken into consideration. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Degtyar, V. G.; Kalashnikov, S. T.; Mokin, Yu. A.
2017-10-01
The paper considers problems of analyzing the aerodynamic properties (ADP) of reentry vehicles (RV) as blunted bodies of revolution with small random surface distortions. The issues of mathematical simulation of surface distortions, selection of tools for predicting the ADPs of shaped bodies, evaluation of different types of ADP variations, and their adaptation for dynamic problems are analyzed. The possibilities of deterministic and probabilistic approaches to the evaluation of ADP variations are considered. The practical value of the probabilistic approach is demonstrated. Examples of extremal deterministic evaluations of ADP variations for a sphere and a sharp cone are given.
A probabilistic tornado wind hazard model for the continental United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hossain, Q; Kimball, J; Mensing, R
A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within the CONUS is assumed to be a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within the CONUS. The hazard model is an areal probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
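The Poisson occurrence assumption can be sketched in a few lines (illustrative only; the regional rate and the sampler are not taken from the report):

```python
import math
import random

def sample_poisson(rate, rng):
    # Knuth's multiplicative method: multiply uniform draws until the
    # product drops below exp(-rate); the count taken is Poisson(rate)
    limit = math.exp(-rate)
    count, product = 0, 1.0
    while True:
        product *= rng.random()
        if product <= limit:
            return count
        count += 1

rng = random.Random(7)
# hypothetical regional rate of 2.5 tornado touchdowns per year
yearly_counts = [sample_poisson(2.5, rng) for _ in range(20000)]
```

Simulating many years this way and intersecting sampled damage rectangles with the facility footprint is the basic shape of an areal hazard calculation.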
Real-time adaptive aircraft scheduling
NASA Technical Reports Server (NTRS)
Kolitz, Stephan E.; Terrab, Mostafa
1990-01-01
One of the most important functions of any air traffic management system is the assignment of ground-holding times to flights, i.e., the determination of whether and by how much the take-off of a particular aircraft headed for a congested part of the air traffic control (ATC) system should be postponed in order to reduce the likelihood and extent of airborne delays. An analysis is presented for the fundamental case in which flights from many destinations must be scheduled for arrival at a single congested airport; the formulation is also useful in scheduling the landing of airborne flights within the extended terminal area. A set of approaches is described for addressing a deterministic and a probabilistic version of this problem. For the deterministic case, where airport capacities are known and fixed, several models were developed with associated low-order polynomial-time algorithms. For general delay cost functions, these algorithms find an optimal solution. Under a particular natural assumption regarding the delay cost function, an extremely fast (O(n ln n)) algorithm was developed. For the probabilistic case, using an estimated probability distribution of airport capacities, a model was developed with an associated low-order polynomial-time heuristic algorithm with useful properties.
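For the deterministic single-airport case, a first-come-first-served assignment of flights to capacity slots is a minimal sketch of the ground-holding idea (not the paper's optimal algorithms; the FCFS rule and names are illustrative, and at least as many slots as flights are assumed):

```python
def assign_ground_holds(requested, slot_times):
    # Serve flights in order of requested arrival time (FCFS) and hold
    # each on the ground until its assigned capacity slot opens.
    # Assumes len(slot_times) >= len(requested).
    order = sorted(range(len(requested)), key=lambda i: requested[i])
    holds = [0.0] * len(requested)
    for slot, i in zip(sorted(slot_times), order):
        holds[i] = max(0.0, slot - requested[i])
    return holds
```

Under linear delay costs, transferring anticipated airborne delay to the ground this way is what makes the scheduling problem tractable; the paper's algorithms optimize the assignment for general cost functions.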
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Review of probabilistic analysis of dynamic response of systems with random parameters
NASA Technical Reports Server (NTRS)
Kozin, F.; Klosner, J. M.
1989-01-01
The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. Dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures which require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities appear as coefficients, determining the exact distributions will be difficult at best, so certain approximations have to be made. A number of available techniques are discussed, including the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems with approximation by linear systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, T.E.; Enoch, K.G.
2002-08-01
CalTOX has been developed as a set of spreadsheet models and spreadsheet data sets to assist in assessing human exposures from continuous releases to multiple environmental media, i.e. air, soil, and water. It has also been used for waste classification and for setting soil clean-up levels at uncontrolled hazardous waste sites. The modeling components of CalTOX include a multimedia transport and transformation model, multi-pathway exposure scenario models, and add-ins to quantify and evaluate uncertainty and variability. All parameter values used as inputs to CalTOX are distributions, described in terms of mean values and a coefficient of variation, rather than as point estimates or plausible upper values such as most other models employ. This probabilistic approach allows both sensitivity and uncertainty analyses to be directly incorporated into the model operation. This manual provides CalTOX users with a brief overview of the CalTOX spreadsheet model and provides instructions for using the spreadsheet to make deterministic and probabilistic calculations of source-dose-risk relationships.
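Describing an input by a mean and coefficient of variation maps naturally onto a sampling distribution; a sketch of that conversion for a lognormal (the lognormal choice here is an assumption for illustration, not necessarily CalTOX's form for every parameter):

```python
import math
import random

def lognormal_from_mean_cv(mean, cv, rng):
    # Convert an arithmetic mean and coefficient of variation into the
    # parameters (mu, sigma) of the underlying normal, then sample.
    sigma2 = math.log(1.0 + cv ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return rng.lognormvariate(mu, math.sqrt(sigma2))

rng = random.Random(11)
# hypothetical input: mean 10 (arbitrary units) with a 30% coefficient of variation
samples = [lognormal_from_mean_cv(10.0, 0.3, rng) for _ in range(20000)]
```

Feeding every input parameter as such a distribution, rather than a point value, is what lets sensitivity and uncertainty analysis fall directly out of repeated model evaluation.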
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, J.D.; Ott, E.; Yorke, J.A.
Dimension is perhaps the most basic property of an attractor. In this paper we discuss a variety of different definitions of dimension, compute their values for a typical example, and review previous work on the dimension of chaotic attractors. The relevant definitions of dimension are of two general types, those that depend only on metric properties, and those that depend on probabilistic properties (that is, they depend on the frequency with which a typical trajectory visits different regions of the attractor). Both our example and the previous work that we review support the conclusion that all of the probabilistic dimensions take on the same value, which we call the dimension of the natural measure, and all of the metric dimensions take on a common value, which we call the fractal dimension. Furthermore, the dimension of the natural measure is typically equal to the Lyapunov dimension, which is defined in terms of Lyapunov numbers, and thus is usually far easier to calculate than any other definition. Because it is computable and more physically relevant, we feel that the dimension of the natural measure is more important than the fractal dimension.
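The Lyapunov dimension mentioned above is given by the Kaplan-Yorke formula: with exponents ordered λ1 ≥ λ2 ≥ …, take the largest k whose partial sum stays non-negative and add the fractional remainder. A minimal sketch (the exponent values in the test are illustrative, Lorenz-like numbers, not from this paper):

```python
def lyapunov_dimension(exponents):
    # Kaplan-Yorke estimate: k is the largest index whose partial sum of
    # ordered exponents stays non-negative; add the fractional part
    # sum(lambda_1..lambda_k) / |lambda_{k+1}|.
    lam = sorted(exponents, reverse=True)
    partial = 0.0
    for k, l in enumerate(lam):
        if partial + l < 0:
            return k + partial / abs(l)
        partial += l
    return float(len(lam))
```

Because it needs only the Lyapunov exponents, this estimate is usually far cheaper than box-counting or correlation-sum methods.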
Application of a time probabilistic approach to seismic landslide hazard estimates in Iran
NASA Astrophysics Data System (ADS)
Rajabi, A. M.; Del Gaudio, V.; Capolongo, D.; Khamehchiyan, M.; Mahdavifar, M. R.
2009-04-01
Iran is a country located in a tectonically active belt and is prone to earthquakes and related phenomena. In recent years, several earthquakes have caused many fatalities and much damage to facilities, e.g. the Manjil (1990), Avaj (2002), Bam (2003) and Firuzabad-e-Kojur (2004) earthquakes. These earthquakes generated many landslides. For instance, catastrophic landslides triggered by the Manjil Earthquake (Ms = 7.7) in 1990 buried the village of Fatalak, killed more than 130 people and cut many important roads and other lifelines, resulting in major economic disruption. In general, earthquakes in Iran have been concentrated in two major zones with different seismicity characteristics: one is the region of Alborz and Central Iran and the other is the Zagros Orogenic Belt. Understanding where seismically induced landslides are most likely to occur is crucial to reducing property damage and loss of life in future earthquakes. For this purpose a time probabilistic approach for earthquake-induced landslide hazard at regional scale, proposed by Del Gaudio et al. (2003), has been applied to the whole Iranian territory to provide the basis of hazard estimates. This method consists of evaluating the recurrence of seismically induced slope failure conditions inferred from Newmark's model. First, by adopting Arias intensity to quantify seismic shaking and using different Arias attenuation relations for the Alborz - Central Iran and Zagros regions, well-established methods of seismic hazard assessment, based on the Cornell (1968) method, were employed to obtain the occurrence probabilities for different levels of seismic shaking in a time interval of interest (50 years). Then, following Jibson (1998), empirical formulae specifically developed for Alborz - Central Iran and Zagros were used to represent, according to Newmark's model, the relation linking Newmark's displacement Dn to Arias intensity Ia and to slope critical acceleration ac.
These formulae were employed to evaluate the slope critical acceleration (Ac)x for which a prefixed probability exists that seismic shaking would result in a Dn value equal to a threshold x whose exceedance would cause landslide triggering. The obtained ac values represent the minimum slope resistance required to keep the probability of seismic landslide triggering within the prefixed value. In particular we calculated the spatial distribution of (Ac)x for x thresholds of 10 and 2 cm in order to represent triggering conditions for coherent slides (e.g., slumps, block slides, slow earth flows) and disrupted slides (e.g., rock falls, rock slides, rock avalanches), respectively. Then we produced a probabilistic national map that shows the spatial distribution of (Ac)10 and (Ac)2 for a 10% probability of exceedance in 50 years, a level of hazard equal to that commonly used for building codes. The spatial distribution of the calculated (Ac)x values can be compared with the in situ actual ac values of specific slopes to estimate whether these slopes have a significant probability of failing under seismic action in the future. As an example of a possible application of this kind of time probabilistic map to hazard estimates, we compared the values obtained for the Manjil region with a GIS map providing the spatial distribution of estimated ac values in the same region. The spatial distribution of slopes characterized by ac < (Ac)10 was then compared with the spatial distribution of the major landslides of coherent type triggered by the Manjil earthquake. This comparison provides indications on the potential, problems and limits of the experimented approach for the study area.
References: Cornell, C.A., 1968: Engineering seismic risk analysis, Bull. Seism. Soc. Am., 58, 1583-1606. Del Gaudio, V., Wasowski, J., & Pierri, P., 2003: An approach to time probabilistic evaluation of seismically-induced landslide hazard, Bull. Seism. Soc. Am., 93, 557-569. Jibson, R.W., Harp, E.L., and Michael, J.A., 1998: A method for producing digital probabilistic seismic landslide hazard maps: an example from the Los Angeles, California, area, U.S. Geological Survey Open-File Report 98-113, Golden, Colorado, 17 pp.
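The regression linking Dn to Ia and ac has the general form of Jibson et al.'s (1998) equation; a sketch using the widely cited Los Angeles coefficients (an assumption for illustration — the region-specific refits for Alborz - Central Iran and Zagros used in the study differ):

```python
import math

def newmark_displacement_cm(ia_ms, ac_g):
    # Jibson et al. (1998)-style regression: log10(Dn) is linear in
    # log10(Ia) and log10(ac).  Ia in m/s, ac in g, Dn in cm.
    log_dn = 1.521 * math.log10(ia_ms) - 1.993 * math.log10(ac_g) - 1.546
    return 10.0 ** log_dn
```

Inverting this relation for a fixed displacement threshold x and a fixed exceedance probability of Ia is what yields the (Ac)x maps described above: stronger shaking or weaker slopes (smaller ac) give larger Dn.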
Probabilistic Model for Untargeted Peak Detection in LC-MS Using Bayesian Statistics.
Woldegebriel, Michael; Vivó-Truyols, Gabriel
2015-07-21
We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography-mass spectrometry (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are affected only by noise. The use of probabilities contrasts with the traditional approach, in which a binary answer is given by relying on a threshold. By contrast, with the Bayesian peak detection presented here, the probability values can be further propagated into other preprocessing steps, which will increase (or decrease) the importance of chromatographic regions in the final results. The present work is based on the use of the statistical theory of component overlap from Davis and Giddings (Davis, J. M.; Giddings, J. Anal. Chem. 1983, 55, 418-424) as the prior probability in the Bayesian formulation. The algorithm was tested on LC-MS Orbitrap data and was able to successfully distinguish chemical noise from actual peaks without any data preprocessing.
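The core of a probabilistic (rather than thresholded) detection decision is Bayes' rule applied pointwise; a toy sketch with Gaussian likelihoods (the actual algorithm's likelihood models and its Davis-Giddings overlap prior are more elaborate, and all parameter values here are illustrative):

```python
import math

def gaussian(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_peak(intensity, prior_peak, peak_mean, noise_mean, sd):
    # Bayes' rule: P(peak | x) = P(x | peak) P(peak) / evidence,
    # with two competing hypotheses ("peak" vs "noise") for the point.
    like_peak = gaussian(intensity, peak_mean, sd) * prior_peak
    like_noise = gaussian(intensity, noise_mean, sd) * (1 - prior_peak)
    return like_peak / (like_peak + like_noise)
```

Because the output is a probability rather than a yes/no flag, downstream preprocessing can weight each chromatographic region by it instead of committing early to a threshold.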
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Boyce, Lola
1995-01-01
The development of methodology for probabilistic material strength degradation is described. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep, and thermal fatigue. Results, in the form of cumulative distribution functions, illustrate the sensitivity of lifetime strength to the current value of any effect. In addition, verification studies comparing predictions of high-cycle mechanical fatigue and high-temperature effects with experiments are presented. Results from this limited verification study strongly support the representation of material degradation by randomized multifactor interaction models.
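The postulated multifactor equation has the general product form used in multifactor interaction models of this kind (a sketch under that assumption; variable names are illustrative, and PROMISS randomizes the terms rather than evaluating them deterministically as here):

```python
def lifetime_strength_ratio(effects):
    # Multifactor interaction form: S/S0 = product over effects of
    # ((A_ult - A) / (A_ult - A_ref)) ** a, where A is the current value
    # of the effect, A_ult the ultimate value at which strength vanishes,
    # A_ref the reference value, and a an empirical exponent.
    ratio = 1.0
    for a_ult, a_ref, a_cur, exponent in effects:
        ratio *= ((a_ult - a_cur) / (a_ult - a_ref)) ** exponent
    return ratio
```

Each factor equals 1 at its reference value and falls toward 0 as the effect approaches its ultimate value, so the product directly expresses how several effects jointly erode lifetime strength.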
Probabilistic forecasting for extreme NO2 pollution episodes.
Aznarte, José L
2017-10-01
In this study, we investigate the suitability of quantile regression for predicting extreme concentrations of NO2. Contrary to the usual point forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows for the prediction of the full probability distribution, which in turn makes it possible to build models specifically fitted to the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measures, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we show that the predictions are accurate, reliable and sharp. In addition, we study the relative importance of the independent variables involved, and show how the variables important for the median quantile differ from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceedance of thresholds, which is a simple and comprehensible way to present probabilistic forecasts while maximizing their usefulness. Copyright © 2017 Elsevier Ltd. All rights reserved.
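Given a grid of predicted quantiles, the probability of exceeding a pollution threshold follows from interpolating the predictive CDF; a sketch (linear interpolation between quantiles, and flat tails beyond the grid, are assumptions for illustration):

```python
def prob_exceedance(levels, values, threshold):
    # Piecewise-linear predictive CDF through the (value, level) pairs;
    # exceedance probability = 1 - CDF(threshold).
    pairs = sorted(zip(values, levels))
    xs = [v for v, _ in pairs]
    ps = [p for _, p in pairs]
    if threshold <= xs[0]:
        return 1.0 - ps[0]
    if threshold >= xs[-1]:
        return 1.0 - ps[-1]
    for i in range(1, len(xs)):
        if threshold <= xs[i]:
            t = (threshold - xs[i - 1]) / (xs[i] - xs[i - 1])
            return 1.0 - (ps[i - 1] + t * (ps[i] - ps[i - 1]))
```

This is why quantile forecasts are directly actionable: an air-quality alert rule can be stated as "warn when the exceedance probability of the regulatory threshold passes 30%", something a single point forecast cannot express.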
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20%. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error.
Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
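Once m- and u-probabilities have been estimated (here by EM on the Bloom-filter comparisons), pairwise scores follow the standard Fellegi-Sunter field weights, against which the threshold is applied; a sketch:

```python
import math

def match_weight(agree, m, u):
    # Fellegi-Sunter field weight: log2(m/u) on agreement,
    # log2((1-m)/(1-u)) on disagreement.  m = P(agree | true match),
    # u = P(agree | non-match).
    return math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))

def record_score(agreements, m_probs, u_probs):
    # Total comparison weight for a candidate record pair; pairs scoring
    # above the chosen threshold are declared links.
    return sum(match_weight(a, m, u)
               for a, m, u in zip(agreements, m_probs, u_probs))
```

With one agreeing and one disagreeing field at symmetric probabilities the weights cancel, which illustrates why threshold placement, not just the probabilities, determines linkage quality.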
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nourzadeh, H; Watkins, W; Siebers, J
Purpose: To determine if auto-contour-based and manual-contour-based plans differ when evaluated with respect to probabilistic coverage metrics and biological model endpoints for prostate IMRT. Methods: Manual and auto-contours were created for 149 CT image sets acquired from 16 unique prostate patients. A single physician manually contoured all images. Auto-contouring was completed utilizing Pinnacle's Smart Probabilistic Image Contouring Engine (SPICE). For each CT, three different 78 Gy/39 fraction 7-beam IMRT plans were created: PD with drawn ROIs, PAS with auto-contoured ROIs, and PM with auto-contoured OARs and the manually drawn target. For each plan, 1000 virtual treatment simulations, with a different sampled systematic error for each simulation and a different sampled random error for each fraction, were performed using our in-house GPU-accelerated robustness analyzer tool, which reports the statistical probability of achieving dose-volume metrics, NTCP, TCP, and the probability of achieving the optimization criteria for both auto-contoured (AS) and manually drawn (D) ROIs. Metrics are reported for all possible cross-evaluation pairs of ROI types (AS, D) and planning scenarios (PD, PAS, PM). The Bhattacharyya coefficient (BC) is calculated to measure the PDF similarities for the dose-volume metrics, NTCP, TCP, and objectives with respect to the manually drawn contour evaluated on the base plan (D-PD). Results: We observe high BC values (BC≥0.94) for all OAR objectives. BC values of the max dose objective on the CTV also signify high resemblance (BC≥0.93) between the distributions. On the other hand, BC values for the CTV's D95 and Dmin objectives are small for AS-PM and AS-PD. NTCP distributions are similar across all evaluation pairs, while TCP distributions of AS-PM and AS-PD sustain variations of up to 6% compared to the other evaluated pairs. Conclusion: No significant probabilistic differences are observed in the metrics when auto-contoured OARs are used. The prostate auto-contour needs improvement to achieve clinically equivalent plans.
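The Bhattacharyya coefficient used above to compare the per-metric distributions is simply BC = Σ√(p_i q_i) over common histogram bins; a sketch:

```python
import math

def bhattacharyya(p, q):
    # BC = sum_i sqrt(p_i * q_i) for two discrete PDFs on the same bins;
    # 1.0 means identical distributions, 0.0 means disjoint support.
    return sum(math.sqrt(a * b) for a, b in zip(p, q))
```

Values near 1 (as reported for the OAR objectives, BC≥0.94) mean the simulated metric distributions are nearly interchangeable between contour sets.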
Probability in reasoning: a developmental test on conditionals.
Barrouillet, Pierre; Gauffroy, Caroline
2015-04-01
Probabilistic theories have been claimed to constitute a new paradigm for the psychology of reasoning. A key assumption of these theories is captured by what they call the Equation: the hypothesis that the meaning of the conditional is probabilistic in nature and that the probability of If p then q is the conditional probability, in such a way that P(if p then q) = P(q|p). Using the probabilistic truth-table task, in which participants are required to evaluate the probability of If p then q sentences, the present study explored the pervasiveness of the Equation across ages (from early adolescence to adulthood), types of conditionals (basic, causal, and inducements) and contents. The results reveal that the Equation is a late developmental achievement, endorsed only by a narrow majority of educated adults for certain types of conditionals depending on the content they involve. Age-related changes in evaluating the probability of all the conditionals studied closely mirror the development of truth-value judgements observed in previous studies with traditional truth-table tasks. We argue that our modified mental model theory can account for this development, and hence for the findings related to the probability task, which consequently do not support the probabilistic approach to human reasoning over alternative theories. Copyright © 2014 Elsevier B.V. All rights reserved.
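The Equation P(if p then q) = P(q|p) can be made concrete with a frequency example (the card-style scenario below is illustrative, not one of the study's materials):

```python
from fractions import Fraction

def conditional_probability(cases):
    # cases: equally likely (p, q) truth-value pairs;
    # P(q | p) = #(p and q) / #(p) -- the q,p cells, ignoring not-p cases.
    both = sum(1 for p, q in cases if p and q)
    antecedent = sum(1 for p, q in cases if p)
    return Fraction(both, antecedent)

# "if the card shows an even number (p), then it is red (q)" over an
# illustrative pack: 3 even-red, 1 even-black, and 4 odd cards
pack = ([(True, True)] * 3 + [(True, False)]
        + [(False, True)] * 2 + [(False, False)] * 2)
```

Under the Equation, a reasoner should judge the conditional's probability as 3/4, disregarding the four odd cards; truth-functional accounts predict a different answer, which is what the truth-table task discriminates.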
Probabilistic liver atlas construction.
Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E
2017-01-13
Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations involving only the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability of a location being covered by the liver. Furthermore, all methods to build an atlas involve prior coregistration of the sample of available shapes. The influence of the geometrical transformation adopted for registration on the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the preceding coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible probability values when used as an aid in the segmentation of new cases.
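The simple location-wise estimate that the proposed method improves upon is just the voxel-wise coverage frequency across coregistered segmentations; a sketch (2-D slices for brevity):

```python
def probability_atlas(masks):
    # Voxel value = fraction of coregistered binary segmentations that
    # cover the voxel: the simple per-location estimate the abstract
    # contrasts with its generalized-linear-model approach.
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[sum(m[r][c] for m in masks) / n for c in range(cols)]
            for r in range(rows)]
```

Because each voxel is estimated in isolation, this baseline ignores spatial coherence, which is one motivation for replacing it with a model that borrows information across locations.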
NASA Astrophysics Data System (ADS)
Caglar, Mehmet Umut; Pal, Ranadip
2011-03-01
The central dogma of molecular biology states that "information cannot be transferred back from protein to either protein or nucleic acid". However, this assumption is not exactly correct in most cases: there are many feedback loops and interactions between different levels of the system. These types of interactions are hard to analyze due to the lack of cell-level data and the probabilistic, nonlinear nature of the interactions. Several models are widely used to analyze and simulate these types of nonlinear interactions. Stochastic Master Equation (SME) models capture the probabilistic nature of the interactions in a detailed manner, at a high computational cost. On the other hand, Probabilistic Boolean Network (PBN) models give a coarse-scale picture of the stochastic processes, at a lower computational cost. Differential Equation (DE) models give the time evolution of the mean values of the processes in a highly cost-effective way. Understanding the relations between the predictions of these models is important for understanding the reliability of simulations of genetic regulatory networks. In this work the success of the mapping between SME, PBN and DE models is analyzed, and the accuracy and effectiveness of the control policies generated using the PBN and DE models are compared.
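A minimal probabilistic Boolean network simulator shows the coarse-scale picture contrasted above with SME and DE models (the two-gene network and its rules are illustrative, not from the work):

```python
import random

def step_pbn(state, predictors, rng):
    # Each gene draws one of its predictor functions according to its
    # selection probabilities, then all genes update synchronously.
    new_state = []
    for funcs, probs in predictors:
        f = rng.choices(funcs, weights=probs)[0]
        new_state.append(f(state))
    return tuple(new_state)

# two-gene toy network: gene 0 = NOT gene 1 (always); gene 1 copies gene 0
predictors = [
    ([lambda s: int(not s[1])], [1.0]),
    ([lambda s: s[0]], [1.0]),
]
```

An SME would track the full probability of every molecular count, and a DE only the mean trajectory; the PBN sits in between, keeping stochastic rule selection over a binary state space.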
Zhang, Kejiang; Achari, Gopal; Pei, Yuansheng
2010-10-01
Different types of uncertain information (linguistic, probabilistic, and possibilistic) exist in site characterization. Their representation and propagation significantly influence the management of contaminated sites. In the absence of a framework with which to properly represent and integrate these quantitative and qualitative inputs, decision makers cannot fully take advantage of the available and necessary information to identify all the plausible alternatives. A systematic methodology was developed in the present work to incorporate linguistic, probabilistic, and possibilistic information into the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a subgroup of Multi-Criteria Decision Analysis (MCDA) methods, for ranking contaminated sites. The identification of criteria based on the paradigm of comparative risk assessment provides a rationale for risk-based prioritization. Uncertain linguistic, probabilistic, and possibilistic information identified in characterizing contaminated sites can be properly represented as numerical values, intervals, probability distributions, fuzzy sets or possibility distributions, and linguistic variables, according to its nature. These different kinds of representation are first transformed into a 2-tuple linguistic representation domain. The propagation of hybrid uncertainties is then carried out in the same domain. This methodology can use the original site information directly as much as possible. The case study shows that this systematic methodology provides more reasonable results. © 2010 SETAC.
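After the uncertain inputs are transformed into a common domain, PROMETHEE ranks alternatives by net outranking flows; a crisp-valued sketch of that final ranking step (the "usual" preference function and the weights are illustrative, and the 2-tuple linguistic machinery is omitted):

```python
def promethee_net_flows(alternatives, weights, preference):
    # Net outranking flow: phi(a) = 1/(n-1) * sum over b != a of
    # (pi(a, b) - pi(b, a)), where pi(a, b) is the weighted sum of
    # per-criterion preference degrees of a over b.
    n = len(alternatives)

    def pi(a, b):
        return sum(w * preference(x, y) for w, x, y in zip(weights, a, b))

    flows = []
    for i, a in enumerate(alternatives):
        phi = sum(pi(a, b) - pi(b, a)
                  for j, b in enumerate(alternatives) if j != i)
        flows.append(phi / (n - 1))
    return flows

usual = lambda x, y: 1.0 if x > y else 0.0  # the "usual" preference function
```

Sites are then prioritized by sorting on the net flow; the methodology's contribution is in feeding this step consistent values derived from heterogeneous uncertain inputs.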
Developing and Implementing the Data Mining Algorithms in RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
NASA Astrophysics Data System (ADS)
Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan
2017-09-01
Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset, and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts the projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel-based Support Vector Machines (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
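The parallel-analysis step for choosing how many projection vectors to keep compares the real eigenvalues against those of random data of the same shape (Horn's method); a sketch assuming NumPy (the PPCA fitting and SVM stages are omitted, and the synthetic data in the test are illustrative):

```python
import numpy as np

def parallel_analysis(X, n_iter=50, seed=0):
    # Horn's parallel analysis: retain components whose sample covariance
    # eigenvalues exceed the mean eigenvalues obtained from random data
    # with the same shape and per-column standard deviations.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    real = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    rand = np.zeros(d)
    for _ in range(n_iter):
        R = rng.standard_normal((n, d)) * X.std(axis=0, ddof=1)
        rand += np.sort(np.linalg.eigvalsh(np.cov(R, rowvar=False)))[::-1]
    rand /= n_iter
    return int(np.sum(real > rand))
```

The retained count fixes the reduced feature dimension handed to the downstream RBF-kernel SVM classifier.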
Monaghan, Kieran A.
2016-01-01
Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes are a common feature of indicator systems; in some they represent a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism.
At any particular site, the net bias is a probabilistic function of the sample data, resulting in an error variance around an average deviation. By following standardized protocols and assigning precise reference conditions, the error variance of the test-site:reference ratio can be measured and used to estimate the accuracy of the resulting assessment. PMID:27392036
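To make the abundance-weighting issue concrete, here is a minimal sketch of a BMWP/ASPT-style calculation alongside an abundance-weighted variant. The family tolerance scores and sample counts are hypothetical, chosen only to show how over-collecting small, tolerant taxa drags an abundance-weighted value below the presence-only ASPT; this is not the actual BMWP score table.

```python
# Hypothetical BMWP-style tolerance scores (higher = more pollution-sensitive)
scores = {"Heptageniidae": 10, "Baetidae": 4, "Chironomidae": 2, "Asellidae": 3}

def bmwp_aspt(families):
    """Presence-only index: BMWP = sum of scores of families present,
    ASPT = BMWP divided by the number of scoring families."""
    present = [f for f in set(families) if f in scores]
    bmwp = sum(scores[f] for f in present)
    aspt = bmwp / len(present) if present else 0.0
    return bmwp, aspt

def abundance_weighted_index(counts):
    """Abundance-weighted mean tolerance score: over-collected small,
    tolerant taxa pull the value toward their low scores (negative bias)."""
    total = sum(counts.values())
    return sum(scores[f] * n for f, n in counts.items() if f in scores) / total

sample = {"Heptageniidae": 5, "Baetidae": 20, "Chironomidae": 120, "Asellidae": 15}
bmwp, aspt = bmwp_aspt(sample)               # presence-only values
weighted = abundance_weighted_index(sample)  # dominated by the tolerant midges
```

With these hypothetical counts the abundance-weighted value falls well below the ASPT, illustrating the direction of bias discussed above.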
NASA Astrophysics Data System (ADS)
Shitzer, Avraham; Arens, Edward; Zhang, Hui
2016-07-01
The assignments of basal metabolic rates (BMR), basal cardiac output (BCO), and basal blood perfusion rates (BBPR) were compared in nine multi-compartment, whole-body thermoregulation models. The data are presented at three levels of detail: total body, specific body regions, and regional body tissue layers. Differences in the assignment of these quantities among the compared models increased with the level of detail, in the above order. The range of variability in total-body BMR was 6.5% relative to the lowest value, with a mean of 84.3 ± 2 W; for BCO it was 8%, with a mean of 4.70 ± 0.13 l/min. The least variability among the body regions, determined by the ratio of the standard deviation to the mean, is seen in the combined torso (shoulders, thorax, and abdomen: ±7.8% BMR and ±5.9% BBPR) and in the combined head (head, face, and neck: ±9.9% BMR and ±10.9% BBPR). Much more variability is apparent in the extremities, the largest being in the BMR of the feet (±117%), followed by the BBPR of the arms (±61.3%). Among the tissue layers, most bone layers were assigned zero BMR and BBPR, except in the shoulders and the extremities, which were assigned non-zero values in a number of models. The next-lowest values were assigned to the fat layers, with occasional zero values. Skin basal values were invariably non-zero but very low in certain models, e.g., BBPR in the feet and the hands. Muscle layers were invariably assigned high values, with the highest found in the thorax, abdomen, and legs. The brain, lung, and viscera layers were assigned the highest values of both basal quantities, with the brain layers showing rather tight ranges of variability in both. Average basal values of the "time-seasoned" models presented in this study could be useful as a first step in future modeling efforts, subject to appropriate adjustment to conform to the most recent reliable data.
Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections
NASA Astrophysics Data System (ADS)
Wakazuki, Y.
2015-12-01
A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular value decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes equal to their standard deviations, were extracted and added to the ensemble mean of the climatological increments. The analyzed multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis dataset. This data handling can be regarded as an advanced form of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of GCM simulations yielded approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by RCMs for each mode were used to estimate the response to that mode's perturbation. For the probabilistic analysis, climatological variables of the RCMs were assumed to respond linearly to the multiple modal perturbations, although non-linearity was seen for local-scale rainfall. Temperature probabilities could be estimated with two-mode perturbation simulations, for which the number of future-climate RCM simulations is five. Local-scale rainfall, on the other hand, required four-mode simulations, for which the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
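The mode-extraction step described above can be sketched with numpy's SVD: decompose the deviations of the per-model increments from their ensemble mean, then form boundary conditions as the mean plus or minus one-standard-deviation perturbations along the leading modes. All data here are synthetic, and the scaling convention in `mode_perturbation` is illustrative, not Wakazuki's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical climatological increments (future minus present) from 8 GCMs
# sampled at 50 grid points: one row per model.
increments = rng.standard_normal((8, 50))

mean_inc = increments.mean(axis=0)
anom = increments - mean_inc                 # deviations from ensemble mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

def mode_perturbation(k):
    """Spatial pattern of mode k, scaled to one standard deviation of the
    models' coefficients along that mode."""
    coeff_std = np.std(U[:, k] * s[k])       # spread of models along mode k
    return coeff_std * Vt[k]

# Boundary forcing for the RCM: ensemble mean +/- the leading mode
boundary_plus = mean_inc + mode_perturbation(0)
boundary_minus = mean_inc - mode_perturbation(0)
```

Each additional retained mode adds a +/- pair of RCM runs, which is why two modes give five future-climate simulations (the mean plus four perturbed runs) and four modes give nine.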
Quantum Experimental Data in Psychology and Economics
NASA Astrophysics Data System (ADS)
Aerts, Diederik; D'Hooghe, Bart; Haven, Emmanuel
2010-12-01
We prove a theorem showing that a collection of experimental data of probabilistic weights, related to decisions with respect to situations and their disjunction, cannot be modeled within a classical probabilistic weight structure when the data contain the effect referred to as the ‘disjunction effect’ in psychology. We identify different experimental situations in psychology, more specifically in concept theory and decision theory, and in economics (namely situations where Savage’s Sure-Thing Principle is violated) where the disjunction effect appears, and we point out the common nature of the effect. We analyze how our theorem constitutes a no-go theorem for classical probabilistic weight structures for common experimental data when the disjunction effect affects the values of these data. We put forward a simple geometric criterion that reveals the non-classicality of the considered probabilistic weights, and we illustrate it by means of experimentally measured membership weights of items with respect to pairs of concepts and their disjunctions. The violation of the classical probabilistic weight structure is closely analogous to the violation of the well-known Bell inequalities studied in quantum mechanics. The no-go theorem we prove in the present article has a status analogous to the well-known no-go theorems for hidden-variable theories in quantum mechanics with respect to experimental data obtained in quantum laboratories. Our analysis puts forward a strong argument in favor of using the quantum formalism to model the psychological experimental data considered in this paper.
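The classical-representability condition behind such a criterion can be stated compactly: a disjunction weight admits a classical (Kolmogorovian) probability model only if it lies between the maximum of the two component weights and their sum (capped at 1). The sketch below checks that condition on hypothetical membership weights; the numbers are invented for illustration, not taken from the measured data in the article.

```python
def classically_representable(mu_a, mu_b, mu_or):
    """True iff the triple of weights fits a classical probability model:
    max(mu_a, mu_b) <= mu_or <= min(1, mu_a + mu_b)."""
    return max(mu_a, mu_b) <= mu_or <= min(1.0, mu_a + mu_b)

# Hypothetical membership weights of an item with respect to two concepts
# and their disjunction; the second triple shows the disjunction effect,
# where the disjunction weight falls below the larger component weight.
print(classically_representable(0.9, 0.5, 0.95))  # fits a classical model
print(classically_representable(0.9, 0.5, 0.8))   # non-classical: mu_or < max
```

Weights violating either inequality cannot come from any single classical probability space, which is the content of the no-go theorem.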
antiSMASH 3.0—a comprehensive resource for the genome mining of biosynthetic gene clusters
Blin, Kai; Duddela, Srikanth; Krug, Daniel; Kim, Hyun Uk; Bruccoleri, Robert; Lee, Sang Yup; Fischbach, Michael A; Müller, Rolf; Wohlleben, Wolfgang; Breitling, Rainer; Takano, Eriko
2015-01-01
Microbial secondary metabolism constitutes a rich source of antibiotics, chemotherapeutics, insecticides and other high-value chemicals. Genome mining of gene clusters that encode the biosynthetic pathways for these metabolites has become a key methodology for novel compound discovery. In 2011, we introduced antiSMASH, a web server and stand-alone tool for the automatic genomic identification and analysis of biosynthetic gene clusters, available at http://antismash.secondarymetabolites.org. Here, we present version 3.0 of antiSMASH, which has undergone major improvements. A full integration of the recently published ClusterFinder algorithm now allows using this probabilistic algorithm to detect putative gene clusters of unknown types. Also, a new dereplication variant of the ClusterBlast module now identifies similarities of identified clusters to any of 1172 clusters with known end products. At the enzyme level, active sites of key biosynthetic enzymes are now pinpointed through a curated pattern-matching procedure and Enzyme Commission numbers are assigned to functionally classify all enzyme-coding genes. Additionally, chemical structure prediction has been improved by incorporating polyketide reduction states. Finally, in order for users to be able to organize and analyze multiple antiSMASH outputs in a private setting, a new XML output module allows offline editing of antiSMASH annotations within the Geneious software. PMID:25948579
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ascough, II, James Clifford
1992-05-01
The capability to objectively evaluate design performance of shallow landfill burial (SLB) systems is of great interest to hydrologists, engineers, environmental scientists, and SLB regulators. The goal of this work was to develop and validate a procedure for the nonsubjective evaluation of SLB designs under actual or simulated environmental conditions. A multiobjective decision module (MDM) based on scoring functions (Wymore, 1988) was implemented to evaluate SLB design performance. Input values to the MDM are provided by hydrologic models. The MDM assigns a total score to each SLB design alternative, thereby allowing rapid and repeatable design performance evaluation. The MDM was validated for a wide range of SLB designs under different climatic conditions. Rigorous assessment of SLB performance also requires incorporation of hydrologic probabilistic analysis and hydrologic risk into the overall design. This was accomplished through the development of a frequency analysis module, which allows SLB design event magnitudes to be calculated based on the hydrologic return period. The multiobjective decision and frequency analysis modules were integrated in a decision support system (DSS) framework, SLEUTH (Shallow Landfill Evaluation Using Transport and Hydrology). SLEUTH is a Microsoft Windows application written in the Knowledge Pro Windows (Knowledge Garden, Inc., 1991) development language.
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
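The behavioral pattern reported above, likelihood information weighted more heavily as sample size grows, falls out directly of conjugate Bayesian updating. The sketch below uses a Beta prior with a binomial likelihood; all numbers are illustrative and are not the actual parameters of the lottery task.

```python
def posterior_mean(prior_a, prior_b, successes, n):
    """Beta(prior_a, prior_b) prior + binomial sample of size n:
    the posterior mean is a precision-weighted blend of the prior mean
    and the sample proportion, so larger n shifts weight to the data."""
    return (prior_a + successes) / (prior_a + prior_b + n)

prior_a, prior_b = 6.0, 4.0  # prior belief: reward probability around 0.6

# Same observed proportion (0.2) at two sample sizes:
small = posterior_mean(prior_a, prior_b, 1, 5)    # estimate stays near prior
large = posterior_mean(prior_a, prior_b, 10, 50)  # estimate tracks likelihood
```

With 5 draws the posterior mean sits near the prior; with 50 draws at the same observed proportion it moves toward 0.2, mirroring the subjects' increased reliance on likelihood information at larger sample sizes.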
NASA Astrophysics Data System (ADS)
Escriba, P. A.; Callado, A.; Santos, D.; Santos, C.; Simarro, J.; García-Moya, J. A.
2009-09-01
At 00 UTC on 24 January 2009, an explosive cyclogenesis that originated over the Atlantic Ocean reached its maximum intensity, with observed surface pressures lower than 970 hPa at its center, then located over the Bay of Biscay. During its path across southern France, this low caused strong westerly and north-westerly winds over the Iberian Peninsula, exceeding 150 km/h in some places. These extreme winds caused 10 casualties in Spain, 8 of them in Catalonia. The aim of this work is to show whether there is added value in the short-range prediction of the 24 January 2009 strong winds when using the Short Range Ensemble Prediction System (SREPS) of the Spanish Meteorological Agency (AEMET), with respect to the operational forecasting tools. This study emphasizes two aspects of probabilistic forecasting: the ability of a 3-day forecast to warn of an extreme wind event, and the ability to quantify the predictability of the event, thereby giving value to the deterministic forecast. Two types of probabilistic wind forecasts are carried out, a non-calibrated one and one calibrated using Bayesian Model Averaging (BMA). AEMET runs SREPS experimentally twice a day (00 and 12 UTC). The system consists of 20 members constructed by integrating 5 limited-area models, COSMO (COSMO), HIRLAM (HIRLAM Consortium), HRM (DWD), MM5 (NOAA) and UM (UKMO), at 25 km horizontal resolution. Each model uses 4 different initial and boundary conditions, from the global models GFS (NCEP), GME (DWD), IFS (ECMWF) and UM. In this way, a probabilistic forecast is obtained that accounts for initial-condition, boundary-condition and model errors. BMA is a statistical tool for combining predictive probability functions from different sources. The BMA predictive probability density function (PDF) is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are equal to the posterior probabilities of the models generating the forecasts and reflect the skill of the ensemble members.
Here, BMA is applied to provide probabilistic forecasts of wind speed. Several forecasts at different time ranges (H+72, H+48 and H+24) of 10-meter wind speed over Catalonia are verified subjectively at one of the instants of maximum intensity, 12 UTC on 24 January 2009. On the one hand, three probabilistic forecasts are compared: ECMWF EPS, non-calibrated SREPS and calibrated SREPS. On the other hand, the relationship between predictability and the skill of the deterministic forecast is studied by looking at HIRLAM 0.16 deterministic forecasts of the event. Verification focuses on the location and intensity of 10-meter wind speed, using 10-minute measurements from AEMET automatic ground stations as observations. The results indicate that SREPS is able to forecast, three days ahead, mean winds higher than 36 km/h and to locate them correctly, with a significant probability of occurrence in the affected area. The probability is higher after BMA calibration of the ensemble. The high forecast probability of strong winds allows us to state that the predictability of the event is also high and, as a consequence, that deterministic forecasts are more reliable. This is confirmed when verifying the HIRLAM deterministic forecasts against observed values.
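A minimal sketch of the BMA predictive density described above: a weighted average of Gaussian kernels centered on bias-corrected member forecasts, from which an exceedance probability for the 36 km/h threshold can be read off. The member forecasts, weights, and shared kernel width are hypothetical, and the integral is a crude rectangle rule, not the operational calibration.

```python
import math

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive density: weighted average of Gaussian PDFs centered
    on the bias-corrected member forecasts."""
    def norm_pdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * norm_pdf(x, f, sigma) for f, w in zip(forecasts, weights))

members = [30.0, 34.0, 40.0]  # hypothetical bias-corrected 10 m winds (km/h)
weights = [0.2, 0.5, 0.3]     # posterior model probabilities (sum to 1)

# Probability of exceeding the 36 km/h warning level, by rectangle rule:
xs = [20 + 0.1 * i for i in range(401)]  # 20..60 km/h grid
dens = [bma_pdf(x, members, weights, sigma=3.0) for x in xs]
p_exceed = sum(0.1 * d for x, d in zip(xs, dens) if x >= 36.0)
```

The exceedance probability is what a forecaster would quote as the warning probability; calibration (fitting the weights and sigma to past forecast-observation pairs) is what BMA adds over a raw ensemble count.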
Fischer, Paul W; Cullen, Alison C; Ettl, Gregory J
2017-01-01
The objectives of this study are to understand tradeoffs between forest carbon and timber values, and evaluate the impact of uncertainty in improved forest management (IFM) carbon offset projects to improve forest management decisions. The study uses probabilistic simulation of uncertainty in financial risk for three management scenarios (clearcutting in 45- and 65-year rotations and no harvest) under three carbon price schemes (historic voluntary market prices, cap and trade, and carbon prices set to equal net present value (NPV) from timber-oriented management). Uncertainty is modeled for value and amount of carbon credits and wood products, the accuracy of forest growth model forecasts, and four other variables relevant to American Carbon Registry methodology. Calculations use forest inventory data from a 1,740 ha forest in western Washington State, using the Forest Vegetation Simulator (FVS) growth model. Sensitivity analysis shows that FVS model uncertainty contributes more than 70% to overall NPV variance, followed in importance by variability in inventory sample (3-14%), and short-term prices for timber products (8%), while variability in carbon credit price has little influence (1.1%). At regional average land-holding costs, a no-harvest management scenario would become revenue-positive at a carbon credit break-point price of $14.17/Mg carbon dioxide equivalent (CO2e). IFM carbon projects are associated with a greater chance of both large payouts and large losses to landowners. These results inform policymakers and forest owners of the carbon credit price necessary for IFM approaches to equal or better the business-as-usual strategy, while highlighting the magnitude of financial risk and reward through probabilistic simulation. © 2016 Society for Risk Analysis.
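A toy probabilistic simulation in the spirit of the study's break-even analysis: sample carbon prices and credit yields, and ask how often a no-harvest project clears a fixed holding cost. Every figure here (distributions, credit yield, holding cost) is invented, so the resulting numbers do not reproduce the paper's $14.17/Mg CO2e break-point.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Hypothetical per-hectare annual figures for a no-harvest carbon project
carbon_price = rng.lognormal(mean=np.log(12.0), sigma=0.3, size=n)  # $/Mg CO2e
credits = rng.normal(8.0, 2.0, size=n).clip(min=0)                  # Mg CO2e/ha/yr
holding_cost = 110.0                                                # $/ha/yr

# Monte Carlo margin and probability the project is revenue-positive
annual_margin = carbon_price * credits - holding_cost
p_positive = (annual_margin > 0).mean()

# Break-even carbon price at the mean credit yield
breakeven = holding_cost / credits.mean()
```

The break-even price is the analogue of the study's break-point price; the Monte Carlo spread around it is what the probabilistic CEA-style analysis adds over a point estimate.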
A Dynamic Bayesian Network Model for the Production and Inventory Control
NASA Astrophysics Data System (ADS)
Shin, Ji-Sun; Takazaki, Noriyuki; Lee, Tae-Hong; Kim, Jin-Il; Lee, Hee-Hyol
In general, production quantities and delivered goods change randomly, and the total stock therefore also changes randomly. This paper deals with production and inventory control using a Dynamic Bayesian Network. A Bayesian Network is a probabilistic model that represents the qualitative dependence between two or more random variables by a graph structure, and indicates the quantitative relations between individual variables by conditional probabilities. The probabilistic distribution of the total stock is calculated through propagation of probability on the network. Moreover, an adjustment rule for the production quantities that holds the probabilities of the total stock reaching a lower limit and a ceiling at specified values is shown.
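One time slice of such a network can be propagated exactly by summing over the parent variables (previous stock, production, delivery). The sketch below does this with hypothetical discrete distributions; it illustrates the propagation idea only, not the paper's network or adjustment rule.

```python
# Hypothetical discrete distributions over daily production and delivery
prod_p = {8: 0.2, 10: 0.6, 12: 0.2}   # units produced -> probability
deliv_p = {9: 0.5, 11: 0.5}           # units delivered -> probability

def propagate(stock_p, prod_p, deliv_p):
    """One DBN time slice: P(S_t) = sum over parent values of
    P(S_{t-1}) * P(production) * P(delivery), with S_t = S_{t-1} + prod - deliv."""
    out = {}
    for s, ps in stock_p.items():
        for q, pq in prod_p.items():
            for d, pd in deliv_p.items():
                key = s + q - d
                out[key] = out.get(key, 0.0) + ps * pq * pd
    return out

stock = {50: 1.0}            # known initial stock
for _ in range(5):           # propagate five periods forward
    stock = propagate(stock, prod_p, deliv_p)

# Probability mass below a hypothetical lower limit of 48 units
p_below = sum(p for s, p in stock.items() if s < 48)
```

An adjustment rule of the kind described above would tune the production distribution until tail probabilities like `p_below` match the desired values.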
Crovelli, R.A.
1997-01-01
The National Park Service needs to establish how large the parking lots in the national parks should be so that visitors can enjoy, and the Service can preserve, our natural resources, for example at Delicate Arch in Arches National Park. Probabilistic and statistical relationships were developed between the number of vehicles (N) at one time in the Wolfe Ranch parking lot and the number of visitors (X) at Delicate Arch, 1.5 miles away, in Arches National Park, southeastern Utah. The value of N is determined such that 30 or more visitors are at the arch only 10% of the time.
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at point of ultimate load, and (5) engineering strain at point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
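A crude Monte Carlo sketch of the kind of result such a code produces: a cumulative distribution of plastic strain under random pressure and random yield. The surrogate response model (plastic strain proportional to the pressure excess over a random yield pressure, divided by a random hardening slope) and all distributions are invented and far simpler than the actual thick-cylinder mechanics.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Hypothetical random inputs (units: MPa)
pressure = rng.normal(300.0, 30.0, n)        # internal pressure
yield_pressure = rng.normal(250.0, 20.0, n)  # pressure at initial yield
hardening = rng.normal(2000.0, 200.0, n)     # plastic-hardening slope

# Surrogate response: plastic strain grows once pressure exceeds yield
strain = np.clip(pressure - yield_pressure, 0.0, None) / hardening

def cdf(x):
    """Empirical CDF of equivalent plastic strain, the response quantity."""
    return float((strain <= x).mean())

p_no_yield = cdf(0.0)   # probability the cylinder stays fully elastic
```

Sampling the CDF at a grid of strain values gives the response distribution that the probabilistic structural analysis reports.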
Undiscovered porphyry copper resources in the Urals—A probabilistic mineral resource assessment
Hammarstrom, Jane M.; Mihalasky, Mark J.; Ludington, Stephen; Phillips, Jeffrey; Berger, Byron R.; Denning, Paul; Dicken, Connie; Mars, John; Zientek, Michael L.; Herrington, Richard J.; Seltmann, Reimar
2017-01-01
Metal resources in undiscovered porphyry copper deposits of the Ural Mountains in Russia and Kazakhstan were assessed using a quantitative, probabilistic form of mineral resource assessment. Permissive tracts were delineated on the basis of mapped and inferred subsurface distributions of igneous rocks assigned to tectonic zones that include magmatic arcs, where the occurrence of porphyry copper deposits within 1 km of the Earth's surface is possible. These permissive tracts outline four north-south trending volcano-plutonic belts in major structural zones of the Urals. From west to east, these include permissive lithologies for porphyry copper deposits associated with Paleozoic subduction-related island-arc complexes preserved in the Tagil and Magnitogorsk arcs, Paleozoic island-arc fragments and associated tonalite-granodiorite intrusions in the East Uralian zone, and Carboniferous continental-margin arcs developed on the Kazakh craton in the Transuralian zone. The tracts range from about 50,000 to 130,000 km2 in area. The Urals host 8 known porphyry copper deposits with total identified resources of about 6.4 million metric tons of copper, at least 20 additional porphyry copper prospect areas, and numerous copper-bearing skarns and copper occurrences. Probabilistic estimates predict a mean of 22 undiscovered porphyry copper deposits within the four permissive tracts delineated in the Urals. Combining the estimates with established grade and tonnage models predicts a mean of 82 million metric tons of undiscovered copper. Application of an economic filter suggests that about half of that amount could be economically recoverable, based on assumed depth distributions, availability of infrastructure, recovery rates, current metals prices, and investment environment.
Fu, H C; Xu, Y Y; Chang, H Y
1999-12-01
Recognition of similar (confusable) characters is a difficult problem in optical character recognition (OCR). In this paper, we introduce a neural network solution that is capable of modeling minor differences among similar characters and is robust to various personal handwriting styles. The Self-growing Probabilistic Decision-based Neural Network (SPDNN) is a probabilistic neural network that adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. Based on the SPDNN model, we have constructed a three-stage recognition system. First, a coarse classifier assigns an input character to one of the predefined subclasses partitioned from a large character set, such as Chinese mixed with alphanumerics. Then, a character recognizer determines which reference character in the subclass best matches the input image. Lastly, a similar-character recognizer further enhances the recognition accuracy among similar or confusable characters. The prototype system has demonstrated a successful application of the SPDNN to similar handwritten Chinese character recognition on the public database CCL/HCCR1 (5,401 characters × 200 samples). Regarding performance, experiments on the CCL/HCCR1 database produced 90.12% recognition accuracy with no rejection and 94.11% accuracy with 6.7% rejection. This recognition accuracy represents about a 4% improvement on the previously announced performance. As to processing speed, processing before recognition (including image preprocessing, segmentation, and feature extraction) requires about one second for an A4-size character image, and recognition consumes approximately 0.27 seconds per character on a Pentium-100 based personal computer, without any hardware accelerator or co-processor.
Economic Evaluation of Pediatric Telemedicine Consultations to Rural Emergency Departments.
Yang, Nikki H; Dharmar, Madan; Yoo, Byung-Kwang; Leigh, J Paul; Kuppermann, Nathan; Romano, Patrick S; Nesbitt, Thomas S; Marcin, James P
2015-08-01
Comprehensive economic evaluations have not been conducted on telemedicine consultations for children in rural emergency departments (EDs). We conducted an economic evaluation to estimate the cost, effectiveness, and return on investment (ROI) of telemedicine consultations provided to health care providers of acutely ill and injured children in rural EDs, compared with telephone consultations, from a health care payer perspective. We built a decision model with parameters from primary programmatic data, national data, and the literature. We performed a base-case cost-effectiveness analysis (CEA), a probabilistic CEA with Monte Carlo simulation, and ROI estimation where the CEA suggested cost savings. The CEA was based on program effectiveness, derived from transfer decisions following telemedicine and telephone consultations. The average cost for a telemedicine consultation was $3641 per child/ED/year in 2013 US dollars. Telemedicine consultations resulted in 31% fewer patient transfers compared with telephone consultations and a cost reduction of $4662 per child/ED/year. Our probabilistic CEA demonstrated that telemedicine consultations were less costly than telephone consultations in 57% of simulation iterations. The ROI was calculated to be 1.28 ($4662/$3641) from the base-case analysis and estimated to be 1.96 from the probabilistic analysis, suggesting a $1.96 return for each dollar invested in telemedicine. Treating 10 acutely ill and injured children at each rural ED with telemedicine resulted in an annual cost savings of $46,620 per ED. Telephone and telemedicine consultations were not randomly assigned, potentially biasing the results. From a health care payer perspective, telemedicine consultations to health care providers of acutely ill and injured children presenting to rural EDs are cost-saving (in the base case and in more than half of Monte Carlo simulation iterations) or cost-effective compared with telephone consultations. © The Author(s) 2015.
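The base-case ROI arithmetic reported above is straightforward to reproduce; the sketch below restates it with the paper's published figures (the function and variable names are ours, not the study's).

```python
def roi(cost_saving, program_cost):
    """Return on investment as used above: dollars saved per dollar invested."""
    return cost_saving / program_cost

annual_saving_per_child = 4662.0  # $ saved per child/ED/year (base case)
annual_cost_per_child = 3641.0    # $ cost of telemedicine per child/ED/year

base_case_roi = roi(annual_saving_per_child, annual_cost_per_child)
ed_saving_10_children = 10 * annual_saving_per_child  # annual saving per ED
```

This recovers the reported base-case ROI of 1.28 and the $46,620 annual saving for an ED treating 10 children; the probabilistic ROI of 1.96 comes from the Monte Carlo distribution, not this point calculation.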
Unsupervised Framework to Monitor Lake Dynamics
NASA Technical Reports Server (NTRS)
Chen, Xi C. (Inventor); Boriah, Shyam (Inventor); Khandelwal, Ankush (Inventor); Kumar, Vipin (Inventor)
2016-01-01
A method of reducing processing time when assigning geographic areas to land cover labels using satellite sensor values includes a processor receiving a feature value for each pixel in a time series of frames of satellite sensor values, each frame containing multiple pixels and each frame covering a same geographic location. For each sub-area of the geographic location, the sub-area is assigned to one of at least three land cover labels. The processor determines a fraction function for a first sub-area assigned to a first land cover label. The sub-areas that were assigned to the first land cover label are reassigned to one of the second land cover label and the third land cover label based on the fraction functions of the sub-areas.
Blind image quality assessment via probabilistic latent semantic analysis.
Yang, Xichen; Sun, Quansen; Wang, Tianshu
2016-01-01
We propose a blind image quality assessment method that is unsupervised and training-free. The method is based on the hypothesis that the effect caused by distortion can be expressed by certain latent characteristics. Combined with probabilistic latent semantic analysis, these latent characteristics can be discovered by applying a topic model over a visual word dictionary. Four distortion-affected features are extracted to form the visual words in the dictionary: (1) the block-based local histogram; (2) the block-based local mean value; (3) the mean value of contrast within a block; and (4) the variance of contrast within a block. Based on the dictionary, the latent topics in the images can be discovered. The discrepancy between the frequency of the topics in an unfamiliar image and in a large number of pristine images is used to measure the image quality. Experimental results on four open databases show that the newly proposed method correlates well with human subjective judgments of diversely distorted images.
NASA Astrophysics Data System (ADS)
Wei, Y.; Thomas, S.; Zhou, H.; Arcas, D.; Titov, V. V.
2017-12-01
Increasing potential tsunami hazards pose great challenges for infrastructure along the coastlines of the U.S. Pacific Northwest. Tsunami impact at a coastal site is usually assessed from deterministic scenarios based on 10,000 years of geological records of the Cascadia Subduction Zone (CSZ). Aside from these deterministic methods, the new ASCE 7-16 tsunami provisions provide engineering design criteria for tsunami loads on buildings based on a probabilistic approach. This work develops a site-specific model near Newport, OR, using high-resolution grids, and computes tsunami inundation depths and velocities at the study site resulting from credible probabilistic and deterministic earthquake sources in the CSZ. Three Cascadia scenarios, two deterministic (XXL1 and L1) and a 2,500-year probabilistic scenario compliant with the new ASCE 7-16 standard, are simulated using a combination of a depth-averaged shallow-water model for offshore propagation and a Boussinesq-type model for onshore inundation. We discuss the methods and procedure used to obtain the 2,500-year probabilistic scenario for Newport that is compliant with the ASCE 7-16 tsunami provisions. We provide details of the model results, particularly the inundation depth and flow speed for a new building at Newport, Oregon, which will also be designated as a tsunami vertical evacuation shelter. We show that the ASCE 7-16 consistent hazards lie between those obtained from the deterministic L1 and XXL1 scenarios, and that the greatest impact on the building may come from later waves. As a further step, we use the inundation model results to numerically compute tracks of large vessels in the vicinity of the building site and estimate whether these vessels would impact the building site during the extreme XXL1 and ASCE 7-16 hazard-consistent scenarios.
A two-step study is carried out: first tracking massless particles, then large vessels with assigned mass, accounting for drag force, inertial force, ship grounding and mooring. The simulation results show that none of the large vessels would impact the building site in any of the tested scenarios.
Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)
NASA Astrophysics Data System (ADS)
OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.
2016-02-01
Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts produces higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high-skill predictions; however, even higher-skill predictions can be produced if the forecast models are combined correctly. While many methods for weighting models exist, choosing the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km² region, and day, given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach in producing extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models.
The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are better still; and forecasts produced using our approach most often have the highest ranked probability skill score.
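A much-simplified sketch of hindcast-based model weighting: COMPASS learns weights per condition, region, and day with machine learning, while this toy assigns inverse-MSE weights from a synthetic hindcast record and stitches one combined forecast. All data, model counts, and the weighting rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hindcast record: 3 forecast models x 500 past cases for one
# region, plus the verifying observations. Model 0 is the least noisy.
obs = rng.standard_normal(500)
hindcasts = obs + rng.standard_normal((3, 500)) * np.array([[0.5], [1.0], [2.0]])

# Learn per-model weights from past successes/failures (inverse MSE here)
mse = ((hindcasts - obs) ** 2).mean(axis=1)
weights = (1 / mse) / (1 / mse).sum()

# Stitch a single forecast from the members' new predictions
new_members = np.array([1.2, 0.8, 2.5])
combined = float(weights @ new_members)
```

The historically more accurate model earns the larger weight, so the combined value leans toward it; COMPASS applies the same idea conditioned on environment, location, and lead time.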
Karnowski, Thomas P [Knoxville, TN; Tobin, Jr., Kenneth W.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN; Chaum, Edward [Memphis, TN
2012-07-10
A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value, and a low confidence level can be assigned to the retinal image if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value is selected to represent an acceptable risk of misdiagnosis of a disease having retinal manifestations by the automated technique.
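The comparison logic can be sketched as follows. Note the assumption: the abstract does not specify how the accuracy parameter is computed from the two coordinate sets, so this sketch uses the Euclidean distance between the two estimates as a stand-in.

```python
import math

def assign_confidence(coords_a, coords_b, primary_risk_cutoff):
    """Assign a confidence level to an automated optic-disc location
    from two independent coordinate estimates.

    The accuracy parameter is taken here to be the Euclidean distance
    between the two estimates (an assumption for illustration; the
    abstract does not fix the formula).
    """
    accuracy = math.dist(coords_a, coords_b)
    return "high" if accuracy < primary_risk_cutoff else "low"

# Two image-analysis techniques nearly agree on the disc location
level = assign_confidence((120, 85), (123, 89), primary_risk_cutoff=10.0)
```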
Stress attenuates the flexible updating of aversive value
Raio, Candace M.; Hartley, Catherine A.; Orederu, Temidayo A.; Li, Jian; Phelps, Elizabeth A.
2017-01-01
In a dynamic environment, sources of threat or safety can unexpectedly change, requiring the flexible updating of stimulus−outcome associations that promote adaptive behavior. However, aversive contexts in which we are required to update predictions of threat are often marked by stress. Acute stress is thought to reduce behavioral flexibility, yet its influence on the modulation of aversive value has not been well characterized. Given that stress exposure is a prominent risk factor for anxiety and trauma-related disorders marked by persistent, inflexible responses to threat, here we examined how acute stress affects the flexible updating of threat responses. Participants completed an aversive learning task, in which one stimulus was probabilistically associated with an electric shock, while the other stimulus signaled safety. A day later, participants underwent an acute stress or control manipulation before completing a reversal learning task during which the original stimulus−outcome contingencies switched. Skin conductance and neuroendocrine responses provided indices of sympathetic arousal and stress responses, respectively. Despite equivalent initial learning, stressed participants showed marked impairments in reversal learning relative to controls. Additionally, reversal learning deficits across participants were related to heightened levels of alpha-amylase, a marker of noradrenergic activity. Finally, fitting arousal data to a computational reinforcement learning model revealed that stress-induced reversal learning deficits emerged from stress-specific changes in the weight assigned to prediction error signals, disrupting the adaptive adjustment of learning rates. Our findings provide insight into how stress renders individuals less sensitive to changes in aversive reinforcement and have implications for understanding clinical conditions marked by stress-related psychopathology. PMID:28973957
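The computational account above turns on how heavily prediction errors are weighted when values are updated. A generic delta-rule (Rescorla-Wagner) update illustrates the mechanism; this is a sketch with illustrative parameter values, not the authors' fitted model, and the mapping of stress to a smaller effective learning rate is stated here only as the qualitative direction the abstract describes.

```python
def delta_rule_update(value, outcome, learning_rate):
    """One delta-rule update: the value estimate moves toward the
    observed outcome in proportion to the prediction error."""
    prediction_error = outcome - value
    return value + learning_rate * prediction_error

# Reversal: a stimulus that predicted shock (value near 1) becomes safe.
v = 1.0
for _ in range(5):
    v = delta_rule_update(v, outcome=0.0, learning_rate=0.5)

# A blunted weighting of prediction errors corresponds to a smaller
# effective learning rate, and hence slower updating after reversal.
v_stressed = 1.0
for _ in range(5):
    v_stressed = delta_rule_update(v_stressed, outcome=0.0, learning_rate=0.1)
```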
A novel approach for detecting heat waves: the Standardized Heat-Wave Index.
NASA Astrophysics Data System (ADS)
Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro
2016-04-01
Extreme temperatures have an impact on the energy balance of any living organism and on the operational capabilities of critical infrastructures. The ability to capture the occurrence of extreme temperature events is therefore an essential property of a multi-hazard extreme climate indicator. In this paper we introduce a new index for the detection of such extreme temperature events, called SHI (Standardized Heat-Wave Index), developed in the context of the XCF project for the construction of a multi-hazard extreme climate indicator (ECI). SHI is a probabilistic index based on the analysis of maximum daily temperature time series; it is standardized, enabling comparisons over space/time and with other indices, and it is capable of describing both extreme cold and hot events. Given a particular location, SHI is constructed from the time series of local maximum daily temperatures with the following procedure: a three-day cumulated maximum daily temperature is assigned to each day of the time series; for each of these values, the probability of occurrence within the calendar month of the reference day is computed; these probability values are then projected onto a standard normal distribution, yielding the standardized indices. In this work we present results obtained using the NCEP Reanalysis dataset for air temperature at the sigma 0.995 level, whose timespan ranges from 1948 to 2014. Given the specific framework of this work, the geographical focus of this study is limited to the African continent. We present a validation of the index by showing its use for monitoring heat-waves under different climate regimes.
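The three-step construction can be sketched for a single site as follows. This is a simplified illustration: it treats the whole series as belonging to one month and uses an empirical plotting-position probability, whereas the published index conditions on the calendar month.

```python
from statistics import NormalDist

def shi(daily_tmax, day_index):
    """Standardized Heat-Wave Index for one day of a daily-Tmax series
    (simplified sketch; all days treated as one month)."""
    # 1) three-day cumulated maximum daily temperatures
    cum3 = [sum(daily_tmax[i - 2:i + 1]) for i in range(2, len(daily_tmax))]
    x = sum(daily_tmax[day_index - 2:day_index + 1])
    # 2) empirical probability of occurrence of the day's value
    p = sum(1 for v in cum3 if v <= x) / (len(cum3) + 1)
    # 3) project the probability onto a standard normal distribution
    return NormalDist().inv_cdf(p)

# A short warm spell embedded in a cool series
series = [20.0] * 10 + [35.0, 36.0, 37.0] + [20.0] * 5
hot_day, ordinary_day = shi(series, 12), shi(series, 5)
```

Large positive values flag hot extremes; the same construction flags cold extremes with large negative values.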
Code of Federal Regulations, 2012 CFR
2012-10-01
... geographical areas assigned to a County Zone Number and Per Acre Zone Value? 2806.21 Section 2806.21 Public... MANAGEMENT ACT Rents Linear Rights-Of-Way § 2806.21 When and how are counties or other geographical areas assigned to a County Zone Number and Per Acre Zone Value? Counties (or other geographical areas) are...
Computational approaches to protein inference in shotgun proteomics
2012-01-01
Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high-throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300
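As an illustration of the rule-based category mentioned above, a greedy parsimony heuristic for step (2), mapping assigned peptides to a minimal explanatory protein set, can be sketched as follows; the peptide and protein names are hypothetical.

```python
def parsimonious_proteins(peptide_to_proteins):
    """Greedy minimal-protein cover: repeatedly pick the protein that
    explains the most still-unexplained peptides, a classic rule-based
    heuristic for the protein inference step (ties break alphabetically)."""
    candidates = sorted({p for ps in peptide_to_proteins.values() for p in ps})
    unexplained = set(peptide_to_proteins)
    chosen = []
    while unexplained:
        best = max(candidates,
                   key=lambda p: sum(1 for pep in unexplained
                                     if p in peptide_to_proteins[pep]))
        chosen.append(best)
        unexplained = {pep for pep in unexplained
                       if best not in peptide_to_proteins[pep]}
    return chosen

mapping = {          # hypothetical peptide -> candidate protein sets
    "PEPTIDE1": {"ProtA"},
    "PEPTIDE2": {"ProtA", "ProtB"},
    "PEPTIDE3": {"ProtB", "ProtC"},
}
proteins = parsimonious_proteins(mapping)
```

The review's integer-programming formulation solves the same cover problem exactly; the greedy version above is only the cheap approximation.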
Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente
2009-12-20
This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month of one calendar year, showed a significant seasonal variation of risk while revealing that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approximations reveal the presence of certain substances, which might be causing sublethal effects in the aquatic species present in the Henares River.
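The probabilistic side of such a model can be sketched with a Monte Carlo loop: effluent toxicity is diluted by a river flow resampled from historical records, and risk percentiles are read off the simulated distribution. This is a simplified dilution model with placeholder numbers, not the RABETOX implementation, whose inputs are 25 years of flow data and whole-effluent toxicity tests.

```python
import random

def monte_carlo_risk(toxic_units, flows, n=10000, seed=1):
    """Probabilistic risk units via Monte Carlo: toxic load diluted by
    a river flow drawn from historical values (simplified sketch)."""
    rng = random.Random(seed)
    samples = sorted(toxic_units / rng.choice(flows) for _ in range(n))
    return {"median": samples[n // 2], "p95": samples[int(0.95 * n)]}

# Hypothetical toxic load and four historical monthly flows
result = monte_carlo_risk(toxic_units=100.0, flows=[1.0, 2.0, 4.0, 8.0])
```

The deterministic variant would instead use a single representative flow per month, which is why it can only flag a worst-case month rather than a probability of exceeding a risk class.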
St. Louis area earthquake hazards mapping project; seismic and liquefaction hazard maps
Cramer, Chris H.; Bauer, Robert A.; Chung, Jae-won; Rogers, David; Pierce, Larry; Voigt, Vicki; Mitchell, Brad; Gaunt, David; Williams, Robert; Hoffman, David; Hempen, Gregory L.; Steckel, Phyllis; Boyd, Oliver; Watkins, Connor M.; Tucker, Kathleen; McCallister, Natasha
2016-01-01
We present probabilistic and deterministic seismic and liquefaction hazard maps for the densely populated St. Louis metropolitan area that account for the expected effects of surficial geology on earthquake ground shaking. Hazard calculations were based on a map grid of 0.005°, or about every 500 m, and are thus higher in resolution than any earlier studies. To estimate ground motions at the surface of the model (e.g., site amplification), we used a new detailed near‐surface shear‐wave velocity model in a 1D equivalent‐linear response analysis. When compared with the 2014 U.S. Geological Survey (USGS) National Seismic Hazard Model, which uses a uniform firm‐rock‐site condition, the new probabilistic seismic‐hazard estimates document much more variability. Hazard levels for upland sites (consisting of bedrock and weathered bedrock overlain by loess‐covered till and drift deposits) show up to twice the ground‐motion values for peak ground acceleration (PGA), and similar ground‐motion values for 1.0 s spectral acceleration (SA). Probabilistic ground‐motion levels for lowland alluvial floodplain sites (generally the 20–40‐m‐thick modern Mississippi and Missouri River floodplain deposits overlying bedrock) exhibit up to twice the ground‐motion levels for PGA, and up to three times the ground‐motion levels for 1.0 s SA. Liquefaction probability curves were developed from available standard penetration test data assuming typical lowland and upland water table levels. A simplified liquefaction hazard map was created from the 5%‐in‐50‐year probabilistic ground‐shaking model. The liquefaction hazard ranges from low in the uplands to high (more than 60% of the area expected to liquefy) in the lowlands. Because many transportation routes, power and gas transmission lines, and population centers exist in or on the highly susceptible lowland alluvium, these areas in the St. Louis region are at significant potential risk from seismically induced liquefaction and associated ground deformation.
Probabilistic Estimates of Global Mean Sea Level and its Underlying Processes
NASA Astrophysics Data System (ADS)
Hay, C.; Morrow, E.; Kopp, R. E.; Mitrovica, J. X.
2015-12-01
Local sea level can vary significantly from the global mean value due to a suite of processes that includes ongoing sea-level changes due to the last ice age, land water storage, ocean circulation changes, and non-uniform sea-level changes that arise when modern-day land ice rapidly melts. Understanding these sources of spatial and temporal variability is critical to estimating past and present sea-level change and projecting future sea-level rise. Using two probabilistic techniques, a multi-model Kalman smoother and Gaussian process regression, we have reanalyzed 20th century tide gauge observations to produce a new estimate of global mean sea level (GMSL). Our methods allow us to extract global information from the sparse tide gauge field by taking advantage of the physics-based and model-derived geometry of the contributing processes. Both methods provide constraints on the sea-level contribution of glacial isostatic adjustment (GIA). The Kalman smoother tests multiple discrete GIA models, probabilistically computing the most likely GIA model given the observations, while the Gaussian process regression characterizes the prior covariance structure of a suite of GIA models and then uses this structure to estimate the posterior distribution of local rates of GIA-induced sea-level change. We present the two methodologies, the model-derived geometries of the underlying processes, and our new probabilistic estimates of GMSL and GIA.
Bayesian Networks Improve Causal Environmental Assessments for Evidence-Based Policy.
Carriger, John F; Barron, Mace G; Newman, Michael C
2016-12-20
Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, conventional weight of evidence approaches ignore sources of uncertainty that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improves the ability of models to incorporate the strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on valued ecological resources. These aspects are demonstrated through hypothetical problem scenarios that explore some major benefits of using Bayesian networks for reasoning and making inferences in evidence-based policy.
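The probabilistic combination of multiple lines of evidence can be illustrated with a minimal naive-Bayes fragment of such a network. The hypothesis names, likelihoods, and evidence lines below are hypothetical, chosen only to show the calculus, not drawn from the article's scenarios.

```python
def posterior(priors, likelihoods, evidence):
    """Naive-Bayes combination of independent lines of evidence:
    multiply each hypothesis's prior by the likelihood of every
    observed line, then normalize."""
    post = {}
    for h, prior in priors.items():
        p = prior
        for line, observed in evidence.items():
            p_line = likelihoods[line][h]
            p *= p_line if observed else (1.0 - p_line)
        post[h] = p
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

priors = {"impaired": 0.5, "unimpaired": 0.5}
likelihoods = {  # P(line positive | hypothesis), illustrative only
    "tissue_chemistry": {"impaired": 0.8, "unimpaired": 0.2},
    "benthic_survey":   {"impaired": 0.7, "unimpaired": 0.3},
}
post = posterior(priors, likelihoods,
                 {"tissue_chemistry": True, "benthic_survey": True})
```

Unlike a rule-based score, the posterior carries its residual uncertainty explicitly, which is what makes it usable in the risk management phase.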
Wang, Yan; Nowack, Bernd
2018-05-01
Many research studies have endeavored to investigate the ecotoxicological hazards of engineered nanomaterials (ENMs). However, little is known regarding the actual environmental risks of ENMs, combining both hazard and exposure data. The aim of the present study was to quantify the environmental risks for nano-Al₂O₃, nano-SiO₂, nano iron oxides, nano-CeO₂, and quantum dots by comparing the predicted environmental concentrations (PECs) with the predicted-no-effect concentrations (PNECs). The PEC values of these 5 ENMs in freshwaters in 2020 for northern Europe and southeastern Europe were taken from a published dynamic probabilistic material flow analysis model. The PNEC values were calculated using probabilistic species sensitivity distribution (SSD). The order of the PNEC values was quantum dots < nano-CeO₂ < nano iron oxides < nano-Al₂O₃ < nano-SiO₂. The risks posed by these 5 ENMs were demonstrated to be in the reverse order: nano-Al₂O₃ > nano-SiO₂ > nano iron oxides > nano-CeO₂ > quantum dots. However, all risk characterization values are 4 to 8 orders of magnitude lower than 1, and no risk was therefore predicted for any of the investigated ENMs at the estimated release level in 2020. Compared to static models, the dynamic material flow model allowed us to use PEC values based on a more complex parameterization, considering a dynamic input over time and time-dependent release of ENMs. The probabilistic SSD approach makes it possible to include all available data to estimate hazards of ENMs by considering the whole range of variability between studies and material types. The risk-assessment approach is therefore able to handle the uncertainty and variability associated with the collected data. The results of the present study provide a scientific foundation for risk-based regulatory decisions of the investigated ENMs. Environ Toxicol Chem 2018;37:1387-1395. © 2018 SETAC.
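The risk characterization step is, at its core, a ratio of predicted exposure to predicted no-effect concentration. A minimal sketch; the magnitudes below are placeholders chosen to mirror the reported "4 to 8 orders of magnitude below 1", not the study's actual PEC/PNEC values.

```python
def risk_characterization_ratio(pec, pnec):
    """RCR = PEC / PNEC: a ratio well below 1 means the predicted
    exposure is far below the predicted-no-effect concentration."""
    return pec / pnec

# Placeholder values (not the study's): exposure five orders of
# magnitude below the no-effect level.
rcr = risk_characterization_ratio(pec=1e-6, pnec=1e-1)
```

In the probabilistic version, PEC and PNEC are distributions rather than point values, and the RCR is computed over their joint samples.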
Probabilistic Learning in Junior High School: Investigation of Student Probabilistic Thinking Levels
NASA Astrophysics Data System (ADS)
Kurniasih, R.; Sujadi, I.
2017-09-01
This paper investigates students' levels of probabilistic thinking, that is, thinking about probabilistic or uncertain matters in probability material. The subjects were 8th-grade junior high school students. The main instrument was the researcher, supported by a probabilistic thinking skills test and interview guidelines. Data were analyzed using the triangulation method. The results showed that before instruction, students' probabilistic thinking was at the subjective and transitional levels; after instruction, students' levels of probabilistic thinking changed, and some 8th-grade students reached the highest of the levels, numerical probabilistic thinking. Students' levels of probabilistic thinking can be used as a reference for designing learning materials and strategies.
A Bayesian Approach to Interactive Retrieval
ERIC Educational Resources Information Center
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied (use of prior and sample information about the relationship of document descriptions to query relevance; maximization of the expected value of a utility function) to the problem of optimally restructuring search strategies in an…
Probabilistic Flood Maps to support decision-making: Mapping the Value of Information
NASA Astrophysics Data System (ADS)
Alfonso, L.; Mukolwe, M. M.; Di Baldassarre, G.
2016-02-01
Floods are one of the most frequent and disruptive natural hazards that affect man. Annually, significant flood damage is documented worldwide. Flood mapping is a common pre-impact flood hazard mitigation measure, for which advanced methods and tools (such as flood inundation models) are used to estimate potential flood extent maps that are used in spatial planning. However, these tools are affected, largely to an unknown degree, by both epistemic and aleatory uncertainty. Over the past few years, advances in uncertainty analysis with respect to flood inundation modeling show that it is appropriate to adopt Probabilistic Flood Maps (PFM) to account for uncertainty. However, the following question arises: how can probabilistic flood hazard information be incorporated into spatial planning? A consistent framework to incorporate PFMs into decision-making is therefore required. In this paper, a novel methodology based on Decision-Making under Uncertainty theories, in particular Value of Information (VOI), is proposed. Specifically, the methodology entails the use of a PFM to generate a VOI map, which highlights floodplain locations where additional information is valuable with respect to available floodplain management actions and their potential consequences. The methodology is illustrated with a simplified example and also applied to a real case study in the South of France, where a VOI map is analyzed on the basis of historical land use change decisions over a period of 26 years. Results show that uncertain flood hazard information encapsulated in PFMs can aid decision-making in floodplain planning.
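The VOI computation at a single floodplain location can be sketched with the standard decision-theoretic definition: the best fixed decision under uncertainty, minus the expected cost when the decision can adapt to the resolved flood state. The action names, flood probability, and damage figures below are hypothetical, not taken from the case study.

```python
def value_of_information(p_flood, damages):
    """VOI at one location: expected cost of the best fixed action
    under uncertainty, minus expected cost with perfect information
    (the standard expected-value-of-perfect-information definition)."""
    # damages[action][state], states: "flood" / "dry"
    def expected(action):
        return (p_flood * damages[action]["flood"]
                + (1 - p_flood) * damages[action]["dry"])
    best_without = min(expected(a) for a in damages)
    best_with = (p_flood * min(d["flood"] for d in damages.values())
                 + (1 - p_flood) * min(d["dry"] for d in damages.values()))
    return best_without - best_with

damages = {            # hypothetical costs per action and flood state
    "develop":  {"flood": 100.0, "dry": 0.0},
    "restrict": {"flood": 10.0,  "dry": 20.0},
}
voi = value_of_information(p_flood=0.3, damages=damages)
```

Mapping this quantity across the floodplain, with p_flood read from the PFM cell by cell, yields the VOI map described in the abstract.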
Mushet, David M.; Euliss, Ned H.; Shaffer, Terry L.
2002-01-01
Floristic quality assessment is potentially an important tool for conservation efforts in the northern Great Plains of North America, but it has received little rigorous evaluation. Floristic quality assessments rely on coefficients assigned to each plant species of a region’s flora based on the conservatism of each species relative to others in the region. These “coefficients of conservatism” (C values) are assigned by a panel of experts familiar with a region’s flora. The floristic quality assessment method has faced some criticism due to the subjective nature of these assignments. To evaluate the effect of this subjectivity on floristic quality assessments, we performed separate evaluations of the native plant communities in a natural wetland complex and three restored wetland complexes. In our first assessment, we used C values assigned “subjectively” by the Northern Great Plains Floristic Quality Assessment Panel. We then performed an independent assessment using the observed distributions of species among a group of wetlands that ranged from highly disturbed to largely undisturbed (data-generated C values). Using the panel-assigned C values, mean C values (C̄) of the restored wetlands rarely exceeded 3.4 and never exceeded 3.9, with the highest values occurring in the oldest restored complex; all but two wetlands in the natural wetland complex had a C̄ greater than 3.9. Floristic quality indices (FQI) for the restored wetlands rarely exceeded 22 and usually reached maximums closer to 19, with higher values occurring again in the oldest restored complex; only two wetlands in the natural complex had an FQI less than 22. We observed that 95% confidence limits for species richness and percent natives overlapped greatly among wetland complexes, whereas confidence limits for both C̄ and FQI overlapped little. C̄ and FQI values were consistently greater when we used the data-generated C values than when we used the panel-assigned C values; nonetheless, conclusions reached based on these two independent assessment techniques were virtually identical. Our results are consistent with the opinion that coefficients assigned subjectively by expert botanists familiar with a region’s flora provide adequate information to perform accurate floristic quality assessments.
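The two summary statistics used above can be computed directly once C values are assigned. The sketch below uses the standard Swink-Wilhelm formulation, FQI = mean C times the square root of native species richness; the abstract does not restate the formula, so that choice, like the example site, is an assumption for illustration.

```python
import math

def floristic_quality(c_values):
    """Mean coefficient of conservatism (C-bar) and the floristic
    quality index, FQI = C-bar * sqrt(native species richness)
    (standard Swink-Wilhelm formulation, assumed here)."""
    mean_c = sum(c_values) / len(c_values)
    return mean_c, mean_c * math.sqrt(len(c_values))

# Hypothetical site: 25 native species, each assigned C = 3.9
mean_c, fqi = floristic_quality([3.9] * 25)
```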
NASA Astrophysics Data System (ADS)
Hussin, Haydar; van Westen, Cees; Reichenbach, Paola
2013-04-01
Local and regional authorities in mountainous areas that deal with hydro-meteorological hazards like landslides and floods try to set aside budgets for emergencies and risk mitigation. However, future losses are often not calculated in a probabilistic manner when allocating budgets or determining how much risk is acceptable. The absence of probabilistic risk estimates can create a lack of preparedness for reconstruction and risk reduction costs and a deficiency in promoting risk mitigation and prevention in an effective way. The probabilistic risk of natural hazards at local scale is usually ignored altogether due to the difficulty of acknowledging, processing and incorporating uncertainties in the estimation of losses (e.g. physical damage, fatalities and monetary loss). This study attempts to set up a working framework for a probabilistic risk assessment (PRA) of landslides and floods at a municipal scale, using the Fella river valley (Eastern Italian Alps) as a multi-hazard case study area. The emphasis is on the evaluation and determination of the uncertainty in the estimation of losses from multi-hazards. This framework requires several steps: (1) using physically based stochastic landslide and flood models, we calculate the probability of the physical impact on individual elements at risk; (2) this is then combined with a statistical analysis of the vulnerability and monetary value of the elements at risk in order to include their uncertainty in the risk assessment; (3) finally, the uncertainty from each risk component is propagated into the loss estimation. The combined effect of landslides and floods on the direct risk to communities in narrow alpine valleys is also one of the important aspects that need to be studied.
Incorporating psychological influences in probabilistic cost analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kujawski, Edouard; Alvaro, Mariana; Edwards, William
2004-01-08
Today's typical probabilistic cost analysis assumes an ''ideal'' project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world ''Money Allocated Is Money Spent'' (MAIMS principle); cost underruns are rarely available to protect against cost overruns while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy including budget allocation. Psychological influences such as overconfidence in assessing uncertainties and dependencies among cost elements and risks are other important considerations that are generally not addressed. It should then be no surprise that actual project costs often exceed the initial estimates and are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball®. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability of success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for allocating baseline budgets and contingencies.
Given the scope and magnitude of the cost-overrun problem, the benefits are likely to be significant.
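The MAIMS principle can be embedded in a Monte Carlo cost roll-up in one line: each element spends at least its allocated budget, so underruns are never recovered while overruns pass through to the total. The sketch below uses uniform cost draws as a stand-in for the paper's three-parameter Weibulls and ignores inter-element correlation; the budgets and ranges are hypothetical.

```python
import random

def median_project_cost(budgets, cost_samplers, n=20000, seed=7):
    """Monte Carlo total-cost model embodying the MAIMS principle:
    each element spends at least its allocated budget (underruns are
    not recovered), while overruns pass through to the project total."""
    rng = random.Random(seed)
    totals = sorted(sum(max(b, draw(rng))            # MAIMS: spend >= budget
                        for b, draw in zip(budgets, cost_samplers))
                    for _ in range(n))
    return totals[n // 2]

# Two cost elements with symmetric uncertainty around their budgets
# (uniform draws stand in for the paper's Weibull distributions)
budgets = [100.0, 50.0]
samplers = [lambda r: r.uniform(80.0, 120.0),
            lambda r: r.uniform(40.0, 60.0)]
median_cost = median_project_cost(budgets, samplers)
```

Even with symmetric cost uncertainty, the median total exceeds the allocated total, which is the paper's core point about why conventional analyses underestimate cost.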
NASA Astrophysics Data System (ADS)
Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi
2017-08-01
The formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, such as probabilistic thinking, which students need in order to make decisions. Elementary school students are expected to develop probabilistic thinking as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking had been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected on the basis of high scores on a mathematical ability test. The subjects were given probability tasks covering sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion; credibility of the data was established through time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks indicated the multistructural level, while the girl's indicated the unistructural level; that is, the boy's level of probabilistic thinking was higher than the girl's. These results could help curriculum developers formulate probability learning goals for elementary school students, and teachers could teach probability with regard to gender differences.
Probabilistic biological network alignment.
Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2013-01-01
Interactions between molecules are probabilistic events. An interaction may or may not happen with some probability, depending on a variety of factors such as the size, abundance, or proximity of the interacting molecules. In this paper, we consider the problem of aligning two biological networks. Unlike existing methods, we allow one of the two networks to contain probabilistic interactions. Allowing interaction probabilities makes the alignment more biologically relevant at the expense of explosive growth in the number of alternative topologies that may arise from different subsets of interactions that take place. We develop a novel method that efficiently and precisely characterizes this massive search space. We represent the topological similarity between pairs of aligned molecules (i.e., proteins) with the help of random variables and compute their expected values. We validate our method by showing that, without sacrificing running time performance, it can produce novel alignments. Our results also demonstrate that our method identifies biologically meaningful mappings under a comprehensive set of criteria used in the literature, as well as the statistical coherence measure that we developed to analyze the statistical significance of the similarity of the functions of the aligned protein pairs.
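The expected-value idea can be illustrated for a single aligned pair: by linearity of expectation, the expected number of conserved interactions is the sum of the probabilities of the probabilistic edges whose endpoints map onto deterministic edges of the counterpart. This is a simplified illustration with hypothetical node names, not the paper's full topological similarity score.

```python
def expected_conserved_edges(prob_neighbors_u, neighbors_v, mapping):
    """Expected number of conserved interactions for an aligned pair
    (u, v): sum the probabilities of u's probabilistic edges whose
    endpoints map onto deterministic neighbors of v."""
    return sum(p for node, p in prob_neighbors_u.items()
               if mapping.get(node) in neighbors_v)

# u's probabilistic interaction partners and v's deterministic ones
prob_neighbors_u = {"a": 0.9, "b": 0.4, "c": 0.7}
neighbors_v = {"x", "y"}
mapping = {"a": "x", "b": "y", "c": "z"}
score = expected_conserved_edges(prob_neighbors_u, neighbors_v, mapping)
```

Linearity of expectation is what sidesteps the exponential enumeration of edge subsets: no alternative topology ever needs to be materialized.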
Hiraishi, Kunihiko
2014-01-01
One of the significant topics in systems biology is to develop control theory of gene regulatory networks (GRNs). In typical control of GRNs, expression of some genes is inhibited (activated) by manipulating external stimuli and expression of other genes. It is expected that control theory of GRNs will be applied to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, and gene expression is expressed by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), which is one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for control theory of GRNs. PMID:24587766
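The dynamics of a probabilistic Boolean network can be sketched in a few lines: at each step a context (a tuple of per-gene Boolean update rules) is chosen according to its probability, and all genes are updated synchronously. This is a generic two-gene toy, not the WNT5A network or the CS-PBN machinery of the paper.

```python
import random

def pbn_step(state, contexts, probs, rng):
    """One step of a probabilistic Boolean network: pick a context
    with its probability, then update all genes synchronously."""
    context = rng.choices(contexts, weights=probs)[0]
    return tuple(rule(state) for rule in context)

# Two-gene toy network with two contexts (hypothetical rules)
ctx1 = (lambda s: s[1], lambda s: s[0] and s[1])
ctx2 = (lambda s: not s[0], lambda s: s[1])
rng = random.Random(0)
state = (True, False)
for _ in range(10):
    state = pbn_step(state, [ctx1, ctx2], [0.7, 0.3], rng)
```

Verification with a model checker such as PRISM amounts to analyzing the Markov chain this step function induces over the 2^n Boolean states.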
NASA Technical Reports Server (NTRS)
Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.
1995-01-01
This document is the Executive Summary of a technical report on a probabilistic risk assessment (PRA) of the Space Shuttle vehicle performed under the sponsorship of the Office of Space Flight of the US National Aeronautics and Space Administration. It briefly summarizes the methodology and results of the Shuttle PRA. The primary objective of this project was to support management and engineering decision-making with respect to the Shuttle program by producing (1) a quantitative probabilistic risk model of the Space Shuttle during flight, (2) a quantitative assessment of in-flight safety risk, (3) an identification and prioritization of the design and operations that principally contribute to in-flight safety risk, and (4) a mechanism for risk-based evaluation of proposed modifications to the Shuttle System. Secondary objectives were to provide a vehicle for introducing and transferring PRA technology to the NASA community, and to demonstrate the value of PRA by applying it beneficially to a real program of great international importance.
NASA Astrophysics Data System (ADS)
Scheingraber, Christoph; Käser, Martin; Allmann, Alexander
2017-04-01
Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention. So far, however, epistemic location uncertainty has received relatively little research attention. We propose a new framework for efficient treatment of location uncertainty. To demonstrate the usefulness of this novel method, a large number of synthetic portfolios resembling real-world portfolios is systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte-Carlo variance reduction techniques. The performance of each of the proposed criteria is analyzed.
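The basic effect of unknown coordinates on portfolio loss can be sketched with a toy Monte Carlo model: items with unknown locations are assigned a candidate site per run, and loss variability is measured across runs. The sites, damage ratios, and portfolio below are hypothetical, not the authors' framework.

```python
import random
import statistics

# Hypothetical candidate sites and their mean damage ratios for one event.
SITES = {"A": 0.02, "B": 0.10}

def simulate_loss(portfolio, runs=2000, seed=1):
    """Portfolio is a list of (insured value, site-or-None) pairs.
    A None site means the location is unknown and is sampled each run."""
    rng = random.Random(seed)
    losses = []
    for _ in range(runs):
        total = 0.0
        for value, site in portfolio:
            if site is None:                       # unknown location:
                site = rng.choice(list(SITES))     # sample a candidate site
            total += value * SITES[site]
        losses.append(total)
    return statistics.mean(losses), statistics.stdev(losses)

# One known-location item and one unknown-location item.
mean_loss, sd_loss = simulate_loss([(1000.0, "A"), (500.0, None)])
```

The nonzero standard deviation comes entirely from the unknown-location item, mirroring the loss uncertainty the abstract attributes to imprecise exposure locations.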
NASA Technical Reports Server (NTRS)
Boyce, Lola; Bast, Callie C.
1992-01-01
The research included ongoing development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue, or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with linear regression of that data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program, PROMISC. Actual experimental materials data were obtained from the open literature for materials typically of interest to those studying aerospace propulsion system components. Material data for Inconel 718 were analyzed using the developed methodology.
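A one-effect version of the multifactor interaction form, and its calibration by linear regression in log space, can be sketched as follows. The temperatures, exponent, and "experimental" data are synthetic stand-ins; this is not PROMISS or PROMISC itself.

```python
import math

# For a single effect (temperature T), a multifactor-interaction-style
# strength ratio is S/S0 = ((T_F - T)/(T_F - T0))**a, so
# log(S/S0) = a * log((T_F - T)/(T_F - T0)), and the empirical constant a
# follows from a zero-intercept least-squares fit (values hypothetical).
T_F, T0 = 1500.0, 20.0            # ultimate and reference temperatures

def strength_ratio(T, a):
    return ((T_F - T) / (T_F - T0)) ** a

# Synthetic "experimental" data generated with a_true = 0.5:
a_true = 0.5
data = [(T, strength_ratio(T, a_true)) for T in (200.0, 600.0, 1000.0)]

# Calibrate: a_hat = sum(x*y) / sum(x*x), with x = log(temperature term)
# and y = log(strength ratio), i.e., regression through the origin.
xs = [math.log((T_F - T) / (T_F - T0)) for T, _ in data]
ys = [math.log(s) for _, s in data]
a_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

With noise-free synthetic data the fit recovers the generating exponent exactly; with real scattered data the same regression yields a least-squares estimate.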
Thomas, Jeanice B; Duewer, David L; Mugenya, Isaac O; Phinney, Karen W; Sander, Lane C; Sharpless, Katherine E; Sniegoski, Lorna T; Tai, Susan S; Welch, Michael J; Yen, James H
2012-01-01
Standard Reference Material 968e Fat-Soluble Vitamins, Carotenoids, and Cholesterol in Human Serum provides certified values for total retinol, γ- and α-tocopherol, total lutein, total zeaxanthin, total β-cryptoxanthin, total β-carotene, 25-hydroxyvitamin D(3), and cholesterol. Reference and information values are also reported for nine additional compounds including total α-cryptoxanthin, trans- and total lycopene, total α-carotene, trans-β-carotene, and coenzyme Q(10). The certified values for the fat-soluble vitamins and carotenoids in SRM 968e were based on the agreement of results from the means of two liquid chromatographic methods used at the National Institute of Standards and Technology (NIST) and from the median of results of an interlaboratory comparison exercise among institutions that participate in the NIST Micronutrients Measurement Quality Assurance Program. The assigned values for cholesterol and 25-hydroxyvitamin D(3) in the SRM are the means of results obtained using the NIST reference method based upon gas chromatography-isotope dilution mass spectrometry and liquid chromatography-isotope dilution tandem mass spectrometry, respectively. SRM 968e is currently one of two available health-related NIST reference materials with concentration values assigned for selected fat-soluble vitamins, carotenoids, and cholesterol in human serum matrix. This SRM is used extensively by laboratories worldwide primarily to validate methods for determining these analytes in human serum and plasma and for assigning values to in-house control materials. The value assignment of the analytes in this SRM will help support measurement accuracy and traceability for laboratories performing health-related measurements in the clinical and nutritional communities.
Near Real-Time Event Detection & Prediction Using Intelligent Software Agents
2006-03-01
value was 0.06743. Multiple autoregressive integrated moving average (ARIMA) models were then built to see if the raw data, differenced data, or...slight improvement. The best adjusted r^2 value was found to be 0.1814. Successful results were not expected from linear or ARIMA-based modelling ...appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite
Towards a General-Purpose Belief Maintenance System.
1987-04-01
reason using normal two- or three-valued logic or using probabilistic values to represent partial belief. The design of the Belief Maintenance System is...as simply a generalization of Truth Maintenance Systems, whose possible reasoning tasks are a superset of those for a TMS. 2. DESIGN The design of...become support links in that they provide partial evidence in favor of a node. The basic design consists of three parts: (1) the conceptual control
NASA Technical Reports Server (NTRS)
Bullington, J. V.; Winkler, J. C.; Linton, D. G.; Khajenoori, S.
1995-01-01
The NASA Shuttle Logistics Depot (NSLD) is tasked with the responsibility for repair and manufacture of Line Replaceable Unit (LRU) hardware and components to support the Space Shuttle Orbiter. Due to shrinking budgets, cost-effective repair of LRUs becomes a primary objective. To achieve this objective, it is imperative that resources be assigned to those LRUs which have the greatest expectation of being needed as a spare. Forecasting the times at which spares are needed requires consideration of many significant factors including: failure rate, flight rate, spares availability, and desired level of support, among others. This paper summarizes the results of the research and development work that has been accomplished in producing an automated tool that assists in the assignment of effective repair start-times for LRUs at the NSLD. This system, called the Repair Start-time Assessment System (RSAS), uses probabilistic modeling technology to calculate a need date for a repair that considers the current repair pipeline status, as well as serviceable spares and projections of future demands. The output from the system is a date for beginning the repair that has significantly greater confidence (in the sense that a desired probability of support is ensured) than times produced using other techniques. Since an important output of RSAS is the longest repair turn-around time that will ensure a desired probability of support, RSAS has the potential for being applied to operations at any repair depot where spares are on-hand and repair start-times are of interest. In addition, RSAS incorporates tenets of Just-in-Time (JIT) techniques in that the latest repair start-time (i.e., the latest time at which repair resources must be committed) may be calculated for every failed unit. This could reduce the spares inventory for certain items, without significantly increasing the risk of unsatisfied demand.
NASA Astrophysics Data System (ADS)
Brunner, Manuela Irene; Seibert, Jan; Favre, Anne-Catherine
2018-02-01
Traditional design flood estimation approaches have focused on peak discharges and have often neglected other hydrograph characteristics such as hydrograph volume and shape. Synthetic design hydrograph estimation procedures overcome this deficiency by jointly considering peak discharge, hydrograph volume, and shape. Such procedures have recently been extended to allow for the consideration of process variability within a catchment by a flood-type specific construction of design hydrographs. However, they depend on observed runoff time series and are not directly applicable in ungauged catchments where such series are not available. To obtain reliable flood estimates, there is a need for an approach that allows for the consideration of process variability in the construction of synthetic design hydrographs in ungauged catchments. In this study, we therefore propose an approach that combines a bivariate index flood approach with event-type specific synthetic design hydrograph construction. First, regions of similar flood reactivity are delineated and a classification rule that enables the assignment of ungauged catchments to one of these reactivity regions is established. Second, event-type specific synthetic design hydrographs are constructed using the pooled data divided by event type from the corresponding reactivity region in a bivariate index flood procedure. The approach was tested and validated on a dataset of 163 Swiss catchments. The results indicated that 1) random forest is a suitable classification model for the assignment of an ungauged catchment to one of the reactivity regions, 2) the combination of a bivariate index flood approach and event-type specific synthetic design hydrograph construction enables the consideration of event types in ungauged catchments, and 3) the use of probabilistic class memberships in regional synthetic design hydrograph construction helps to alleviate the problem of misclassification. 
Event-type specific synthetic design hydrograph sets enable the inclusion of process variability into design flood estimation and can be used as a compromise between single best estimate synthetic design hydrographs and continuous simulation studies.
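The role of probabilistic class memberships mentioned in finding 3) can be sketched as a probability-weighted blend of region-specific hydrographs, rather than a hard assignment that risks misclassification. The region names, hydrograph ordinates, and membership probabilities below are hypothetical.

```python
# Hypothetical normalized design-hydrograph ordinates for two reactivity
# regions (time steps 0..4; peak discharge scaled to 1.0).
REGION_HYDROGRAPHS = {
    "flashy": [0.0, 0.8, 1.0, 0.4, 0.1],
    "slow":   [0.0, 0.3, 0.7, 1.0, 0.6],
}

def blended_hydrograph(memberships):
    """Probability-weighted average of regional hydrograph ordinates.
    memberships maps region name -> classifier membership probability."""
    n = len(next(iter(REGION_HYDROGRAPHS.values())))
    return [sum(p * REGION_HYDROGRAPHS[r][t] for r, p in memberships.items())
            for t in range(n)]

# An ungauged catchment classified as 70% "flashy", 30% "slow":
h = blended_hydrograph({"flashy": 0.7, "slow": 0.3})
```

A hard assignment would simply pick the most probable region's hydrograph; the soft blend degrades gracefully when the classifier is uncertain, which is the alleviation of misclassification the abstract describes.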
Bhattacharya, Moumita; Jurkovitz, Claudine; Shatkay, Hagit
2018-04-12
Patients associated with multiple co-occurring health conditions often face aggravated complications and less favorable outcomes. Co-occurring conditions are especially prevalent among individuals suffering from kidney disease, an increasingly widespread condition affecting 13% of the general population in the US. This study aims to identify and characterize patterns of co-occurring medical conditions in patients employing a probabilistic framework. Specifically, we apply topic modeling in a non-traditional way to find associations across SNOMED-CT codes assigned and recorded in the EHRs of >13,000 patients diagnosed with kidney disease. Unlike most prior work on topic modeling, we apply the method to codes rather than to natural language. Moreover, we quantitatively evaluate the topics, assessing their tightness and distinctiveness, and also assess the medical validity of our results. Our experiments show that each topic is succinctly characterized by a few highly probable and unique disease codes, indicating that the topics are tight. Furthermore, inter-topic distance between each pair of topics is typically high, illustrating distinctiveness. Last, most coded conditions grouped together within a topic, are indeed reported to co-occur in the medical literature. Notably, our results uncover a few indirect associations among conditions that have hitherto not been reported as correlated in the medical literature. Copyright © 2018. Published by Elsevier Inc.
Weakly Supervised Dictionary Learning
NASA Astrophysics Data System (ADS)
You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub
2018-05-01
We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.
Natural parameter values for generalized gene adjacency.
Yang, Zhenyu; Sankoff, David
2010-09-01
Given the gene orders in two modern genomes, it may be difficult to decide if some genes are close enough in both genomes to infer some ancestral proximity or some functional relationship. Current methods all depend on arbitrary parameters. We explore a class of gene proximity criteria and find two kinds of natural values for their parameters. One kind has to do with the parameter value where the expected information contained in two genomes about each other is maximized. The other kind of natural value has to do with parameter values beyond which all genes are clustered. We analyze these using combinatorial and probabilistic arguments as well as simulations.
NASA Astrophysics Data System (ADS)
Sikora, Jamie; Selby, John
2018-04-01
Bit commitment is a fundamental cryptographic task, in which Alice commits a bit to Bob such that she cannot later change the value of the bit, while, simultaneously, the bit is hidden from Bob. It is known that ideal bit commitment is impossible within quantum theory. In this work, we show that it is also impossible in generalized probabilistic theories (under a small set of assumptions) by presenting a quantitative trade-off between Alice's and Bob's cheating probabilities. Our proof relies crucially on a formulation of cheating strategies as cone programs, a natural generalization of semidefinite programs. In fact, using the generality of this technique, we prove that this result holds for the more general task of integer commitment.
Nonequilibrium Probabilistic Dynamics of the Logistic Map at the Edge of Chaos
NASA Astrophysics Data System (ADS)
Borges, Ernesto P.; Tsallis, Constantino; Añaños, Garín F.; de Oliveira, Paulo Murilo
2002-12-01
We consider nonequilibrium probabilistic dynamics in logistic-like maps x_{t+1} = 1 - a|x_t|^z (z > 1) at their chaos threshold: We first introduce many initial conditions within one among W >> 1 intervals partitioning the phase space and focus on the unique value q_sen < 1 for which the entropic form S_q ≡ (1 - ∑_{i=1}^{W} p_i^q)/(q - 1) …
Zientek, Michael L.; Chechetkin, Vladimir S.; Parks, Heather L.; Box, Stephen E.; Briggs, Deborah A.; Cossette, Pamela M.; Dolgopolova, Alla; Hayes, Timothy S.; Seltmann, Reimar; Syusyura, Boris; Taylor, Cliff D.; Wintzer, Niki E.
2014-01-01
This probabilistic assessment indicates that a significant amount of undiscovered copper is associated with sediment-hosted stratabound copper deposits in the Kodar-Udokan Trough. In the assessment, a mean of 21 undiscovered deposits is estimated to occur within the Kodar-Udokan area. There are two known deposits in the area that contain drill-identified resources of 19.6 million metric tons of copper. Using Monte Carlo simulation, probabilistic estimates of the numbers of undiscovered sandstone copper deposits for these tracts were combined with tonnage and grade distributions of sandstone copper deposits to forecast an arithmetic mean of 20.6 million metric tons of undiscovered copper. Significant value can be expected from associated metals, particularly silver.
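The Monte Carlo combination described above, drawing a number of undiscovered deposits and then a tonnage and grade for each, can be sketched generically. The distribution shapes and parameters below are hypothetical placeholders, not the assessment's fitted deposit-model distributions.

```python
import random
import statistics

def simulate_copper(runs=5000, seed=2):
    """Per run: draw a deposit count, then tonnage and grade per deposit,
    and accumulate contained copper (million metric tons).
    All distribution parameters are hypothetical."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        n = rng.randint(10, 32)                      # undiscovered deposits
        total = 0.0
        for _ in range(n):
            tonnage = rng.lognormvariate(4.0, 1.0)   # Mt of ore (skewed)
            grade = rng.uniform(0.5, 2.5) / 100.0    # Cu mass fraction
            total += tonnage * grade
        totals.append(total)
    return statistics.mean(totals)

mean_cu = simulate_copper()
```

The arithmetic mean over runs plays the role of the "arithmetic mean of undiscovered copper" quoted in such assessments; percentiles of `totals` would give the probabilistic range.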
How Many Impulsivities? A Discounting Perspective
ERIC Educational Resources Information Center
Green, Leonard; Myerson, Joel
2013-01-01
People discount the value of delayed and uncertain outcomes, and how steeply individuals discount is thought to reflect how impulsive they are. From this perspective, steep discounting of delayed outcomes (which fails to maximize long-term welfare) and shallow discounting of probabilistic outcomes (which fails to adequately take risk into account)…
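The delay- and probability-discounting functions standard in this literature are hyperbolic in the delay and in the odds against receiving the outcome. The parameter values below are hypothetical illustrations, not estimates from the article.

```python
# Hyperbolic discounting: delayed value V = A / (1 + k*D), and
# probabilistic value V = A / (1 + h*theta) with theta = (1 - p)/p the
# odds against the outcome. Steeper k indicates more impulsive delay
# discounting; shallower h indicates more risk-prone choice.
def delay_discounted(amount, delay, k):
    return amount / (1.0 + k * delay)

def probability_discounted(amount, p, h):
    odds_against = (1.0 - p) / p
    return amount / (1.0 + h * odds_against)

v_delay = delay_discounted(100.0, delay=30.0, k=0.05)
v_prob = probability_discounted(100.0, p=0.5, h=1.0)
```

That the two forms take different parameters (k vs. h) is what lets the delay and probability versions of "impulsivity" dissociate, which is the article's central question.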
Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.
Marino, Dale J; Starr, Thomas B
2007-12-01
A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. 
Results show that there was relatively little difference, i.e., <10% in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.
On forecasting severe storms in Alberta using environmental sounding data
NASA Astrophysics Data System (ADS)
Dupilka, Maxwell L.
Thermodynamic and dynamic parameters computed from observed sounding data are examined to determine whether they can aid in forecasting the potential for severe weather in Alberta. The primary focus is to investigate which sounding parameters can provide probabilistic guidance to distinguish between Significant Tornadoes (F2 to F4), Weak Tornadoes (F0 and F1), and Non-Tornado severe hail storms (≥ 3 cm diameter hail but no reported tornado). The observational data set contains 87 thunderstorm events from 1967 to 2000 within 200 km of Stony Plain, Alberta. Three tornadic thunderstorms with F-scale ratings of F3 and F4 are examined in more detail. A secondary focus is to determine whether sounding data can be used to predict 24 hour snowfall amounts (specifically amounts ≥ 10 cm). Snowfall data covered all of Alberta east of the mountains from October 1990 to April 1993. The major findings were: (a) Significant Tornadoes tended to have stronger environmental bulk wind shear values than Weak Tornadoes or Non-Tornado storms, with a shear magnitude in the 900-500 mb layer exceeding 3 m s⁻¹ km⁻¹. Combining the 900-500 mb shear with the 900-800 mb shear increased the probabilistic guidance for the likelihood of Significant Tornado occurrence. (b) Values of storm-relative helicity showed skill in distinguishing Significant Tornadoes from both Weak Tornadoes and Non-Tornadoes. Significant Tornadoes tended to occur with 0-3 km storm-relative helicity >140 m² s⁻² whereas Weak Tornadoes were typically formed with values between 30 and 150 m² s⁻². (c) The amount of precipitable water showed statistically significant differences between Significant Tornadoes and the other two groups. Significant Tornadoes had values exceeding 21 mm. Combining precipitable water values with the 900-500 mb shear increased the probabilistic guidance for the potential of Significant Tornadoes.
(d) Values of thermal buoyancy, storm convergence, and height of the lifted condensation level provided no skill in discriminating between the three storm categories. (e) The Edmonton tornado case, unlike the Holden and Pine Lake cases, did not feature a prominent synoptic scale moisture front. (f) Observed snowfall amounts showed a roughly linear dependence on the 850 mb temperature, supporting a moisture conservation theory.
Hernaus, Dennis; Gold, James M; Waltz, James A; Frank, Michael J
2018-04-03
While many have emphasized impaired reward prediction error signaling in schizophrenia, multiple studies suggest that some decision-making deficits may arise from overreliance on stimulus-response systems together with a compromised ability to represent expected value. Guided by computational frameworks, we formulated and tested two scenarios in which maladaptive representations of expected value should be most evident, thereby delineating conditions that may evoke decision-making impairments in schizophrenia. In a modified reinforcement learning paradigm, 42 medicated people with schizophrenia and 36 healthy volunteers learned to select the most frequently rewarded option in a 75-25 pair: once when presented with a more deterministic (90-10) pair and once when presented with a more probabilistic (60-40) pair. Novel and old combinations of choice options were presented in a subsequent transfer phase. Computational modeling was employed to elucidate contributions from stimulus-response systems (actor-critic) and expected value (Q-learning). People with schizophrenia showed robust performance impairments with increasing value difference between two competing options, which strongly correlated with decreased contributions from expected value-based learning (Q-learning). Moreover, a subtle yet consistent contextual choice bias for the probabilistic 75 option was present in people with schizophrenia, which could be accounted for by a context-dependent reward prediction error in the actor-critic. We provide evidence that decision-making impairments in schizophrenia increase monotonically with demands placed on expected value computations. A contextual choice bias is consistent with overreliance on stimulus-response learning, which may signify a deficit secondary to the maladaptive representation of expected value. These results shed new light on conditions under which decision-making impairments may arise. Copyright © 2018 Society of Biological Psychiatry. 
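The Q-learning (expected value) component of such computational modeling can be sketched minimally; the actor-critic comparison from the study is omitted, and the learning rate, trial count, and random exploration policy below are hypothetical.

```python
import random

def run_q_learning(p_reward, alpha=0.1, trials=2000, seed=3):
    """Learn expected values for two options whose reward probabilities
    are given in p_reward (e.g., the 75-25 pair from the paradigm)."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                                   # expected value per option
    for _ in range(trials):
        choice = rng.randrange(2)                    # explore both options
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])    # prediction-error update
    return q

q_values = run_q_learning([0.75, 0.25])
```

After learning, the Q-values approximate the true reward probabilities, so choice accuracy should scale with the value difference between options; an actor-critic learner, by contrast, tracks preferences relative to a context-dependent critic, which can produce the contextual bias described above.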
van Dijk, Eric; De Cremer, David
2006-10-01
Previous research on the allocation of scarce resources suggests that people who are assigned to higher positions (e.g., leaders) are more likely to make self-benefiting allocations than people who are assigned to lower positions (e.g., followers). In this article, the authors investigated the proposition that these findings would be moderated by people's social value orientations. In two experimental studies, the authors assigned participants either to the role of leader or follower and assessed the participants' social value orientations. In agreement with predictions, the findings show that position effects are moderated by social value orientation. Social value orientations only affected the allocation behavior of the leaders: Proself leaders allocated more resources to themselves than did prosocial leaders. Additional analyses indicate that these effects are mediated by feelings of entitlement.
Tolosa, Imma; Cassi, Roberto; Huertas, David
2018-04-11
A new marine sediment certified reference material (IAEA 459) with very low concentrations (μg kg⁻¹) for a variety of persistent organic pollutants (POPs) listed by the Stockholm Convention, as well as other POPs and priority substances (PSs) listed in many environmental monitoring programs, was developed by the IAEA. The sediment material was collected from the Ham River estuary in South Korea, and the assigned final values were derived from robust statistics on the results provided by selected laboratories which demonstrated technical and quality competence, following the guidance given in ISO Guide 35. The robust mean of the laboratory means was assigned as the certified value for those compounds where the assigned value was derived from at least five datasets and its relative expanded uncertainty was less than 40% of the assigned value (most of the values ranging from 8 to 20%). All the datasets were derived from at least two different analytical techniques, which has allowed the assignment of certified concentrations for 22 polychlorinated biphenyl (PCB) congeners, 6 organochlorinated (OC) pesticides, 5 polybrominated diphenyl ethers (PBDEs), and 18 polycyclic aromatic hydrocarbons (PAHs). Mass fractions of compounds that did not fulfill the criteria of certification are considered information values, which include 29 PAHs, 11 PCBs, 16 OC pesticides, and 5 PBDEs. The extensive characterization and associated uncertainties at concentration levels close to the marine sediment quality guidelines will make CRM 459 a valuable matrix reference material for use in marine environmental monitoring programs.
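The certification criteria described above (at least five datasets; relative expanded uncertainty under 40%) can be sketched as a decision rule. The median stands in here for the robust mean, and the lab means and coverage factor are hypothetical, not the IAEA's actual robust-statistics procedure.

```python
import statistics

def assign_value(lab_means, coverage_factor=2.0):
    """Return (certified value, expanded uncertainty), or (None, reason)
    when only an information value can be assigned. The median is a simple
    stand-in for a robust mean; all numbers are hypothetical."""
    if len(lab_means) < 5:
        return None, "information value: fewer than 5 datasets"
    value = statistics.median(lab_means)
    u = statistics.stdev(lab_means) / len(lab_means) ** 0.5  # std. uncertainty
    expanded = coverage_factor * u
    if expanded / value >= 0.40:
        return None, "information value: relative uncertainty >= 40%"
    return value, expanded

value, expanded = assign_value([10.2, 9.8, 10.5, 10.0, 9.9, 10.3])
```

With tightly agreeing lab means the rule certifies a value; a short or scattered dataset drops to an information value, mirroring the two categories in the abstract.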
2002-01-01
the present work, the Advanced Mean Value method developed by Millwater and co-workers is used [6-10]. II.1.1 Advanced Mean-Value Method The...Engineering A, submitted for publication, December 2001. 6. H. R. Millwater and Y.-T. Wu, "Computational Structural Reliability Analysis of a...Turbine Blade," Proceedings International Gas Turbine and Aeroengine Congress and Exposition, Cincinnati, OH, May 24-27, 1993. 7. Millwater, H.R., Y
Exposure assessment of 3-monochloropropane-1, 2-diol esters from edible oils and fats in China.
Li, Chang; Nie, Shao-Ping; Zhou, Yong-Qiang; Xie, Ming-Yong
2015-01-01
3-Monochloropropane-1,2-diol (3-MCPD) esters from edible oils are considered to be a possible risk factor for adverse effects in humans. In the present study, an exposure assessment of 3-MCPD esters for the Chinese population was performed. A total of 143 edible oil and fat samples collected from Chinese markets were analyzed for concentrations of 3-MCPD esters. The concentration data, together with data on fat consumption, were analyzed by point evaluation and probabilistic assessment for the exposure assessment. The point evaluation showed that the mean daily intake (DI) of 3-MCPD esters was lower than the provisional maximum tolerable daily intake (PMTDI) of 3-MCPD (2 µg/kg BW/d). The mean DI values in different age groups obtained from the probabilistic assessment were similar to the results of the point evaluation. However, at high percentiles (95th, 97.5th, 99th), the DI values in all age groups were undesirably higher than the PMTDI. Overall, children and adolescents were exposed to more 3-MCPD esters than adults. Uncertainty was also analyzed for the exposure assessment. Decreasing the level of 3-MCPD esters in edible oils and consuming less oil are the top priorities for minimizing the risk of 3-MCPD esters.
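The probabilistic intake calculation behind such assessments is a Monte Carlo over DI = concentration × oil consumed / body weight, with percentiles compared against the PMTDI. All distributions below are hypothetical placeholders, not the study's survey data.

```python
import random

PMTDI = 2.0  # ug 3-MCPD / kg body weight / day

def simulate_di(runs=10000, seed=4):
    """Sample daily intake DI = conc * oil / bw per run and return the
    median and 95th-percentile DI (all distributions hypothetical)."""
    rng = random.Random(seed)
    dis = []
    for _ in range(runs):
        conc = rng.lognormvariate(0.0, 0.8)   # ug 3-MCPD esters per g oil
        oil = rng.uniform(10.0, 60.0)         # g oil consumed per day
        bw = rng.gauss(60.0, 10.0)            # body weight, kg
        dis.append(conc * oil / max(bw, 30.0))
    dis.sort()
    return dis[len(dis) // 2], dis[int(0.95 * len(dis))]

median_di, p95_di = simulate_di()
high_percentile_exceeds = p95_di > PMTDI
```

The pattern reported in the abstract, mean DI below the PMTDI but high percentiles above it, arises naturally from the right-skewed concentration distribution in such a simulation.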
Probabilistic Cross-identification of Cosmic Events
NASA Astrophysics Data System (ADS)
Budavári, Tamás
2011-08-01
I discuss a novel approach to identifying cosmic events in separate and independent observations. The focus is on the true events, such as supernova explosions, that happen once and, hence, whose measurements are not repeatable. Their classification and analysis must make the best use of all available data. Bayesian hypothesis testing is used to associate streams of events in space and time. Probabilities are assigned to the matches by studying their rates of occurrence. A case study of Type Ia supernovae illustrates how to use light curves in the cross-identification process. Constraints from realistic light curves happen to be well approximated by Gaussians in time, which makes the matching process very efficient. Model-dependent associations are computationally more demanding but can further boost one's confidence.
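The "well approximated by Gaussians in time" observation can be sketched with a simplified matching weight: the likelihood that two detections arise from one event falls off as a Gaussian in their time difference. This is an illustrative likelihood only, not the paper's full Bayes-factor machinery, and the times and uncertainties are hypothetical.

```python
import math

def match_weight(t1, s1, t2, s2):
    """Gaussian likelihood (in the time difference) that detections at
    times t1, t2 with uncertainties s1, s2 come from the same event."""
    var = s1 ** 2 + s2 ** 2
    return math.exp(-((t1 - t2) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

near = match_weight(100.0, 1.0, 100.5, 1.0)   # detections half a day apart
far = match_weight(100.0, 1.0, 110.0, 1.0)    # detections ten days apart
```

Because the weight is an analytic Gaussian, candidate associations across large event streams can be scored and pruned very cheaply, which is the efficiency gain the abstract points to.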
Risk analysis of Safety Service Patrol (SSP) systems in Virginia.
Dickey, Brett D; Santos, Joost R
2011-12-01
The transportation infrastructure is a vital backbone of any regional economy as it supports workforce mobility, tourism, and a host of socioeconomic activities. In this article, we specifically examine the incident management function of the transportation infrastructure. In many metropolitan regions, incident management is handled primarily by safety service patrols (SSPs), which monitor and resolve roadway incidents. In Virginia, SSP allocation across highway networks is based typically on average vehicle speeds and incident volumes. This article implements a probabilistic network model that partitions "business as usual" traffic flow with extreme-event scenarios. Results of simulated network scenarios reveal that flexible SSP configurations can improve incident resolution times relative to predetermined SSP assignments. © 2011 Society for Risk Analysis.
24 CFR 576.102 - Emergency shelter component.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., dating violence, sexual assault, or stalking, including services offered by rape crisis centers and... Real Property Acquisition Policies Act of 1970 (URA). Eligible costs are the costs of providing URA... reasonable monetary value assigned to the building, such as the value assigned by an independent real estate...
Forecasting the duration of volcanic eruptions: an empirical probabilistic model
NASA Astrophysics Data System (ADS)
Gunn, L. S.; Blake, S.; Jones, M. C.; Rymer, H.
2014-01-01
The ability to forecast future volcanic eruption durations would greatly benefit emergency response planning prior to and during a volcanic crisis. This paper introduces a probabilistic model to forecast the duration of future and ongoing eruptions. The model fits theoretical distributions to observed duration data and relies on past eruptions being a good indicator of future activity. A dataset of historical Mt. Etna flank eruptions is presented and used to demonstrate the model. The data have been compiled through critical examination of the existing literature, with careful consideration of the uncertainties on reported eruption start and end dates between the years 1300 AD and 2010. Data from after 1600 are considered reliable and free of reporting biases. The distribution of eruption durations between 1600 and 1669 is found to be statistically different from that of the following period, so the forecasting model is run on two datasets of Mt. Etna flank eruption durations: 1600-2010 and 1670-2010. Each dataset is modelled using a log-logistic distribution with parameter values found by maximum likelihood estimation. Survivor function statistics are applied to the model distributions to forecast (a) the probability of an eruption exceeding a given duration, (b) the probability of an eruption that has already lasted a particular number of days exceeding a given total duration and (c) the duration with a given probability of being exceeded. Results show that excluding the 1600-1670 data has little effect on the forecasting model result, especially where short durations are involved. By assigning the terms `likely' and `unlikely' to probabilities of 66 % or more and 33 % or less, respectively, the forecasting model based on the 1600-2010 dataset indicates that a future flank eruption on Mt. Etna would be likely to exceed 20 days (± 7 days) but unlikely to exceed 86 days (± 29 days).
This approach can easily be adapted for use on other highly active, well-documented volcanoes or for different duration data such as the duration of explosive episodes or the duration of repose periods between eruptions.
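The three survivor-function forecasts (a)-(c) follow directly from the log-logistic form S(t) = 1/(1 + (t/alpha)^beta); the parameter values below are illustrative placeholders, not the MLE estimates fitted to the Etna data:

```python
# Survivor-function forecasts from a log-logistic duration model,
# S(t) = 1 / (1 + (t/alpha)^beta), with alpha the scale (days) and beta the shape.
def survivor(t, alpha, beta):
    """(a) P(eruption duration exceeds t days)."""
    return 1.0 / (1.0 + (t / alpha) ** beta)

def conditional_exceed(t_total, t_elapsed, alpha, beta):
    """(b) P(duration > t_total | eruption has already lasted t_elapsed days)."""
    return survivor(t_total, alpha, beta) / survivor(t_elapsed, alpha, beta)

def duration_at_prob(p, alpha, beta):
    """(c) duration exceeded with probability p (inverse of the survivor function)."""
    return alpha * ((1.0 - p) / p) ** (1.0 / beta)

alpha, beta = 30.0, 1.5  # hypothetical fit, not the paper's values
print(round(duration_at_prob(0.66, alpha, beta), 1), "days exceeded with p = 0.66")
```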
ERIC Educational Resources Information Center
Rothstein, Jesse
2009-01-01
Non-random assignment of students to teachers can bias value added estimates of teachers' causal effects. Rothstein (2008a, b) shows that typical value added models indicate large counter-factual effects of 5th grade teachers on students' 4th grade learning, indicating that classroom assignments are far from random. This paper quantifies the…
Assigned value improves memory of proper names.
Festini, Sara B; Hartley, Alan A; Tauber, Sarah K; Rhodes, Matthew G
2013-01-01
Names are more difficult to remember than other personal information such as occupations. The current research examined the influence of assigned point value on memory and metamemory judgements for names and occupations to determine whether incentive can improve recall of proper names. In Experiment 1 participants studied face-name and face-occupation pairs assigned 1 or 10 points, made judgements of learning, and were given a cued recall test. High-value names were recalled more often than low-value names. However, recall of occupations was not influenced by value. In Experiment 2 meaningless nonwords were used for both names and occupations. The name difficulty disappeared, and value influenced recall of both names and occupations. Thus value similarly influenced names and occupations when meaningfulness was held constant. In Experiment 3 participants were required to use overt rote rehearsal for all items. Value did not boost recall of high-value names, suggesting that differential processing could not be implemented to improve memory. Thus incentives may improve memory for proper names by motivating people to engage in selective rehearsal and effortful elaborative processing.
Poças, Maria F; Oliveira, Jorge C; Brandsch, Rainer; Hogg, Timothy
2010-07-01
The use of probabilistic approaches in exposure assessments of contaminants migrating from food packages is of increasing interest, but the lack of concentration or migration data is often cited as a limitation. Probabilistic analysis requires data that account for the variability and uncertainty expected in migration, for example due to heterogeneity in the packaging system, variation of temperature along the distribution chain, and the different times at which individual packages are consumed. The objective of this work was to characterize quantitatively the uncertainty and variability in estimates of migration. A Monte Carlo simulation was applied to a typical solution of Fick's law with given variability in the input parameters. The analysis was performed on experimental data from a model system (migration of Irgafos 168 from polyethylene into isooctane) and illustrates how important sources of variability and uncertainty can be identified in order to refine analyses. For long migration times and controlled temperature conditions, the affinity of the migrant for the food can be the major factor determining the variability in migration values (more than 70% of the variance). In situations where both the time of consumption and the temperature can vary, these factors can be responsible for more than 60% and 20% of the variance in the migration estimates, respectively. The approach presented can be used with databases from consumption surveys to yield a true probabilistic estimate of exposure.
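A minimal sketch of the Monte Carlo step, propagating variability through a short-time solution of Fick's law for migration into a well-mixed simulant; all parameter distributions are illustrative stand-ins for the study's fitted inputs:

```python
import math
import random

random.seed(1)

# Short-time solution of Fick's law for a polymer film in contact with a
# well-mixed simulant: m_t / A ~ c0 * rho * 2 * sqrt(D * t / pi).
def migration(c0, rho, d_coeff, t_s):
    """Migrated mass per contact area (g/cm^2) at time t_s."""
    return c0 * rho * 2.0 * math.sqrt(d_coeff * t_s / math.pi)

samples = []
for _ in range(10_000):
    c0 = random.gauss(1.0e-3, 1.0e-4)       # initial concentration, g/g polymer
    d_coeff = 10 ** random.gauss(-12, 0.3)  # diffusion coefficient, cm^2/s
    t_s = random.uniform(5, 30) * 86_400    # time of consumption, s
    samples.append(migration(c0, 0.95, d_coeff, t_s))

samples.sort()
print("median:", samples[len(samples) // 2], " p95:", samples[int(0.95 * len(samples))])
```

Variance decomposition of the sampled inputs against the outputs would then identify which sources dominate, as in the study.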
The Value of Interactive Assignments in the Online Learning Environment
ERIC Educational Resources Information Center
Florenthal, Bela
2016-01-01
The offerings of Web-based supplemental material for textbooks have been increasingly growing. When deciding to adopt a textbook, instructors examine the added value of the associated supplements, also called "e-learning tools," to enhance students' learning of course concepts. In this study, one such supplement, interactive assignments,…
Student's Lab Assignments in PDE Course with MAPLE.
ERIC Educational Resources Information Center
Ponidi, B. Alhadi
Computer-aided software has been used intensively in many mathematics courses, especially in computational subjects, to solve initial value and boundary value problems in Partial Differential Equations (PDE). Many software packages were used in student lab assignments such as FORTRAN, PASCAL, MATLAB, MATHEMATICA, and MAPLE in order to accelerate…
Teachers' Grading Practices: Meaning and Values Assigned
ERIC Educational Resources Information Center
Sun, Youyi; Cheng, Liying
2014-01-01
This study explores the meaning Chinese secondary school English language teachers associate with the grades they assign to their students, and the value judgements they make in grading. A questionnaire was issued to 350 junior and senior school English language teachers in China. The questionnaire data were analysed both quantitatively and…
NASA Astrophysics Data System (ADS)
Juarez, A. M.; Kibler, K. M.; Sayama, T.; Ohara, M.
2016-12-01
Flood management decision-making is often supported by risk assessment, which may overlook the role of coping capacity and the potential benefits derived from direct use of flood-prone land. Alternatively, risk-benefit analysis can support floodplain management to yield maximum socio-ecological benefits for the minimum flood risk. We evaluate flood risk-probabilistic benefit tradeoffs of livelihood practices compatible with direct human use of flood-prone land (agriculture/wild fisheries) and nature conservation (wild fisheries only) in Candaba, Philippines. The Candaba area, north-west of Metro Manila, is a multi-functional landscape that provides a temporally variable mix of possible land uses, benefits and ecosystem services of local and regional value. To characterize inundation at recurrence intervals from 1.3 to 100 years, we couple frequency analysis with rainfall-runoff-inundation modelling and remotely sensed data. By combining simulated probabilistic floods with both damage and benefit functions (e.g. fish capture and rice yield as functions of flood intensity), we estimate potential damages and benefits over varying probabilistic flood hazards. We find that although direct human uses of flood-prone land are associated with damages, across all investigated flood magnitudes and frequencies the probabilistic benefits ($91 million) exceed the risks ($33 million) by a large margin. Even accounting for risk, the probabilistic livelihood benefits of direct human uses far exceed the benefits provided by scenarios that exclude direct "risky" human uses (a difference of $85 million). In addition, we find that individual coping strategies, such as adapting crop planting periods to the flood pulse or fishing rather than cultivating rice in the wet season, minimize flood losses ($6 million) while allowing for valuable livelihood benefits ($125 million) in flood-prone land.
Analysis of societal benefits and local capacities to cope with regular floods demonstrate the relevance of accounting for the full range of flood events and their relation to both potential damages and benefits in risk assessments. Management measures may thus be designed to reflect local contexts and support benefits of natural hydrologic processes, while minimizing flood damage.
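Aggregating damages or benefits over probabilistic flood hazards amounts to integrating per-event value against exceedance probability; a minimal sketch with invented return periods and $M values (not the study's estimates):

```python
# Expected annual damage (or benefit) as the integral of per-event value
# over annual exceedance probability, approximated with the trapezoidal rule.
def expected_annual_value(return_periods_yr, values):
    pairs = sorted((1.0 / t, v) for t, v in zip(return_periods_yr, values))
    ev = 0.0
    for (p0, v0), (p1, v1) in zip(pairs, pairs[1:]):
        ev += 0.5 * (v0 + v1) * (p1 - p0)
    return ev

ead = expected_annual_value([1.3, 5, 10, 25, 50, 100],
                            [0.5, 2.0, 3.5, 5.0, 6.5, 8.0])
print(f"expected annual damage ~ ${ead:.2f}M")
```

Running the same integration with a benefit function in place of the damage function gives the probabilistic benefit side of the tradeoff.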
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy and gappy, with missing values, and can be biased by land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirical Bayes in nature: model parameters are treated as known and assigned point values. But, in reality, the parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data by comparing them to a fully Bayesian method that treats the parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and the parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time: the 95% credible intervals on sea level values from the empirical Bayes method in 1910 and 2010 are 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, the empirical Bayes and full Bayes methods are applied to corrupted data from a known surrogate field.
Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
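The rank-histogram check used in the evaluation can be sketched as follows; the data here are toy values, and the actual study ranks true surrogate-field values among posterior samples:

```python
# Rank histogram: for each true value, count how many ensemble/posterior
# samples fall below it. A flat histogram across ranks indicates reliable
# error bars; a U-shaped (edge-heavy) one indicates too-narrow intervals.
def rank_histogram(truths, ensembles):
    n_members = len(ensembles[0])
    hist = [0] * (n_members + 1)
    for truth, ens in zip(truths, ensembles):
        rank = sum(1 for e in ens if e < truth)
        hist[rank] += 1
    return hist

# Toy example: three truths ranked among the same two samples.
print(rank_histogram([0.05, 0.5, 0.95], [[0.4, 0.6]] * 3))  # [1, 1, 1]
```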
Transactional Problem Content in Cost Discounting: Parallel Effects for Probability and Delay
ERIC Educational Resources Information Center
Jones, Stephen; Oaksford, Mike
2011-01-01
Four experiments investigated the effects of transactional content on temporal and probabilistic discounting of costs. Kusev, van Schaik, Ayton, Dent, and Chater (2009) have shown that content other than gambles can alter decision-making behavior even when associated value and probabilities are held constant. Transactions were hypothesized to lead…
Probabilistic seismic loss estimation via endurance time method
NASA Astrophysics Data System (ADS)
Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.
2017-01-01
Probabilistic seismic loss estimation is a methodology that expresses the performance of buildings quantitatively and explicitly, in terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires Incremental Dynamic Analysis (IDA), which involves hundreds of time-consuming analyses and in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to this end and their appropriateness was evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA-driven response predictions for 34 code-conforming benchmark structures and was shown to be sufficiently precise while offering a great deal of efficiency. Loss values were then estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 methodology; these values were found to suffer from varying inaccuracies, which were attributed to the discretized nature of the damage and loss prediction functions provided by ATC 58.
Durán, I; Beiras, R
2013-10-01
Acute water quality criteria (WQC) for the protection of coastal ecosystems are developed on the basis of short-term ecotoxicological data using the most sensitive life stages of representative species from the main taxa of marine water column organisms. A probabilistic approach based on species sensitivity distribution (SSD) curves has been chosen and compared to the WQC obtained applying an assessment factor to the critical toxicity values, i.e. the 'deterministic' approach. The criteria obtained from HC5 values (5th percentile of the SSD) were 1.01 μg/l for Hg, 1.39 μg/l for Cu, 3.83 μg/l for Cd, 25.3 μg/l for Pb and 8.24 μg/l for Zn. Using sensitive early life stages and very sensitive endpoints allowed calculation of WQC for marine coastal ecosystems. These probabilistic WQC, intended to protect 95% of the species in 95% of the cases, were calculated on the basis of a limited ecotoxicological dataset, avoiding the use of large and uncertain assessment factors. Copyright © 2013 Elsevier B.V. All rights reserved.
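The HC5 computation behind such criteria can be sketched by fitting a log-normal SSD and taking its 5th percentile; the toxicity values below are hypothetical, and the study's actual fitting procedure may differ:

```python
import math
from statistics import NormalDist, mean, stdev

def hc5(toxicity_ug_per_l):
    """5th percentile (HC5) of a log-normal species sensitivity distribution,
    fitted by moments to log10-transformed toxicity values (sketch)."""
    logs = [math.log10(x) for x in toxicity_ug_per_l]
    dist = NormalDist(mean(logs), stdev(logs))
    return 10 ** dist.inv_cdf(0.05)

# Hypothetical critical toxicity values (e.g. EC50s) for six species, ug/l:
print(round(hc5([2.0, 5.0, 8.0, 20.0, 50.0, 120.0]), 2))
```

The HC5 sits below the most sensitive tested species here, which is the intended behavior: it is designed to protect 95% of species in the distribution.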
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution networks today face the challenge of meeting increased load demand from the industrial, commercial and residential sectors. The load pattern depends strongly on consumer behavior and on temporal factors such as the season of the year, the day of the week and the time of day. Deterministic radial distribution load flow studies take the load as constant, but load varies continually with a high degree of uncertainty, so there is a need to model a probable, realistic load. Monte Carlo simulation is used to model this load by generating random values of active and reactive power from the mean and standard deviation of the load, and a deterministic radial load flow is solved with these values. The probabilistic solution is then reconstructed from the deterministic results of each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding its impact on unbalanced radial distribution load flow; and comparing the voltage profiles and losses for balanced and unbalanced radial distribution load flow under probable realistic ZIP load modeling.
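A minimal sketch of the Monte Carlo procedure, using a toy two-bus load flow as a stand-in for the paper's radial distribution solver; all per-unit figures are illustrative:

```python
import random

random.seed(0)

# Probabilistic load flow sketch: sample the load from its mean and standard
# deviation, solve a deterministic load flow per sample, and summarize the
# voltage statistics. The "solver" here is a toy single-load-bus fixed-point
# iteration, not a full backward/forward-sweep radial solver.
def two_bus_load_flow(p_pu, q_pu, v_source=1.0, z=0.01 + 0.03j, tol=1e-10):
    s = complex(p_pu, q_pu)
    v = complex(v_source)
    for _ in range(100):
        v_new = v_source - z * (s / v).conjugate()
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v

mags = []
for _ in range(5_000):
    p = random.gauss(0.80, 0.08)  # active power, pu (10% standard deviation)
    q = random.gauss(0.40, 0.04)  # reactive power, pu
    mags.append(abs(two_bus_load_flow(p, q)))

mags.sort()
print("mean |V|:", sum(mags) / len(mags), " 5th percentile:", mags[len(mags) // 20])
```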
Tong, Ruipeng; Yang, Xiaoyi; Su, Hanrui; Pan, Yue; Zhang, Qiuzhuo; Wang, Juan; Long, Mingce
2018-03-01
The levels, sources and quantitative probabilistic health risks of polycyclic aromatic hydrocarbons (PAHs) in agricultural soils in the vicinity of power, steel and petrochemical plants in the suburbs of Shanghai are discussed. The total concentration of 16 PAHs in the soils ranges from 223 to 8214 ng g⁻¹. The sources of PAHs were analyzed by both isomeric ratios and a principal component analysis-multiple linear regression method. The results indicate that the PAHs mainly originated from the incomplete combustion of coal and oil. The probabilistic carcinogenic and non-carcinogenic risks posed by PAHs in soils, with adult farmers as the receptors of concern, were quantitatively calculated by Monte Carlo simulation. The estimated total carcinogenic risk (TCR) for the agricultural soils has a 45% probability of exceeding the acceptable threshold value (10⁻⁶), indicating potential adverse health effects. However, all non-carcinogenic risks are below the threshold value. Oral intake is the dominant exposure pathway, accounting for 77.7% of the TCR, while inhalation intake is negligible. The three PAHs contributing most to the TCR are BaP (64.35%), DBA (17.56%) and InP (9.06%). Sensitivity analyses indicate that exposure frequency has the greatest impact on the total risk uncertainty, followed by the exposure dose through oral intake and the exposure duration. These results indicate that it is essential to manage the health risks of PAH-contaminated agricultural soils in the vicinity of typical industries in megacities. Copyright © 2017 Elsevier B.V. All rights reserved.
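The Monte Carlo risk calculation follows the standard intake-times-slope-factor form; a sketch of the oral pathway with illustrative parameter distributions (the study's fitted inputs are not reproduced here):

```python
import random

random.seed(42)

# Monte Carlo sketch of total carcinogenic risk (TCR) via oral soil intake,
# Risk = CDI x CSF, with concentrations as BaP-equivalents in mg/kg.
# All distributions below are illustrative assumptions.
def oral_risk(c_mg_kg, ing_mg_day, ef_days_yr, ed_yr, bw_kg,
              csf=7.3, at_days=70 * 365):
    """Chronic daily intake (mg/kg/day) times the cancer slope factor."""
    cdi = c_mg_kg * ing_mg_day * 1e-6 * ef_days_yr * ed_yr / (bw_kg * at_days)
    return cdi * csf

risks = []
for _ in range(10_000):
    c = 10 ** random.gauss(-0.5, 0.4)  # BaP-equivalent concentration, mg/kg
    ef = random.uniform(180, 350)      # exposure frequency, days/year
    bw = random.gauss(60.0, 8.0)       # body weight, kg
    risks.append(oral_risk(c, 100.0, ef, 24.0, bw))

exceed = sum(r > 1e-6 for r in risks) / len(risks)
print(f"P(TCR > 1e-6) = {exceed:.2f}")
```

A sensitivity analysis would then correlate each sampled input against the risk output, as the study does for exposure frequency, oral dose and duration.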
Economic impact of Tegaderm chlorhexidine gluconate (CHG) dressing in critically ill patients.
Thokala, Praveen; Arrowsmith, Martin; Poku, Edith; Martyn-St James, Marissa; Anderson, Jeff; Foster, Steve; Elliott, Tom; Whitehouse, Tony
2016-09-01
To estimate the economic impact of a Tegaderm™ chlorhexidine gluconate (CHG) gel dressing compared with a standard intravenous (i.v.) dressing (defined as a non-antimicrobial transparent film dressing), used for insertion-site care of short-term central venous and arterial catheters (intravascular catheters) in adult critical care patients, using a cost-consequence model populated with data from published sources. A decision-analytic cost-consequence model was developed which assigned, to each patient with an indwelling intravascular catheter and a standard dressing, baseline risks of associated dermatitis, local infection at the catheter insertion site and catheter-related bloodstream infection (CRBSI), estimated from published secondary sources. The risks of these events for patients with a Tegaderm CHG dressing were estimated by applying the effectiveness parameters from the clinical review to the baseline risks. Costs accrued through the cost of the intervention (i.e. Tegaderm CHG or standard i.v. dressing) and hospital treatment costs, which depended on whether patients developed local dermatitis, local infection or CRBSI. Total costs were estimated as mean values over 10,000 probabilistic sensitivity analysis (PSA) runs. Tegaderm CHG resulted in an average cost saving of £77 per patient in an intensive care unit, and has a 98.5% probability of being cost-saving compared with standard i.v. dressings. The analyses suggest that Tegaderm CHG is a cost-saving strategy for reducing CRBSI, and the results were robust to sensitivity analyses.
antiSMASH 3.0-a comprehensive resource for the genome mining of biosynthetic gene clusters.
Weber, Tilmann; Blin, Kai; Duddela, Srikanth; Krug, Daniel; Kim, Hyun Uk; Bruccoleri, Robert; Lee, Sang Yup; Fischbach, Michael A; Müller, Rolf; Wohlleben, Wolfgang; Breitling, Rainer; Takano, Eriko; Medema, Marnix H
2015-07-01
Microbial secondary metabolism constitutes a rich source of antibiotics, chemotherapeutics, insecticides and other high-value chemicals. Genome mining of gene clusters that encode the biosynthetic pathways for these metabolites has become a key methodology for novel compound discovery. In 2011, we introduced antiSMASH, a web server and stand-alone tool for the automatic genomic identification and analysis of biosynthetic gene clusters, available at http://antismash.secondarymetabolites.org. Here, we present version 3.0 of antiSMASH, which has undergone major improvements. A full integration of the recently published ClusterFinder algorithm now allows using this probabilistic algorithm to detect putative gene clusters of unknown types. Also, a new dereplication variant of the ClusterBlast module now identifies similarities of identified clusters to any of 1172 clusters with known end products. At the enzyme level, active sites of key biosynthetic enzymes are now pinpointed through a curated pattern-matching procedure and Enzyme Commission numbers are assigned to functionally classify all enzyme-coding genes. Additionally, chemical structure prediction has been improved by incorporating polyketide reduction states. Finally, in order for users to be able to organize and analyze multiple antiSMASH outputs in a private setting, a new XML output module allows offline editing of antiSMASH annotations within the Geneious software. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Proceedings, Seminar on Probabilistic Methods in Geotechnical Engineering
NASA Astrophysics Data System (ADS)
Hynes-Griffin, M. E.; Buege, L. L.
1983-09-01
Contents: Applications of Probabilistic Methods in Geotechnical Engineering; Probabilistic Seismic and Geotechnical Evaluation at a Dam Site; Probabilistic Slope Stability Methodology; Probability of Liquefaction in a 3-D Soil Deposit; Probabilistic Design of Flood Levees; Probabilistic and Statistical Methods for Determining Rock Mass Deformability Beneath Foundations: An Overview; Simple Statistical Methodology for Evaluating Rock Mechanics Exploration Data; New Developments in Statistical Techniques for Analyzing Rock Slope Stability.
van Riper, Carena J.; Sharp, Ryan; Bagstad, Kenneth J.; Vagias, Wade M.; Kwenye, Jane; Depper, Gina; Freimund, Wayne
2016-01-01
Successfully promoting and encouraging the adoption of environmental stewardship behavior is an important responsibility for public land management agencies. Although people increasingly report high levels of concern about environmental issues, widespread patterns of stewardship behavior have not followed suit (Moore 2002). One concept that can be applied in social science research to explain behavior change is that of values. More specifically, held and assigned values lie at the heart of understanding why people around the world continue to live in unsustainable ways that impact parks and protected areas. A held value is an individual psychological orientation defined by Rokeach as “an enduring belief that a specific mode of conduct or endstate of existence is personally and socially preferable” (1973, 550). Held values are at the core of human cognition, and as such, influence attitudes and behavior. Assigned values on the other hand, according to Brown (1984), are the perceived qualities of an environment that are based on and deduced from held values. In other words, assigned values are considered the material and nonmaterial benefits that people believe they obtain from ecosystems. Held and assigned values predict stewardship behaviors (Figure 1). During the 2013 George Wright Society Conference on Parks, Protected Areas, and Cultural Sites, we organized a session to improve our understanding of why individuals and groups choose to engage in stewardship behaviors that benefit the environment. We used held and assigned values as vehicles to explore what people cared about in diverse landscapes, review select case studies from across the globe, and question how best to incorporate visitor perspectives into protected area management decisions and policymaking. 
In addition to sharing project results, we also discussed the importance of accounting for multiple and often competing value perspectives, potential ways to integrate disciplinary perspectives on valuing nature, and future directions for social science research and practice. In this paper, we present the results from our session to provide fodder for further contemplation about the timely question of how park and protected area managers can foster values that lead to environmental protection.
Real-time value-driven diagnosis
NASA Technical Reports Server (NTRS)
Dambrosio, Bruce
1995-01-01
Diagnosis is often thought of as an isolated task in theoretical reasoning (reasoning with the goal of updating our beliefs about the world). We present a decision-theoretic interpretation of diagnosis as a task in practical reasoning (reasoning with the goal of acting in the world), and sketch components of our approach to this task. These components include an abstract problem description, a decision-theoretic model of the basic task, a set of inference methods suitable for evaluating the decision representation in real-time, and a control architecture to provide the needed continuing coordination between the agent and its environment. A principal contribution of this work is the representation and inference methods we have developed, which extend previously available probabilistic inference methods and narrow, somewhat, the gap between probabilistic and logical models of diagnosis.
Reliability analysis of composite structures
NASA Technical Reports Server (NTRS)
Kan, Han-Pin
1992-01-01
A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed-form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, the scatter in their values is evaluated and statistically characterized. The scatter in applied loads and structural parameters is then fitted to appropriate probability distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, and fabrication and assembly processes. The influence of structural geometry and failure mode is also considered in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
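The numerical-integration step can be sketched as computing R = P(strength > load) from the two fitted distributions; the Gaussian scatter and MPa figures below are illustrative assumptions, as the paper fits distributions to measured data:

```python
from statistics import NormalDist

# Reliability R = P(strength > load) = integral of f_load(x) * S_strength(x) dx,
# evaluated here by midpoint-rule numerical integration.
def reliability(load, strength, lo, hi, n=20_000):
    dx = (hi - lo) / n
    r = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        r += load.pdf(x) * (1.0 - strength.cdf(x)) * dx
    return r

load = NormalDist(100.0, 10.0)      # applied stress, MPa
strength = NormalDist(150.0, 15.0)  # laminate strength, MPa
print(round(reliability(load, strength, 40.0, 220.0), 4))
```

For two Gaussians the closed form Phi((mu_s - mu_l) / sqrt(sigma_s^2 + sigma_l^2)) provides a check; the numerical route generalizes to the non-Gaussian distributions obtained from fitted scatter.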
Receipt of Preventive Services After Oregon's Randomized Medicaid Experiment.
Marino, Miguel; Bailey, Steffani R; Gold, Rachel; Hoopes, Megan J; O'Malley, Jean P; Huguet, Nathalie; Heintzman, John; Gallia, Charles; McConnell, K John; DeVoe, Jennifer E
2016-02-01
It is predicted that gaining health insurance via the Affordable Care Act will result in increased rates of preventive health services receipt in the U.S., primarily based on self-reported findings from previous health insurance expansion studies. This study examined the long-term (36-month) impact of Oregon's 2008 randomized Medicaid expansion ("Oregon Experiment") on receipt of 12 preventive care services in community health centers using electronic health record data. Demographic data from adult (aged 19-64 years) Oregon Experiment participants were probabilistically matched to electronic health record data from 49 Oregon community health centers within the OCHIN community health information network (N=10,643). Intent-to-treat analyses compared receipt of preventive services over a 36-month (2008-2011) period among those randomly assigned to apply for Medicaid versus not assigned, and instrumental variable analyses estimated the effect of actually gaining Medicaid coverage on preventive services receipt (data collected in 2012-2014; analysis performed in 2014-2015). Intent-to-treat analyses revealed statistically significant differences between patients randomly assigned to apply for Medicaid (versus not assigned) for 8 of 12 assessed preventive services. In instrumental variable analyses, Medicaid coverage significantly increased the odds of receipt of most preventive services (ORs ranging from 1.04 [95% CI=1.02, 1.06] for smoking assessment to 1.27 [95% CI=1.02, 1.57] for mammography). Rates of preventive services receipt will likely increase as community health center patients gain insurance through Affordable Care Act expansions. Continued effort is needed to increase health insurance coverage in an effort to decrease health disparities in vulnerable populations. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Liu, S; Kalet, A
Purpose: The purpose of this work was to investigate the ability of a machine-learning-based probabilistic approach to detect radiotherapy treatment plan anomalies given initial disease class information. Methods: In total we obtained 1112 unique treatment plans with five plan parameters and disease information from a Mosaiq treatment management system database for use in the study. The plan parameters include prescription dose, fractions, fields, modality and technique. The disease information includes disease site, and T, M and N disease stages. A Bayesian network method was employed to model the probabilistic relationships between tumor disease information, plan parameters and an anomaly flag. A Bayesian learning method with a Dirichlet prior was used to learn the joint probabilities between dependent variables in error-free plan data and in data with artificially induced anomalies. In the study, we randomly sampled data with anomalies in a specified anomaly space. We tested the approach with three groups of plan anomalies: improper concurrence of the values of all five plan parameters, improper concurrence of the values of any two out of the five parameters, and all single-parameter value anomalies. In total, 16 types of plan anomalies were covered by the study. For each type, we trained an individual Bayesian network. Results: We found that the true positive rate (recall) and positive predictive value (precision) for detecting concurrence anomalies of the five plan parameters in new patient cases were 94.45±0.26% and 93.76±0.39%, respectively. For the other 15 types of plan anomalies, the average recall and precision were 93.61±2.57% and 93.78±3.54%, respectively. The computation time to detect a plan anomaly of each type in a new plan is ∼0.08 seconds. Conclusion: The proposed method for treatment plan anomaly detection was found effective in the initial tests.
The results suggest that this type of model could be applied to develop plan anomaly detection tools to assist manual and automated plan checks. The senior author received research grants from ViewRay Inc. and Varian Medical Systems.
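The core of the approach above, learning conditional probabilities from error-free plans with a Dirichlet prior and flagging low-probability parameter combinations, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the disease sites, modalities, counts, and threshold are invented for the example.

```python
from collections import Counter, defaultdict

def train_conditional(plans, alpha=1.0):
    """Learn P(modality | disease site) from error-free plans with a
    Dirichlet (add-alpha) prior, as a stand-in for one edge of the
    Bayesian network.

    plans: list of (site, modality) pairs."""
    counts = defaultdict(Counter)
    modalities = set()
    for site, modality in plans:
        counts[site][modality] += 1
        modalities.add(modality)

    def prob(site, modality):
        n = sum(counts[site].values())
        k = len(modalities)
        return (counts[site][modality] + alpha) / (n + alpha * k)

    return prob

def is_anomaly(prob, site, modality, threshold=0.05):
    """Flag a plan whose parameter value is improbable for its disease."""
    return prob(site, modality) < threshold
```

A full implementation would model the joint distribution over all five plan parameters and the disease stages, but the smoothing and thresholding logic is the same.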
Newgard, Craig; Malveau, Susan; Staudenmayer, Kristan; Wang, N. Ewen; Hsia, Renee Y.; Mann, N. Clay; Holmes, James F.; Kuppermann, Nathan; Haukoos, Jason S.; Bulger, Eileen M.; Dai, Mengtao; Cook, Lawrence J.
2012-01-01
Objectives The objective was to evaluate the process of using existing data sources, probabilistic linkage, and multiple imputation to create large population-based injury databases matched to outcomes. Methods This was a retrospective cohort study of injured children and adults transported by 94 emergency medical services (EMS) agencies to 122 hospitals in seven regions of the western United States over a 36-month period (2006 to 2008). All injured patients evaluated by EMS personnel within specific geographic catchment areas were included, regardless of field disposition or outcome. The authors performed probabilistic linkage of EMS records to four hospital and postdischarge data sources (emergency department [ED] data, patient discharge data, trauma registries, and vital statistics files) and then handled missing values using multiple imputation. The authors compared and evaluated matched records, match rates (the proportion of matches among eligible patients), and injury outcomes within and across sites. Results There were 381,719 injured patients evaluated by EMS personnel in the seven regions. Among transported patients, match rates ranged from 14.9% to 87.5% and were directly affected by the availability of hospital data sources and the proportion of missing values for key linkage variables. For vital statistics records (1-year mortality), estimated match rates ranged from 88.0% to 98.7%. Use of multiple imputation (compared to complete case analysis) reduced bias for injury outcomes, although sample size, percentage missing, type of variable, and combined-site versus single-site imputation models all affected the resulting estimates and variance. Conclusions This project demonstrates the feasibility and describes the process of constructing population-based injury databases across multiple phases of care using existing data sources and commonly available analytic methods.
Attention to key linkage variables and decisions for handling missing values can be used to increase match rates between data sources, minimize bias, and preserve sampling design. PMID:22506952
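Probabilistic linkage of the kind described above typically scores candidate record pairs with Fellegi-Sunter log-likelihood-ratio weights over the key linkage variables. A minimal sketch, with invented field names and m- and u-probabilities (not values from the study):

```python
import math

# Illustrative m- and u-probabilities per linkage field (assumed values):
# m = P(field agrees | records truly match), u = P(field agrees | non-match).
FIELDS = {
    "date_of_birth": (0.95, 0.01),
    "sex":           (0.98, 0.50),
    "zip_code":      (0.90, 0.05),
}

def match_weight(agreements):
    """Fellegi-Sunter score: sum of log2 likelihood ratios over fields.

    agreements: dict field -> bool (did the candidate pair agree?).
    Pairs scoring above an upper threshold are accepted as matches,
    below a lower threshold rejected, and in between sent to review."""
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if agreements[field]:
            w += math.log2(m / u)
        else:
            w += math.log2((1 - m) / (1 - u))
    return w
```

Agreement on a discriminating field like date of birth adds far more weight than agreement on sex, which half of all non-matching pairs share by chance.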
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasahara, M; Arimura, H; Hirose, T
Purpose: The current image-guided radiotherapy (IGRT) procedure is bone-based patient positioning, followed by subjective manual correction using cone beam computed tomography (CBCT). This procedure might cause misalignment in patient positioning. Automatic target-based patient positioning systems achieve better reproducibility of patient setup. The aim of this study was to develop an automatic target-based patient positioning framework for IGRT with CBCT images in prostate cancer treatment. Methods: Seventy-three CBCT images of 10 patients and 24 planning CT images with digital imaging and communications in medicine for radiotherapy (DICOM-RT) structures were used for this study. Our proposed framework started from the generation of probabilistic atlases of bone and prostate from the 24 planning CT images and prostate contours, which were made in the treatment planning. Next, the gray-scale histograms of CBCT values within CTV regions in the planning CT images were obtained as the occurrence probability of the CBCT values. Then, CBCT images were registered to the atlases using a rigid registration with mutual information. Finally, prostate regions were estimated by applying Bayesian inference to CBCT images with the probabilistic atlases and CBCT value occurrence probability. The proposed framework was evaluated by calculating the Euclidean distance of errors between two centroids of prostate regions determined by our method and ground truths of manual delineations by a radiation oncologist and a medical physicist on CBCT images for 10 patients. Results: The average Euclidean distance between the centroids of extracted prostate regions determined by our proposed method and ground truths was 4.4 mm. The average errors for each direction were 1.8 mm in the anteroposterior direction, 0.6 mm in the lateral direction and 2.1 mm in the craniocaudal direction.
Conclusion: Our proposed framework based on probabilistic atlases and Bayesian inference might be feasible to automatically determine prostate regions on CBCT images.
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
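The extreme-value construction described above can be illustrated with the Gumbel (Type I extreme-value) distribution, a common model for the largest load occurring in a mission; the design limit load is then a chosen quantile of that distribution. The location and scale parameters below are purely illustrative:

```python
import math

def gumbel_quantile(p, mu, beta):
    """Inverse CDF of the Gumbel (maximum) distribution.

    F(x) = exp(-exp(-(x - mu)/beta)), so x_p = mu - beta*ln(-ln p)."""
    return mu - beta * math.log(-math.log(p))

# Design limit load taken as, e.g., the 99th-percentile mission-maximum
# load (location mu and scale beta here are invented for illustration):
design_limit_load = gumbel_quantile(0.99, mu=100.0, beta=8.0)
```

In practice mu and beta would be fitted to mission-maximum loads extracted from the time-domain or frequency-domain dynamic load simulations.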
Communicating weather forecast uncertainty: Do individual differences matter?
Grounds, Margaret A; Joslyn, Susan L
2018-03-01
Research suggests that people make better weather-related decisions when they are given numeric probabilities for critical outcomes (Joslyn & Leclerc, 2012, 2013). However, it is unclear whether all users can take advantage of probabilistic forecasts to the same extent. The research reported here assessed key cognitive and demographic factors to determine their relationship to the use of probabilistic forecasts to improve decision quality. In two studies, participants decided between spending resources to prevent icy conditions on roadways or risk a larger penalty when freezing temperatures occurred. Several forecast formats were tested, including a control condition with the night-time low temperature alone and experimental conditions that also included the probability of freezing and advice based on expected value. All but those with extremely low numeracy scores made better decisions with probabilistic forecasts. Importantly, no groups made worse decisions when probabilities were included. Moreover, numeracy was the best predictor of decision quality, regardless of forecast format, suggesting that the advantage may extend beyond understanding the forecast to general decision strategy issues. This research adds to a growing body of evidence that numerical uncertainty estimates may be an effective way to communicate weather danger to general public end users. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
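The expected-value advice condition in studies of this kind amounts to a one-line decision rule: pay for treatment whenever the probability of freezing times the penalty exceeds the treatment cost. A sketch with invented costs (not the experiment's actual payoffs):

```python
def should_treat(p_freeze, treatment_cost, freeze_penalty):
    """Expected-value rule: spend resources on road treatment whenever
    the expected penalty of doing nothing exceeds the treatment cost."""
    return p_freeze * freeze_penalty > treatment_cost

# With an (illustrative) $1,000 treatment cost and a $6,000 penalty for
# untreated ice, treating pays off whenever the probability of freezing
# exceeds 1000/6000, i.e. about 0.17.
```

Participants given the probability of freezing can apply this rule directly, whereas a night-time low temperature alone forces them to guess at the freeze probability.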
A spatio-temporal model for probabilistic seismic hazard zonation of Tehran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2013-08-01
A precondition for all disaster management steps, building damage prediction, and construction code developments is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site, considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporates small and medium events, the latter takes into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
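Under the usual Poisson assumption, aggregating independent time-independent and time-dependent sources reduces to summing their annual exceedance rates for a given ground-motion level before converting to an exceedance probability over the design life. A minimal sketch with invented rates:

```python
import math

def exceedance_probability(annual_rates, years):
    """Probability that a ground-motion level is exceeded at least once
    in `years`, combining independent Poissonian sources by summing
    their annual exceedance rates: P = 1 - exp(-sum(rates) * t)."""
    total_rate = sum(annual_rates)
    return 1.0 - math.exp(-total_rate * years)

# Example: an area source and a fault source contributing (illustrative)
# annual exceedance rates of 0.001 and 0.0011 for some PGA level:
p50 = exceedance_probability([0.001, 0.0011], years=50)
```

With these invented rates the combined 50-year exceedance probability is close to 10%, i.e. the PGA level that would appear on the first of the three hazard maps described above.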
A probabilistic fatigue analysis of multiple site damage
NASA Technical Reports Server (NTRS)
Rohrbaugh, S. M.; Ruff, D.; Hillberry, B. M.; Mccabe, G.; Grandt, A. F., Jr.
1994-01-01
The variability in initial crack size and fatigue crack growth is incorporated in a probabilistic model that is used to predict the fatigue lives for unstiffened aluminum alloy panels containing multiple site damage (MSD). The uncertainty of the damage in the MSD panel is represented by a distribution of fatigue crack lengths that are analytically derived from equivalent initial flaw sizes. The variability in fatigue crack growth rate is characterized by stochastic descriptions of crack growth parameters for a modified Paris crack growth law. A Monte-Carlo simulation explicitly describes the MSD panel by randomly selecting values from the stochastic variables and then grows the MSD cracks with a deterministic fatigue model until the panel fails. Different simulations investigate the influences of the fatigue variability on the distributions of remaining fatigue lives. Six cases that consider fixed and variable conditions of initial crack size and fatigue crack growth rate are examined. The crack size distribution exhibited a dominant effect on the remaining fatigue life distribution, and the variable crack growth rate exhibited a lesser effect on the distribution. In addition, the probabilistic model predicted that only a small percentage of the life remains after a lead crack develops in the MSD panel.
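The Monte Carlo scheme described above can be sketched with a closed-form Paris-law life and lognormal variability on the initial flaw size and crack growth coefficient. All parameter values below are illustrative, not those of the study, and the simple closed form stands in for the deterministic MSD growth model:

```python
import math
import random

def paris_life(a_i, a_f, C, m, dsigma):
    """Cycles to grow a crack from a_i to a_f under the Paris law
    da/dN = C * (dK)^m with dK = dsigma * sqrt(pi * a).
    Closed-form integral, valid for m != 2."""
    k = C * (dsigma * math.sqrt(math.pi)) ** m
    e = 1.0 - m / 2.0
    return (a_f ** e - a_i ** e) / (k * e)

def simulate_lives(n, seed=0):
    """Monte Carlo over a lognormal initial flaw size and a lognormal
    Paris coefficient C (illustrative values, SI units)."""
    rng = random.Random(seed)
    lives = []
    for _ in range(n):
        a_i = rng.lognormvariate(math.log(0.0005), 0.3)   # metres
        C = rng.lognormvariate(math.log(1e-11), 0.2)
        lives.append(paris_life(a_i, a_f=0.01, C=C, m=3.0, dsigma=100.0))
    return sorted(lives)
```

The sorted sample of lives is an empirical distribution of remaining fatigue life; repeating the simulation with one variable held fixed shows its individual influence, as the six cases in the study do.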
Frost, Anja; Renners, Eike; Hötter, Michael; Ostermann, Jörn
2013-01-01
An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. The reconstructed objects then no longer truly represent the original, and inside the volumes the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that the object receiving the X-ray examination consists of. For each voxel, the proposed method outputs a numerical value that represents the probability that a predefined material was present at the position of the voxel at the time of the X-ray examination. Such a probabilistic quality measure has been lacking so far. In our experiment, falsely reconstructed areas are detected by their low probability, while a high probability predominates in accurately reconstructed areas. Receiver operating characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction. PMID:23344378
Predictability of short-range forecasting: a multimodel approach
NASA Astrophysics Data System (ADS)
García-Moya, Jose-Antonio; Callado, Alfons; Escribà, Pau; Santos, Carlos; Santos-Muñoz, Daniel; Simarro, Juan
2011-05-01
Numerical weather prediction (NWP) models (including mesoscale) have limitations when it comes to dealing with severe weather events because extreme weather is highly unpredictable, even in the short range. A probabilistic forecast based on an ensemble of slightly different model runs may help to address this issue. Among other ensemble techniques, Multimodel ensemble prediction systems (EPSs) are proving to be useful for adding probabilistic value to mesoscale deterministic models. A Multimodel Short Range Ensemble Prediction System (SREPS) focused on forecasting the weather up to 72 h has been developed at the Spanish Meteorological Service (AEMET). The system uses five different limited area models (LAMs), namely HIRLAM (HIRLAM Consortium), HRM (DWD), the UM (UKMO), MM5 (PSU/NCAR) and COSMO (COSMO Consortium). These models run with initial and boundary conditions provided by five different global deterministic models, namely IFS (ECMWF), UM (UKMO), GME (DWD), GFS (NCEP) and CMC (MSC). AEMET-SREPS (AE) validation on the large-scale flow, using ECMWF analysis, shows a consistent and slightly underdispersive system. For surface parameters, the system shows high skill forecasting binary events. 24-h precipitation probabilistic forecasts are verified using an up-scaling grid of observations from European high-resolution precipitation networks, and compared with ECMWF-EPS (EC).
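Verification of probabilistic forecasts of binary events such as 24-h precipitation occurrence commonly uses the Brier score, the mean squared difference between forecast probabilities and observed outcomes; the sketch below shows the standard definition (the sample values are invented):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and binary
    outcomes (0 = event did not occur, 1 = it did); lower is better.
    A perfect deterministic forecast scores 0; always forecasting 0.5
    scores 0.25."""
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
```

For an ensemble such as AEMET-SREPS, the forecast probability for each grid box is typically the fraction of members predicting the event, and scores are compared against a reference system such as ECMWF-EPS.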
Becker, Ursula; Briggs, Andrew H; Moreno, Santiago G; Ray, Joshua A; Ngo, Phuong; Samanta, Kunal
2016-06-01
To evaluate the cost-effectiveness of treatment with anti-CD20 monoclonal antibody obinutuzumab plus chlorambucil (GClb) in untreated patients with chronic lymphocytic leukemia unsuitable for full-dose fludarabine-based therapy. A Markov model was used to assess the cost-effectiveness of GClb versus other chemoimmunotherapy options. The model comprised three mutually exclusive health states: "progression-free survival (with/without therapy)", "progression (refractory/relapsed lines)", and "death". Each state was assigned a health utility value representing patients' quality of life and a specific cost value. Comparisons between GClb and rituximab plus chlorambucil or only chlorambucil were performed using patient-level clinical trial data; other comparisons were performed via a network meta-analysis using information gathered in a systematic literature review. To support the model, a utility elicitation study was conducted from the perspective of the UK National Health Service. There was good agreement between the model-predicted progression-free and overall survival and that from the CLL11 trial. On incorporating data from the indirect treatment comparisons, it was found that GClb was cost-effective with a range of incremental cost-effectiveness ratios below a threshold of £30,000 per quality-adjusted life-year gained, and remained so during deterministic and probabilistic sensitivity analyses under various scenarios. GClb was estimated to increase both quality-adjusted life expectancy and treatment costs compared with several commonly used therapies, with incremental cost-effectiveness ratios below commonly referenced UK thresholds. This article offers a real example of how to combine direct and indirect evidence in a cost-effectiveness analysis of oncology drugs. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
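A three-state Markov cohort model of the kind described can be sketched in a few lines: the cohort starts progression-free, moves through the states each cycle, and accumulates quality-adjusted life-years and costs. The transition matrices, utilities, and costs in the test below are invented for illustration and are not the CLL11 model's inputs:

```python
def run_markov(trans, utilities, costs, cycles):
    """Cohort simulation over states [progression-free, progressed, dead].

    trans: 3x3 row-stochastic per-cycle transition matrix.
    Returns (total QALYs, total cost) per patient; discounting and
    half-cycle correction are omitted for brevity."""
    state = [1.0, 0.0, 0.0]                     # everyone starts in PFS
    qalys = cost = 0.0
    for _ in range(cycles):
        qalys += sum(s * u for s, u in zip(state, utilities))
        cost += sum(s * c for s, c in zip(state, costs))
        state = [sum(state[i] * trans[i][j] for i in range(3))
                 for j in range(3)]
    return qalys, cost

def icer(q_new, c_new, q_old, c_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (c_new - c_old) / (q_new - q_old)
```

Probabilistic sensitivity analysis then reruns this model many times with transition probabilities, utilities, and costs drawn from their uncertainty distributions.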
Probabilistic Radiological Performance Assessment Modeling and Uncertainty
NASA Astrophysics Data System (ADS)
Tauxe, J.
2004-12-01
A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of the potential future risk to human receptors from the disposal of low-level radioactive waste (LLW). Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and propagating these through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model that supports their decision making. While this information may make some regulators uncomfortable, they must realize that uncertainties which were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious: it does not represent any particular site and is meant to be a generic example.
A practitioner could, however, start with this model as a GoldSim template and, by adding site specific features and parameter values (distributions), use this model as a starting point for a real model to be used in real decision making.
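The Monte Carlo propagation described above can be illustrated with a toy dose model in which each uncertain factor is given a lognormal distribution and the output distribution is summarized by its mean and 95th percentile. The model structure and all parameter values are fictitious, in the same spirit as the generic PA example:

```python
import math
import random
import statistics

def simulate_dose(n, seed=1):
    """Toy dose model: dose = source * transfer / dilution, with
    lognormal uncertainty on each factor (all values fictitious).
    Returns (mean dose, 95th-percentile dose) over n realizations."""
    rng = random.Random(seed)
    doses = []
    for _ in range(n):
        source = rng.lognormvariate(math.log(10.0), 0.5)    # released activity
        transfer = rng.lognormvariate(math.log(0.01), 0.8)  # env. transfer factor
        dilution = rng.lognormvariate(math.log(5.0), 0.4)
        doses.append(source * transfer / dilution)
    doses.sort()
    return statistics.fmean(doses), doses[int(0.95 * n)]
```

Comparing the 95th percentile, rather than a single deterministic estimate, against a performance objective is what makes the chance of a correct decision explicit.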
Value encoding in single neurons in the human amygdala during decision making.
Jenison, Rick L; Rangel, Antonio; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A
2011-01-05
A growing consensus suggests that the brain makes simple choices by assigning values to the stimuli under consideration and then comparing these values to make a decision. However, the network involved in computing the values has not yet been fully characterized. Here, we investigated whether the human amygdala plays a role in the computation of stimulus values at the time of decision making. We recorded single neuron activity from the amygdala of awake patients while they made simple purchase decisions over food items. We found 16 amygdala neurons, located primarily in the basolateral nucleus, that responded linearly to the values assigned to individual items.
Misfortune may be a blessing in disguise: Fairness perception and emotion modulate decision making.
Liu, Hong-Hsiang; Hwang, Yin-Dir; Hsieh, Ming H; Hsu, Yung-Fong; Lai, Wen-Sung
2017-08-01
Fairness perception and equality during social interactions frequently elicit affective arousal and affect decision making. By integrating the dictator game and a probabilistic gambling task, this study aimed to investigate the effects of a negative experience induced by perceived unfairness on decision making using behavioral, model fitting, and electrophysiological approaches. Participants were randomly assigned to the neutral, harsh, or kind groups, which consisted of various asset allocation scenarios to induce different levels of perceived unfairness. The monetary gain was subsequently considered the initial asset in a negatively rewarded, probabilistic gambling task in which the participants were instructed to maintain as much asset as possible. Our behavioral results indicated that the participants in the harsh group exhibited increased levels of negative emotions but retained greater total game scores than the participants in the other two groups. Parameter estimation of a reinforcement learning model using a Bayesian approach indicated that these participants were more loss aversive and consistent in decision making. Data from simultaneous ERP recordings further demonstrated that these participants exhibited larger feedback-related negativity to unexpected outcomes in the gambling task, which suggests enhanced reward sensitivity and signaling of reward prediction error. Collectively, our study suggests that a negative experience may be an advantage in the modulation of reward-based decision making. © 2017 Society for Psychophysiological Research.
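A loss-averse reinforcement learning model of the kind fitted in such studies can be sketched with a Rescorla-Wagner update in which losses are amplified by a loss-aversion factor before computing the prediction error. The learning rate and loss-aversion values below are illustrative, not the study's estimates:

```python
def rw_update(value, outcome, alpha=0.2, loss_aversion=2.0):
    """One Rescorla-Wagner step; losses are scaled by a loss-aversion
    factor before the prediction error is computed."""
    utility = outcome if outcome >= 0 else loss_aversion * outcome
    return value + alpha * (utility - value)

def learn(outcomes, alpha=0.2, loss_aversion=2.0):
    """Run the update over a sequence of outcomes, starting from 0."""
    v = 0.0
    for o in outcomes:
        v = rw_update(v, o, alpha, loss_aversion)
    return v
```

In a Bayesian fit, alpha and loss_aversion become free parameters per participant; a larger fitted loss_aversion corresponds to the greater loss aversion reported for the harsh group.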
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Casas, Rafael; Linguraru, Marius G.
2016-03-01
Pleural effusion is an abnormal collection of fluid within the pleural cavity. Excessive accumulation of pleural fluid is an important bio-marker for various illnesses, including congestive heart failure, pneumonia, metastatic cancer, and pulmonary embolism. Quantification of pleural effusion can be indicative of the progression of disease as well as the effectiveness of any treatment being administered. Quantification, however, is challenging due to unpredictable amounts and density of fluid, the complex topology of the pleural cavity, and the similarity in texture and intensity of pleural fluid to the surrounding tissues in computed tomography (CT) scans. Herein, we present an automated method for the segmentation of pleural effusion in CT scans based on spatial context information. The method consists of two stages: first, a probabilistic pleural effusion map is created using multi-atlas segmentation. The probabilistic map assigns a priori probabilities to the presence of pleural fluid at every location in the CT scan. Second, a statistical pattern classification approach is designed to annotate pleural regions using local descriptors based on a priori probabilities, geometrical, and spatial features. Thirty-seven CT scans from a diverse patient population containing confirmed cases of minimal to severe amounts of pleural effusion were used to validate the proposed segmentation method. An average Dice coefficient of 0.82685 and a Hausdorff distance of 16.2155 mm were obtained.
NASA Astrophysics Data System (ADS)
Wang, Zhenming; Shi, Baoping; Kiefer, John D.; Woolery, Edward W.
2004-06-01
Musson's comments on our article, "Communicating with uncertainty: A critical issue with probabilistic seismic hazard analysis," are an example of myths and misunderstandings. We did not say that probabilistic seismic hazard analysis (PSHA) is a bad method, but we did say that it has some limitations that have significant implications. Our response to these comments follows. There is no consensus on exactly how to select seismological parameters and to assign weights in PSHA. This was one of the conclusions reached by a senior seismic hazard analysis committee [SSHAC, 1997] that included C. A. Cornell, founder of the PSHA methodology. The SSHAC report was reviewed by a panel of the National Research Council and was well accepted by seismologists and engineers. As an example of the lack of consensus, Toro and Silva [2001] produced seismic hazard maps for the central United States region that are quite different from those produced by Frankel et al. [2002] because they used different input seismological parameters and weights (see Table 1). We disagree with Musson's conclusion that "because a method may be applied badly on one occasion does not mean the method itself is bad." We do not say that the method is poor, but rather that those who use PSHA need to document their inputs and communicate them fully to the users. It seems that Musson is trying to create myth by suggesting his own methods should be used.
Yang, Hua; Gao, Wen; Liu, Lei; Liu, Ke; Liu, E-Hu; Qi, Lian-Wen; Li, Ping
2015-11-10
Most Aconitum species, also known as aconite, are extremely poisonous, so they must be identified carefully. Differentiation of Aconitum species is challenging because of their similar appearance and chemical components. In this study, a universal strategy to discover chemical markers was developed for effective authentication of three commonly used aconite roots. The major procedures include: (1) chemical profiling and structural assignment of herbs by liquid chromatography with mass spectrometry (LC-MS), (2) quantification of major components by LC-MS, (3) a probabilistic neural network (PNN) model to calculate contributions of components toward species classification, (4) discovery of a minimized number of chemical markers for quality control. The MS fragmentation pathways of diester-, monoester-, and alkyloyamine-diterpenoid alkaloids were compared. Using these rules, 42 aconite alkaloids were identified in aconite roots. Subsequently, 11 characteristic compounds were quantified. A component-species model was then established by PNN, combining the 11 analytes and 26 batches of samples from three aconite species. The contribution of each analyte to species classification was calculated. Selection of fuziline, benzoylhypaconine, and talatizamine, or a combination of more compounds based on the contribution order, can be used for successful categorization of the three aconite species. Collectively, the proposed strategy is beneficial for the selection of rational chemical markers for species classification and quality control of herbal medicines. Copyright © 2015 Elsevier B.V. All rights reserved.
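A probabilistic neural network of the kind used for the component-species model is essentially a Parzen-window classifier: each class's score is an average of Gaussian kernels centred on that class's training samples, normalized into posterior probabilities. A minimal sketch (class labels and feature values invented):

```python
import math

def pnn_posteriors(x, train, sigma=1.0):
    """Probabilistic neural network classifier.

    train: dict class_label -> list of feature vectors (e.g., analyte
    concentrations per sample). Returns posterior probabilities
    assuming equal class priors; sigma is the kernel width."""
    scores = {}
    for label, samples in train.items():
        total = 0.0
        for s in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, s))
            total += math.exp(-d2 / (2.0 * sigma ** 2))
        scores[label] = total / len(samples)
    z = sum(scores.values())
    return {label: v / z for label, v in scores.items()}
```

Feature contributions can then be ranked by how much removing each analyte degrades classification accuracy, which is the spirit of the contribution ordering described above.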
Huang, Guangzao; Yuan, Mingshun; Chen, Moliang; Li, Lei; You, Wenjie; Li, Hanjie; Cai, James J; Ji, Guoli
2017-10-07
The application of machine learning in cancer diagnostics has shown great promise and is of importance in clinical settings. Here we consider applying machine learning methods to transcriptomic data derived from tumor-educated platelets (TEPs) from individuals with different types of cancer. We aim to define a reliability measure for diagnostic purposes to increase the potential for facilitating personalized treatments. To this end, we present a novel classification method called MFRB (for Multiple Fitting Regression and Bayes decision), which integrates the process of multiple fitting regression (MFR) with Bayes decision theory. MFR is first used to map multidimensional features of the transcriptomic data into a one-dimensional feature. The probability density function of each class in the mapped space is then adjusted using the Gaussian probability density function. Finally, Bayes decision theory is used to build a probabilistic classifier with the estimated probability density functions. The output of MFRB can be used to determine which class a sample belongs to, as well as to assign a reliability measure for a given class. The classical support vector machine (SVM) and probabilistic SVM (PSVM) are used to evaluate the performance of the proposed method with simulated and real TEP datasets. Our results indicate that the proposed MFRB method achieves the best performance compared to SVM and PSVM, mainly due to its strong generalization ability for limited, imbalanced, and noisy data.
Byass, Peter; Huong, Dao Lan; Minh, Hoang Van
2003-01-01
Verbal autopsy (VA) has become an important tool in the past 20 years for determining cause of death in communities where there is no routine registration. In many cases, expert physicians have been used to interpret the VA findings and so assign individual causes of death. However, this is time-consuming and not always repeatable. Other approaches such as algorithms and neural networks have been developed in some settings. This paper aims to develop a method that is simple, reliable and consistent, which could represent an advance in VA interpretation. This paper describes the development of a Bayesian probability model for VA interpretation as an attempt to find a better approach. This methodology and a preliminary implementation are described, with an evaluation based on VA material from rural Vietnam. The new model was tested against a series of 189 VA interviews from a rural community in Vietnam. Using this very basic model, over 70% of individual causes of death corresponded with those determined by two physicians, increasing to over 80% if those cases ascribed to old age or as being indeterminate by the physicians were excluded. Although there is a clear need to improve the preliminary model and to test more extensively with larger and more varied datasets, these preliminary results suggest that there may be good potential in this probabilistic approach.
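A Bayesian probability model for VA interpretation can be sketched as a naive-Bayes posterior over causes of death, with priors and sign likelihoods supplied by experts or data. The causes, signs, and probabilities below are invented for illustration, and the conditional-independence assumption is the usual simplification:

```python
def va_posterior(priors, likelihoods, signs):
    """Bayes posterior over causes of death given reported signs.

    priors: dict cause -> prior probability.
    likelihoods: dict cause -> dict sign -> P(sign reported | cause).
    signs: signs reported in the interview, assumed conditionally
    independent given the cause (naive Bayes). Unlisted signs get a
    small floor probability to avoid zeroing out a cause."""
    post = {}
    for cause, prior in priors.items():
        p = prior
        for sign in signs:
            p *= likelihoods[cause].get(sign, 1e-6)
        post[cause] = p
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}
```

The cause with the highest posterior is the model's assignment, and the posterior itself conveys how confident that assignment is, something a single physician-assigned cause cannot.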
Against all odds -- Probabilistic forecasts and decision making
NASA Astrophysics Data System (ADS)
Liechti, Katharina; Zappa, Massimiliano
2015-04-01
In the city of Zurich (Switzerland), the damage potential due to flooding of the river Sihl is estimated at about 5 billion US dollars. The flood forecasting system used by the administration for decision making has run continuously since 2007. It has a time horizon of at most five days and operates at hourly time steps. The flood forecasting system includes three different model chains. Two of these are driven by the deterministic NWP models COSMO-2 and COSMO-7, and one is driven by the probabilistic NWP model COSMO-LEPS. The model chains have been consistent since February 2010, so five full years are available for the evaluation of the system. The system was evaluated continuously and is a good example of the added value that lies in probabilistic forecasts. The forecasts are available to the decision makers on an online platform. Several graphical representations of the forecasts and forecast history are available to support decision making and to rate the current situation. The communication between forecasters and decision makers is quite close. In short, an ideal situation. However, an event, or better put a non-event, in summer 2014 showed that knowledge about the general superiority of probabilistic forecasts does not necessarily mean that the decisions taken in a specific situation will be based on that probabilistic forecast. Some years of experience allow gaining confidence in the system, both for the forecasters and for the decision makers. Even if, from the theoretical point of view, the handling of crisis situations is well designed, a first event demonstrated that the dialog with the decision makers still lacks exercise during such situations. We argue that a false alarm is a needed experience to consolidate real-time emergency procedures relying on ensemble predictions. A missed event would probably also serve, but, in our case, we are very happy not to have to report on that option.
NASA Astrophysics Data System (ADS)
Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim
2016-12-01
A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used framework is to find change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g., the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers which assume parametric models, e.g., a Gaussian function, for the class conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e., its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with Support Vector Machine (SVM) and NB for detection of defoliation, caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations of two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that, due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions.
Its performance was comparable to SVM, with added advantages of being probabilistic and capable of handling multi-class problems naturally with its original formulation.
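At its core, LSPC approximates each class posterior by a linear combination of Gaussian kernels whose weights are obtained by regularized least squares, then clips and renormalizes the outputs. The sketch below illustrates that idea only; it is not the authors' implementation, and the data, kernel width `sigma`, and regularization strength `lam` are hypothetical:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lspc_fit(X, y, n_classes, sigma=1.0, lam=0.1):
    """One regularized least-squares weight vector per class, fit
    against the 0/1 class-indicator targets."""
    K = rbf_kernel(X, X, sigma)
    A = K.T @ K + lam * np.eye(len(X))
    return np.array([np.linalg.solve(A, K.T @ (y == c).astype(float))
                     for c in range(n_classes)])

def lspc_predict_proba(X_new, X_train, alphas, sigma=1.0):
    """Clip negative kernel outputs at zero and renormalize each row
    into a proper posterior over classes."""
    p = np.maximum(rbf_kernel(X_new, X_train, sigma) @ alphas.T, 0.0)
    return p / p.sum(axis=1, keepdims=True)

# Toy two-class "change / no-change" data: two well-separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
alphas = lspc_fit(X, y, n_classes=2)
P = lspc_predict_proba(np.array([[0.0, 0.0], [3.0, 3.0]]), X, alphas)
```

In the full framework, `sigma` and `lam` would be tuned by the 10-fold cross-validation the abstract describes rather than fixed by hand.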
Probabilistic simulation of stress concentration in composite laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, L.
1993-01-01
A computational methodology is described to probabilistically simulate stress concentration factors in composite laminates. This new approach couples probabilistic composite mechanics with probabilistic finite element structural analysis. Probabilistic composite mechanics is used to describe all the uncertainties inherent in composite material properties, while probabilistic finite element analysis describes the uncertainties associated with methods to experimentally evaluate stress concentration factors, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the stress concentration factors in composite laminates made from three different composite systems. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the stress concentration factors are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge-based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
An Empirical Approach to the St. Petersburg Paradox
ERIC Educational Resources Information Center
Klyve, Dominic; Lauren, Anna
2011-01-01
The St. Petersburg game is a probabilistic thought experiment. It describes a game which seems to have infinite expected value, but which no reasonable person could be expected to pay much to play. Previous empirical work has centered around trying to find the most likely payoff that would result from playing the game n times. In this paper, we…
A Probabilistic-Numerical Approximation for an Obstacle Problem Arising in Game Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruen, Christine, E-mail: christine.gruen@univ-brest.fr
We investigate a two-player zero-sum stochastic differential game in which one of the players has more information on the game than his opponent. We show how to construct numerical schemes for the value function of this game, which is given by the solution of a quasilinear partial differential equation with an obstacle.
Using Qualitative and Phenomenological Principles to Assess Stakeholders' Perceptions of Probability
ERIC Educational Resources Information Center
Newman, Isadore; Hitchcock, John H.; Nastasi, Bonnie K.
2017-01-01
Any attempt to influence behavior by sharing a research finding that makes a probabilistic statement (e.g., a p value) should necessarily entail consideration of how consumers of the information might interpret this information. Such consideration can be informed, at least in part, by applying phenomenological principles of inquiry. This does not…
Near-term probabilistic forecast of significant wildfire events for the Western United States
Haiganoush K. Preisler; Karin L. Riley; Crystal S. Stonesifer; Dave E. Calkin; Matt Jolly
2016-01-01
Fire danger and the potential for large fires in the United States (US) are currently indicated via several forecasted qualitative indices. However, landscape-level quantitative forecasts of the probability of a large fire are currently lacking. In this study, we present a framework for forecasting large fire occurrence - an extreme value event - and evaluating...
Probabilistic Reasoning and Prediction with Young Children
ERIC Educational Resources Information Center
Kinnear, Virginia; Clark, Julie
2014-01-01
This paper reports findings from a classroom-based study with 5-year-old children in their first term of school. A data modelling activity contextualised by a picture story book was used to present a prediction problem. A data table with numerical values for three consecutive days of rubbish collection was provided, with a fourth day…
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Astrophysics Data System (ADS)
Godines, Cody R.; Manteufel, Randall D.
2002-12-01
This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. 
For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
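The MC-versus-LHS comparison described above can be sketched in a few lines: build a stratified Latin Hypercube design, estimate a response mean repeatedly with each sampler, and compare the spread of the estimates. This is a generic illustration, not the NESSUS implementation; the response function, sample size, and repetition count are hypothetical:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One LHS design on [0,1]^d: each axis is cut into n equal strata,
    one point is drawn per stratum, and strata are shuffled per axis."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        rng.shuffle(u[:, j])        # shuffle each column independently
    return u

def spread_of_mean(sample_fn, response, n, d, reps, seed=0):
    """Standard deviation of the sample-mean estimator over repeated designs."""
    rng = np.random.default_rng(seed)
    return np.std([response(sample_fn(n, d, rng)).mean()
                   for _ in range(reps)])

# Hypothetical smooth response standing in for a model output (the SAE
# benchmark responses used in the report are not reproduced here).
resp = lambda u: (u ** 2).sum(axis=1)

mc_spread = spread_of_mean(lambda n, d, r: r.random((n, d)), resp, 50, 3, 200)
lhs_spread = spread_of_mean(latin_hypercube, resp, 50, 3, 200)
# Stratification gives LHS a visibly tighter estimate of the mean.
```

For smooth, near-additive responses like this one, the variance reduction from stratification is dramatic; the benefit shrinks as the response becomes more strongly interactive.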
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Technical Reports Server (NTRS)
Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)
2002-01-01
NASA Astrophysics Data System (ADS)
Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui
2018-02-01
The Himalayan-Tibetan region has a long history of devastating earthquakes with wide-spread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating incomplete historical earthquake records along with instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records dating back more than 1000 years and an updated, homogenized and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely, the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude M max, are estimated using the maximum likelihood algorithm assuming incompleteness of the catalog. To compute the hazard value, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years assuming bedrock conditions. The resulting PGA and SA maps show a significant spatio-temporal variation in the hazard values. In general, hazard values are found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
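The maximum-likelihood estimation of the Gutenberg-Richter b value is commonly done with the Aki/Utsu estimator. A minimal sketch, assuming a catalog complete above a magnitude of completeness m_c; the synthetic catalog below is illustrative only and is not the study's data:

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.0):
    """Aki/Utsu maximum-likelihood Gutenberg-Richter b value for events
    at or above the completeness magnitude m_c; dm/2 corrects for
    magnitude binning (use dm=0 for continuous magnitudes)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic catalog: exponentially distributed magnitudes above m_c = 4,
# generated with a true b value of 1.0 (scale 1/(b ln 10) = log10(e)/b).
rng = np.random.default_rng(0)
b_true = 1.0
mags = 4.0 + rng.exponential(np.log10(np.e) / b_true, 20_000)
b_hat = b_value_mle(mags, m_c=4.0)
```

With a large, complete synthetic catalog the estimator recovers the true b value closely; real catalogs require careful choice of m_c and the binning correction.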
Regier, Dean A; Friedman, Jan M; Marra, Carlo A
2010-05-14
Array genomic hybridization (AGH) provides a higher detection rate than does conventional cytogenetic testing when searching for chromosomal imbalance causing intellectual disability (ID). AGH is more costly than conventional cytogenetic testing, and it remains unclear whether AGH provides good value for money. Decision analytic modeling was used to evaluate the trade-off between costs, clinical effectiveness, and benefit of an AGH testing strategy compared to a conventional testing strategy. The trade-off between cost and effectiveness was expressed via the incremental cost-effectiveness ratio. Probabilistic sensitivity analysis was performed via Monte Carlo simulation. The baseline AGH testing strategy led to an average cost increase of $217 (95% CI $172-$261) per patient and an additional 8.2 diagnoses in every 100 tested (0.082; 95% CI 0.044-0.119). The mean incremental cost per additional diagnosis was $2646 (95% CI $1619-$5296). Probabilistic sensitivity analysis demonstrated that there was a 95% probability that AGH would be cost effective if decision makers were willing to pay $4550 for an additional diagnosis. Our model suggests that using AGH instead of conventional karyotyping for most ID patients provides good value for money. Deterministic sensitivity analysis found that employing AGH after first-line cytogenetic testing had proven uninformative did not provide good value for money when compared to using AGH as first-line testing. Copyright (c) 2010 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
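The incremental cost-effectiveness ratio and the probabilistic sensitivity analysis described above can be sketched as follows. The input distributions here are hypothetical normals chosen only to echo the reported point estimates; the paper's decision-analytic model is richer than this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # Monte Carlo draws for the probabilistic sensitivity analysis

# Hypothetical input distributions (assumed, not the paper's model):
d_cost = rng.normal(217.0, 23.0, n)    # incremental cost per patient ($)
d_diag = rng.normal(0.082, 0.019, n)   # additional diagnoses per patient

# Incremental cost-effectiveness ratio: cost per additional diagnosis.
icer = d_cost.mean() / d_diag.mean()

# Acceptability at a willingness-to-pay (WTP) per additional diagnosis:
# the share of draws in which the net monetary benefit is positive.
wtp = 4550.0
p_cost_effective = (wtp * d_diag - d_cost > 0).mean()
```

Plotting `p_cost_effective` across a range of WTP values gives the cost-effectiveness acceptability curve that such analyses typically report.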
Broekhuizen, Henk; IJzerman, Maarten J; Hauber, A Brett; Groothuis-Oudshoorn, Catharina G M
2017-03-01
The need for patient engagement has been recognized by regulatory agencies, but there is no consensus about how to operationalize this. One approach is the formal elicitation and use of patient preferences for weighing clinical outcomes. The aim of this study was to demonstrate how patient preferences can be used to weigh clinical outcomes when both preferences and clinical outcomes are uncertain by applying a probabilistic value-based multi-criteria decision analysis (MCDA) method. Probability distributions were used to model random variation and parameter uncertainty in preferences, and parameter uncertainty in clinical outcomes. The posterior value distributions and rank probabilities for each treatment were obtained using Monte-Carlo simulations. The probability of achieving the first rank is the probability that a treatment represents the highest value to patients. We illustrated our methodology for a simplified case on six HIV treatments. Preferences were modeled with normal distributions and clinical outcomes were modeled with beta distributions. The treatment value distributions showed the rank order of treatments according to patients and illustrate the remaining decision uncertainty. This study demonstrated how patient preference data can be used to weigh clinical evidence using MCDA. The model takes into account uncertainty in preferences and clinical outcomes. The model can support decision makers during the aggregation step of the MCDA process and provides a first step toward preference-based personalized medicine, yet requires further testing regarding its appropriate use in real-world settings.
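A minimal version of the probabilistic MCDA aggregation step might look like this: draw uncertain clinical outcomes from beta distributions and uncertain preference weights from normal distributions, compute a weighted value per treatment per draw, and tally first-rank probabilities. The treatments, criteria, and all parameters below are hypothetical, not the paper's fitted HIV data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # Monte Carlo draws

# Hypothetical setup: 3 treatments scored on 2 criteria
# (say, efficacy and tolerability).
a = np.array([[80, 20], [70, 30], [60, 40]])   # beta "successes"
b = np.array([[20, 80], [30, 70], [40, 60]])   # beta "failures"
outcomes = rng.beta(a, b, (n, 3, 2))           # (draw, treatment, criterion)

# Uncertain preference weights: normal draws, normalized to sum to 1.
w = np.abs(rng.normal([0.7, 0.3], [0.1, 0.1], (n, 2)))
w /= w.sum(axis=1, keepdims=True)

values = (outcomes * w[:, None, :]).sum(axis=2)  # value per draw/treatment
rank1 = np.bincount(values.argmax(axis=1), minlength=3) / n
# rank1[t]: probability that treatment t offers the highest patient value
```

The spread of each column of `values` visualizes the remaining decision uncertainty the abstract refers to, while `rank1` summarizes it as first-rank probabilities.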
Sense of place: Mount Desert Island residents and Acadia National Park
Nicole L. Ballinger; Robert E. Manning
1998-01-01
The framework of sense of place, developed by humanistic geographers, has been employed by researchers in their efforts to explain the range of attachments, values, and meanings assigned to natural areas. This study used an exploratory approach to address the range of values and meanings assigned by local residents to places in Acadia National Park. Qualitative...
Uncertain deduction and conditional reasoning.
Evans, Jonathan St B T; Thompson, Valerie A; Over, David E
2015-01-01
There has been a paradigm shift in the psychology of deductive reasoning. Many researchers no longer think it appropriate to ask people to assume premises and decide what necessarily follows, with the results evaluated by binary extensional logic. Most everyday and scientific inference is made from more or less confidently held beliefs, not from assumptions, and the relevant normative standard is Bayesian probability theory. We argue that the study of "uncertain deduction" should directly ask people to assign probabilities to both premises and conclusions, and we report an experiment using this method. We assess this reasoning by two Bayesian metrics: probabilistic validity and coherence according to probability theory. On both measures, participants perform above chance in conditional reasoning, but they do much better when statements are grouped as inferences rather than evaluated in separate tasks.
A strategy for understanding noise-induced annoyance
NASA Astrophysics Data System (ADS)
Fidell, S.; Green, D. M.; Schultz, T. J.; Pearsons, K. S.
1988-08-01
This report provides a rationale for development of a systematic approach to understanding noise-induced annoyance. Two quantitative models are developed to explain: (1) the prevalence of annoyance due to residential exposure to community noise sources; and (2) the intrusiveness of individual noise events. Both models deal explicitly with the probabilistic nature of annoyance, and assign clear roles to acoustic and nonacoustic determinants of annoyance. The former model provides a theoretical foundation for empirical dosage-effect relationships between noise exposure and community response, while the latter model differentiates between the direct and immediate annoyance of noise intrusions and response bias factors that influence the reporting of annoyance. The assumptions of both models are identified, and the nature of the experimentation necessary to test hypotheses derived from the models is described.
NASA Astrophysics Data System (ADS)
Acedo, L.; Villanueva-Oller, J.; Moraño, J. A.; Villanueva, R.-J.
2013-01-01
The Berkeley Open Infrastructure for Network Computing (BOINC) has become the standard open-source solution for grid computing over the Internet. Volunteers use their computers to complete a small part of the task assigned by a dedicated server. We have developed a BOINC project called Neurona@Home whose objective is to simulate a cellular automata random network with at least one million neurons. We consider a cellular automata version of the integrate-and-fire model in which excitatory and inhibitory nodes can activate or deactivate neighbor nodes according to a set of probabilistic rules. Our aim is to determine the phase diagram of the model and its behaviour, and to compare it with electroencephalographic signals measured in real brains.
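One synchronous update of such a probabilistic cellular automaton with excitatory and inhibitory nodes can be sketched as below. The specific update rule and every parameter here are illustrative assumptions, not the Neurona@Home implementation:

```python
import numpy as np

def step(state, adj, sign, p_fire, p_decay, rng):
    """One synchronous update of a probabilistic integrate-and-fire CA.
    state: 0/1 activity per node; adj[i, j] = 1 if node i receives input
    from node j; sign: +1 excitatory / -1 inhibitory per node. Nodes
    with net excitatory drive fire with probability p_fire; active nodes
    with net inhibitory drive are silenced, and every active node also
    deactivates spontaneously with probability p_decay."""
    drive = adj @ (sign * state)                    # net input to each node
    fire = (drive > 0) & (rng.random(len(state)) < p_fire)
    quench = (drive < 0) | (rng.random(len(state)) < p_decay)
    new = state.copy()
    new[fire] = 1
    new[state.astype(bool) & quench] = 0
    return new

# Small demo network: 200 nodes, 5% connectivity, 80% excitatory.
rng = np.random.default_rng(0)
n_nodes = 200
adj = (rng.random((n_nodes, n_nodes)) < 0.05).astype(float)
sign = np.where(rng.random(n_nodes) < 0.8, 1.0, -1.0)
state = (rng.random(n_nodes) < 0.1).astype(int)
for _ in range(20):
    state = step(state, adj, sign, p_fire=0.5, p_decay=0.2, rng=rng)
```

Tracking the fraction of active nodes over many steps, while sweeping the firing and decay probabilities, is one way to map out a phase diagram for such a model.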
The probabilistic nature of preferential choice.
Rieskamp, Jörg
2008-11-01
Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes different choices in nearly identical situations, or why the magnitude of these inconsistencies varies in different situations. To illustrate the advantage of probabilistic theories, three probabilistic theories of decision making under risk are compared with their deterministic counterparts. The probabilistic theories are (a) a probabilistic version of a simple choice heuristic, (b) a probabilistic version of cumulative prospect theory, and (c) decision field theory. By testing the theories with the data from three experimental studies, the superiority of the probabilistic models over their deterministic counterparts in predicting people's decisions under risk becomes evident. When testing the probabilistic theories against each other, decision field theory provides the best account of the observed behavior.
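One common way to turn a deterministic theory into a probabilistic one is to pass its option values through a logit (softmax) choice rule, so the same agent can choose differently in nearly identical situations. This generic rule is only an illustration; the specific probabilistic theories compared in the paper differ in detail:

```python
import numpy as np

def choice_probs(values, sensitivity):
    """Logit (softmax) choice rule: deterministic option values become
    choice probabilities. Higher sensitivity -> more deterministic
    choices; sensitivity 0 -> pure guessing."""
    v = sensitivity * np.asarray(values, dtype=float)
    v = v - v.max()                  # subtract max for numerical stability
    p = np.exp(v)
    return p / p.sum()

# An option valued slightly above another is chosen more often,
# but not always -- choices remain probabilistic.
p = choice_probs([1.0, 1.3], sensitivity=2.0)
```

The `sensitivity` parameter is what lets such models also capture why the magnitude of choice inconsistencies varies across situations.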
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation of the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE; furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
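The classical point estimate that this Bayesian procedure generalizes, a truncated KLE obtained from the SVD of centered samples, can be sketched as follows, using Brownian-motion-like paths in the spirit of the paper's illustration; the dimensions and sample sizes are arbitrary:

```python
import numpy as np

def kle_from_samples(samples, n_modes):
    """Truncated KLE / PCA point estimate: center the samples, take the
    SVD of the data matrix, and keep the leading n_modes directions
    (this is the classical estimate that ignores sampling error)."""
    mean = samples.mean(axis=0)
    X = samples - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]              # orthonormal basis functions
    coeffs = X @ modes.T              # projection coefficients
    return mean, modes, coeffs

# Brownian-motion-like paths as a finite-dimensional stochastic process.
rng = np.random.default_rng(0)
paths = rng.normal(0.0, 1.0, (500, 100)).cumsum(axis=1)
mean, modes, coeffs = kle_from_samples(paths, 5)
recon = mean + coeffs @ modes         # rank-5 surrogate of the process
```

The paper's contribution is to replace the single `modes` matrix with a posterior distribution over orthonormal matrices, so that the uncertainty from the finite sample of 500 paths is carried into the surrogate.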
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
Chowdhary, Kenny; Najm, Habib N.
2016-04-13
Testing for ontological errors in probabilistic forecasting models of natural systems
Marzocchi, Warner; Jordan, Thomas H.
2014-01-01
Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265
NASA Technical Reports Server (NTRS)
Bast, Callie Corinne Scheidt
1994-01-01
This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill
2012-01-01
In video analytics, robust observation detection is very important, as the content of videos varies greatly, especially for tracking implementations. In contrast to the still-image processing field, problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While we believe that feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD—the deterministic and probabilistic approaches—have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a 2-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches with Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other, simpler detection methods due to its heavy processing requirements. PMID:23202226
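The probabilistic patch-matching step, scoring candidate patches by how likely their histograms are under a Poisson model, can be sketched as below. The bin counts are toy values, and this is a simplification of PBOD's full RGB/HSV pipeline with position and size smoothing:

```python
import numpy as np
from math import lgamma

def poisson_loglik(counts, rates):
    """Log-likelihood of histogram bin counts under independent
    Poisson bins with the given expected counts (rates)."""
    rates = np.maximum(np.asarray(rates, dtype=float), 1e-9)
    counts = np.asarray(counts)
    norm = sum(lgamma(int(k) + 1) for k in counts)  # sum of log(k!)
    return float((counts * np.log(rates) - rates).sum() - norm)

def best_patch(template_hist, candidate_hists):
    """Pick the candidate patch whose histogram is most likely under a
    Poisson model whose rates are the template patch's histogram."""
    scores = [poisson_loglik(c, template_hist) for c in candidate_hists]
    return int(np.argmax(scores))

# Toy 3-bin histograms: candidate 0 resembles the template, 1 does not.
template = np.array([10.0, 5.0, 1.0])
candidates = [np.array([9, 6, 1]), np.array([1, 5, 10])]
match = best_patch(template, candidates)
```

Because the score is a log-likelihood rather than a correlation, it plugs naturally into the maximum-likelihood and Bayesian smoothing steps the abstract describes.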
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nooij, R.J.W. de; Lotterman, K.M.; Sande, P.H.J. van de
Environmental Impact Assessment (EIA) must account for legally protected and endangered species. Uncertainties relating to the validity and sensitivity of EIA arise from predictions and valuation of effects on these species. This paper presents a validity and sensitivity analysis of a model (BIO-SAFE) for assessment of impacts of land use changes and physical reconstruction measures on legally protected and endangered river species. The assessment is based on links between species (higher plants, birds, mammals, reptiles and amphibians, butterflies, and dragon- and damselflies) and ecotopes (landscape ecological units, e.g., river dune, soft wood alluvial forests), and on value assignment to protected and endangered species using different valuation criteria (i.e., the EU Habitats and Birds directives, the Conventions of Bern and Bonn, and Red Lists). The validity of BIO-SAFE has been tested by comparing predicted effects of landscape changes on the diversity of protected and endangered species with observed changes in biodiversity in five reconstructed floodplains. The sensitivity of BIO-SAFE to value assignment has been analysed using data from a Strategic Environmental Assessment concerning the Spatial Planning Key Decision for reconstruction of the Dutch floodplains of the river Rhine, aimed at flood defence and ecological rehabilitation. The weights given to the valuation criteria for protected and endangered species were varied and the effects on the ranking of alternatives were quantified. A statistically significant correlation (p < 0.01) between predicted and observed values for protected and endangered species was found. The sensitivity of the model to value assignment proved to be low. Comparison of five realistic valuation options showed that different rankings of scenarios predominantly occur when valuation criteria are left out of the assessment. Based on these results we conclude that linking species to ecotopes can be used for adequate impact assessments. Quantification of the sensitivity of impact assessment to value assignment shows that a model like BIO-SAFE is relatively insensitive to the assignment of values to different policy- and legislation-based criteria. Arbitrariness of the value assignment therefore has a very limited effect on assessment outcomes. However, the decision whether or not to include valuation criteria is very important.
Impact of Probabilistic Weather on Flight Routing Decisions
NASA Technical Reports Server (NTRS)
Sheth, Kapil; Sridhar, Banavar; Mulfinger, Daniel
2006-01-01
Flight delays in the United States have been found to increase year after year, along with the increase in air traffic. During the four-month period from May through August of 2005, weather-related delays accounted for roughly 70% of all reported delays. The current state of weather prediction in the tactical (within 2 hours) timeframe is at manageable levels; however, forecasting weather for the strategic (2-6 hours) timeframe is still not dependable enough for long-term planning. In the absence of reliable severe weather forecasts, decision-making for flights longer than two hours is challenging. This paper deals with an approach of using probabilistic weather prediction for Traffic Flow Management, and a general method of using this prediction for estimating expected values of flight length and delays in the National Airspace System (NAS). The current state of the art in convective weather forecasting is employed to aid decision makers in arriving at decisions for traffic flow and flight planning. The six-agency effort working on the Next Generation Air Transportation System (NGATS) has considered weather-assimilated decision-making as one of its eight principal foci. The Weather Integrated Product Team has considered integrated weather information and improved aviation weather forecasts as two of its main efforts (Ref. 1, 2). Recently, research has focused on the concept of operations for strategic traffic flow management (Ref. 3) and how weather data can be integrated for improved decision-making for efficient traffic management initiatives (Ref. 4, 5). An overview of the weather data needs and benefits of various participants in the air traffic system, along with available products, can be found in Ref. 6. Previous work related to the use of weather data in identifying and categorizing pilot intrusions into severe weather regions (Ref.
7, 8) has demonstrated a need for better forecasting in the strategic planning timeframes and for moving towards a probabilistic description of weather (Ref. 9). This paper focuses on using a specified probability in a local region for flight intrusion/deviation decision-making. The process uses a probabilistic weather description and implements it in an air traffic assessment system to study trajectories of aircraft crossing a cut-off probability contour. This value would be useful for meteorologists in creating optimum distribution profiles for severe weather. Once available, the expected values of flight path and aggregate delays are calculated for efficient operations. The current research, however, does not deal with the issue of multiple cell encounters, or with echo tops; these will be topics of future work.
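The expected-value idea in this abstract can be sketched in a few lines. This is a hedged illustration, not the paper's algorithm: assume a flight either files a strategic deviation when the forecast probability exceeds a cutoff contour, or files the direct route and accepts a probability-weighted chance of a tactical reroute. All route lengths and probabilities below are invented.

```python
# Hedged sketch: expected flight length when a route crosses a convective
# cell forecast with probability p_wx. If the cell materializes, the flight
# deviates around it; otherwise it flies the direct route.

def expected_flight_length(direct_nm, deviation_nm, p_wx, p_cutoff=0.7):
    """Expected length: file the deviation only when the forecast
    probability exceeds the cutoff contour; otherwise fly direct and
    accept the risk of a tactical reroute if the cell appears."""
    if p_wx >= p_cutoff:
        # strategic deviation is filed: length is deterministic
        return deviation_nm
    # direct route filed: with probability p_wx a tactical deviation is needed
    return (1.0 - p_wx) * direct_nm + p_wx * deviation_nm

# example: 800 nm direct, 860 nm around the cell, 40% convection forecast
print(expected_flight_length(800.0, 860.0, 0.40))  # 0.6*800 + 0.4*860 = 824.0
```

Summing such expected lengths (and the corresponding expected delays) over all affected flights gives aggregate NAS-level estimates of the kind the paper computes.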
Pelekis, Michael; Nicolich, Mark J; Gauthier, Joseph S
2003-12-01
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population, which is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor UFH-TK was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x). This article deals exclusively with the toxicokinetic component of the UF.
The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than applied dose. It allows for the replacement of the default adult and children intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates for their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
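The percentile-ratio calculation at the heart of this framework can be sketched with Monte Carlo sampling. This is a hedged illustration only: the steady-state model, the parameter names, and every distribution below are invented for demonstration, not taken from the study.

```python
import numpy as np

# Hedged sketch: sample toxicokinetic parameters from PDFs, propagate to a
# steady-state target-tissue concentration, and take the 95th/50th
# percentile ratio as a data-derived intraspecies toxicokinetic UF.
# All parameter values and spreads are illustrative assumptions.

rng = np.random.default_rng(0)
n = 100_000

# lognormal inter-individual variability in clearance and partitioning
clearance = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n)   # L/h, assumed
partition = rng.lognormal(mean=np.log(2.0), sigma=0.2, size=n)   # tissue:blood

dose_rate = 1.0  # mg/h, fixed exposure for every individual
tissue_conc = partition * dose_rate / clearance  # toy steady-state model

uf_tk = np.percentile(tissue_conc, 95) / np.percentile(tissue_conc, 50)
print(round(uf_tk, 2))
```

The resulting ratio plays the role of the default 3.2x toxicokinetic half of the 10-fold intraspecies UF; with chemical-specific PDFs it becomes a data-derived replacement for that default.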
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.
2003-01-01
The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting, sponsored by the Picatinny Arsenal during March 1-3, 2004 at the Westin Morristown, will report progress on projects for probabilistic assessment of Army systems and launch an initiative for probabilistic education. The meeting features several Army and industry senior executives and an Ivy League professor to provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members, including members of national/international standing, the mission of the G-11's Probabilistic Methods Committee is to enable and facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries through better, faster, greener, smarter, affordable and reliable product development.
Using X-ray absorption to probe sulfur oxidation states in complex molecules
NASA Astrophysics Data System (ADS)
Vairavamurthy, A.
1998-10-01
X-ray absorption near-edge structure (XANES) spectroscopy offers an important non-destructive tool for determining oxidation states and for characterizing chemical speciation. The technique was used to experimentally verify the oxidation states of sulfur in different types of complex molecules, because the traditional assignment of these values involves irregularities and uncertainties. The usual practice of determining oxidation states involves applying a set of conventional rules. The oxidation state, which ranges from -2 to +6 across different sulfur compounds, is an important control on the chemical speciation of sulfur. Experimental oxidation-state values for various types of sulfur compounds were assigned from their XANES peak-energy positions, using a scale in which elemental sulfur and sulfate are designated 0 and +6, respectively. Because these XANES-based values differed considerably from conventionally determined oxidation states for most sulfur compounds, a new term, 'oxidation index', was coined to describe them. The experimental values were closer to the conventional values obtained by assigning shared electrons to the more electronegative atoms than to those based on other customary assignment rules. Because the oxidation index is distinct and characteristic for each type of sulfur functionality, it becomes an important parameter for characterizing sulfur species and for experimentally verifying uncertain oxidation states.
NASA Technical Reports Server (NTRS)
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software developed at the NASA Lewis Research Center eases this task by providing a tool that uses probabilistic reliability analysis techniques to optimize the design and manufacture of brittle material components. CARES/Life is an integrated package that predicts the probability of a monolithic ceramic component's failure as a function of its time in service. It couples commercial finite element programs--which resolve a component's temperature and stress distribution--with reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength.
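The "widely varying strength values" of brittle materials are conventionally described by a Weibull distribution, which is the probabilistic strength model underlying reliability codes of this kind. The sketch below is a minimal two-parameter Weibull failure probability for a uniformly stressed unit volume; the modulus and characteristic strength are illustrative values, not NASA data, and CARES/Life itself integrates this over finite element stress fields.

```python
import math

# Minimal sketch of the probabilistic strength model behind brittle-material
# reliability analysis: a two-parameter Weibull distribution gives the
# failure probability at applied stress sigma. Parameters are illustrative.

def weibull_failure_probability(sigma, sigma_0, m):
    """P_f = 1 - exp(-(sigma/sigma_0)^m) for a unit effective volume,
    with sigma_0 the characteristic strength and m the Weibull modulus."""
    return 1.0 - math.exp(-((sigma / sigma_0) ** m))

# a low Weibull modulus m means widely scattered strengths, as in ceramics
p_f = weibull_failure_probability(sigma=300.0, sigma_0=400.0, m=10.0)
print(round(p_f, 4))  # 0.0548
```

Even at 75% of the characteristic strength, a few percent of nominally identical components fail, which is why deterministic safety factors serve ceramics poorly.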
Students’ difficulties in probabilistic problem-solving
NASA Astrophysics Data System (ADS)
Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.
2018-03-01
Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students' difficulties in solving probabilistic problems, focusing on analyzing and describing students' errors during problem-solving. This research used a qualitative method with a case study strategy. The subjects were ten 9th-grade students selected by purposive sampling. The data comprise students' probabilistic problem-solving results and recorded interviews regarding their difficulties in solving the problems. The data were analyzed descriptively using Miles and Huberman's steps. The results show that students' difficulties in solving probabilistic problems can be divided into three categories: first, difficulties in understanding the probabilistic problem; second, difficulties in choosing and using appropriate strategies for solving the problem; and third, difficulties with the computational process. These results indicate that students are not yet able to apply their knowledge and abilities to probabilistic problems. Therefore, it is important for mathematics teachers to plan probabilistic learning that optimizes students' probabilistic thinking ability.
A Software Tool for Quantitative Seismicity Analysis - ZMAP
NASA Astrophysics Data System (ADS)
Wiemer, S.; Gerstenberger, M.
2001-12-01
Earthquake catalogs are probably the most basic product of seismology, and remain arguably the most useful for tectonic studies. Modern seismograph networks can locate up to 100,000 earthquakes annually, providing a continuous and sometimes overwhelming stream of data. ZMAP is a set of tools driven by a graphical user interface (GUI), designed to help seismologists analyze catalog data. ZMAP is primarily a research tool suited to the evaluation of catalog quality and to addressing specific hypotheses; however, it can also be useful in routine network operations. Examples of ZMAP features include catalog quality assessment (artifacts, completeness, explosion contamination), interactive data exploration, mapping of transients in seismicity (rate changes, b-values, p-values), fractal dimension analysis and stress tensor inversions. Roughly 100 scientists worldwide have used the software at least occasionally, and about 30 peer-reviewed publications have made use of ZMAP. The ZMAP code is open source, written in Matlab by The MathWorks, a commercial language widely used in the natural sciences. ZMAP was first published in 1994 and has continued to grow over the past 7 years; recently, we released ZMAP v.6. The poster will introduce the features of ZMAP, focusing specifically on features related to time-dependent probabilistic hazard assessment. We are currently implementing a ZMAP-based system that computes probabilistic hazard maps, combining the stationary background hazard with aftershock and foreshock hazard into a comprehensive time-dependent probabilistic hazard map. These maps will be displayed in near real time on the Internet. This poster is also intended as a forum for ZMAP users to provide feedback and discuss the future of ZMAP.
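The b-value mapping mentioned above rests on the Gutenberg-Richter relation; the standard estimator (Aki's maximum-likelihood formula, which tools of this kind commonly apply with a binning correction) is compact enough to sketch. The magnitudes below are synthetic, and this is an illustration of the estimator, not ZMAP's code.

```python
import math

# Sketch of the maximum-likelihood b-value estimate (Aki 1965, with the
# usual half-bin correction) applied to a synthetic magnitude sample.

def b_value(mags, mc, dm=0.1):
    """b = log10(e) / (mean(M) - (Mc - dm/2)) for magnitudes M >= Mc,
    where Mc is the completeness magnitude and dm the binning width."""
    above = [m for m in mags if m >= mc]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

mags = [2.0, 2.1, 2.3, 2.2, 2.6, 2.0, 2.4, 2.1, 2.8, 2.0]
print(round(b_value(mags, mc=2.0), 2))  # 1.45
```

Mapping this estimate over a spatial grid, with catalog completeness assessed first, is exactly the kind of transient-seismicity analysis described in the abstract.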
Wang, Yan; Deng, Lei; Caballero-Guzman, Alejandro; Nowack, Bernd
2016-12-01
Nano iron oxide particles are beneficial to our daily lives through their use in paints, construction materials, biomedical imaging and other industrial fields. However, little is known about the possible risks associated with the current exposure level of engineered nano iron oxides (nano-FeOX) to organisms in the environment. The goal of this study was to predict the release of nano-FeOX to the environment and assess the risks for surface waters in the EU and Switzerland. The material flows of nano-FeOX to technical compartments (waste incineration and waste water treatment plants) and to the environment were calculated with a probabilistic modeling approach. The mean value of the predicted environmental concentrations (PECs) of nano-FeOX in surface waters in the EU for a worst-case scenario (no particle sedimentation) was estimated to be 28 ng/l. Using a probabilistic species sensitivity distribution, the predicted no-effect concentration (PNEC) was determined from ecotoxicological data. The risk characterization ratio, calculated by dividing the PEC by the PNEC, was used to characterize the risks. The mean risk characterization ratio was predicted to be several orders of magnitude smaller than 1 (1.4 × 10⁻⁴). This modeling effort therefore indicates that only a very limited risk is posed by the current release level of nano-FeOX to organisms in surface waters. However, the hazards of nano-FeOX to organisms in other ecosystems (such as sediment) need to be assessed to determine the overall risk of these particles to the environment.
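The probabilistic PEC/PNEC division can be sketched directly. This is a hedged toy version: only the 28 ng/l mean PEC comes from the abstract; the PNEC level and both spreads are invented assumptions, chosen so the median ratio lands near the reported order of magnitude.

```python
import numpy as np

# Hedged sketch of a probabilistic risk characterization: divide sampled
# PEC by sampled PNEC distributions and inspect the resulting ratio.
# Distribution parameters below are illustrative assumptions.

rng = np.random.default_rng(42)
n = 50_000

pec = rng.lognormal(mean=np.log(28e-9), sigma=0.8, size=n)    # g/l, assumed spread
pnec = rng.lognormal(mean=np.log(0.2e-3), sigma=0.6, size=n)  # g/l, assumed

rcr = pec / pnec               # risk characterization ratio, sample by sample
print(bool(rcr.mean() < 1.0))  # risk well below the threshold of 1
```

Working with full distributions rather than point values is what lets the study report not just a mean ratio but how far the upper tail sits below 1.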
A probabilistic Hu-Washizu variational principle
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Besterfield, G. H.
1987-01-01
A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
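The weighted-averages approach described above admits a compact sketch: a taxon's optimum along a pollution gradient is its abundance-weighted mean of an environmental variable, and optima are then rescaled onto the 0-10 tolerance range. The function names, site data, and single-variable simplification below are all illustrative assumptions, not the paper's actual computation (which combines six chemical constituents).

```python
# Hedged sketch of the weighted-averages approach with one chemical
# variable (BOD); the real method averages across six constituents.

def weighted_average_optimum(abundances, gradient_values):
    """Abundance-weighted mean of the environmental variable."""
    total = sum(abundances)
    return sum(a * x for a, x in zip(abundances, gradient_values)) / total

def rescale_to_tolerance(optima, lo=0.0, hi=10.0):
    """Linearly map taxon optima onto the 0-10 tolerance-value scale."""
    mn, mx = min(optima), max(optima)
    return [lo + (o - mn) * (hi - lo) / (mx - mn) for o in optima]

# abundances of each taxon at four sites, and the sites' BOD (mg/l)
bod = [1.0, 2.0, 5.0, 9.0]
mayfly = weighted_average_optimum([40, 10, 2, 0], bod)   # sensitive taxon
worm = weighted_average_optimum([0, 2, 10, 40], bod)     # tolerant taxon
print([round(v, 2) for v in rescale_to_tolerance([mayfly, worm])])  # [0.0, 10.0]
```

The taxon abundant at clean sites ends up near 0 (sensitive) and the taxon abundant at degraded sites near 10 (tolerant), matching the scale described in the abstract.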
ERIC Educational Resources Information Center
Rapchak, Marcia E.; Brungard, Allison B.; Bergfelt, Theodore W.
2016-01-01
Using the Information Literacy VALUE Rubric provided by the AAC&U, this study compares thirty final capstone assignments in a research course in a learning community with thirty final assignments from students not in learning communities. Results indicated higher performance of the non-learning community students; however, transfer skills…
48 CFR 9904.415-60 - Illustrations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... in 9904.415-50(a) are met. Year; amount of future payment × discount rate; 8-percent present value... Future payment × present-value factor for 2 yr at 8 pct = assignable cost: $2,000 × 0.8573 = $1,714... was used as the discount rate at the time the cost was assigned. The IRS rate in effect at the date of...
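The regulation's illustration reduces to an ordinary present-value computation, which can be checked directly: the present-value factor is 1/(1+r)^n, and the assignable cost is the future payment times that factor.

```python
# Present-value check of the regulation's worked figure:
# assignable cost = future payment x PV factor, PV factor = 1/(1+r)^n.

def pv_factor(rate, years):
    return 1.0 / (1.0 + rate) ** years

factor = pv_factor(0.08, 2)      # 2 years at 8 percent
assignable = 2000.0 * factor
print(round(factor, 4), round(assignable, 2))  # 0.8573 1714.68
```

This reproduces the $2,000 × 0.8573 ≈ $1,714 figure quoted in the excerpt.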
Preparation and characterization of a suite of ephedra-containing standard reference materials.
Sharpless, Katherine E; Anderson, David L; Betz, Joseph M; Butler, Therese A; Capar, Stephen G; Cheng, John; Fraser, Catharine A; Gardner, Graeme; Gay, Martha L; Howell, Daniel W; Ihara, Toshihide; Khan, Mansoor A; Lam, Joseph W; Long, Stephen E; McCooeye, Margaret; Mackey, Elizabeth A; Mindak, William R; Mitvalsky, Staci; Murphy, Karen E; NguyenPho, Agnes; Phinney, Karen W; Porter, Barbara J; Roman, Mark; Sander, Lane C; Satterfield, Mary B; Scriver, Christine; Sturgeon, Ralph; Thomas, Jeanice Brown; Vocke, Robert D; Wise, Stephen A; Wood, Laura J; Yang, Lu; Yen, James H; Ziobro, George C
2006-01-01
The National Institute of Standards and Technology, the U.S. Food and Drug Administration, Center for Drug Evaluation and Research and Center for Food Safety and Applied Nutrition, and the National Institutes of Health, Office of Dietary Supplements, are collaborating to produce a series of Standard Reference Materials (SRMs) for dietary supplements. A suite of ephedra materials is the first in the series, and this paper describes the acquisition, preparation, and value assignment of these materials: SRMs 3240 Ephedra sinica Stapf Aerial Parts, 3241 E. sinica Stapf Native Extract, 3242 E. sinica Stapf Commercial Extract, 3243 Ephedra-Containing Solid Oral Dosage Form, and 3244 Ephedra-Containing Protein Powder. Values are assigned for ephedrine alkaloids and toxic elements in all 5 materials. Values are assigned for other analytes (e.g., caffeine, nutrient elements, proximates, etc.) in some of the materials, as appropriate. Materials in this suite of SRMs are intended for use as primary control materials when values are assigned to in-house (secondary) control materials and for validation of analytical methods for the measurement of alkaloids, toxic elements, and, in the case of SRM 3244, nutrients in similar materials.
Probabilistic Aeroelastic Analysis of Turbomachinery Components
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.
2004-01-01
A probabilistic approach is described for aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The study shows that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
NASA Astrophysics Data System (ADS)
Barrera, A.; Altava-Ortiz, V.; Llasat, M. C.; Barnolas, M.
2007-09-01
Between 11 and 13 October 2005 several flash floods occurred along the coast of Catalonia (NE Spain) due to a significant heavy rainfall event. Maximum rainfall reached values of up to 250 mm in 24 h, and the total amount recorded during the event in some places was close to 350 mm. Barcelona city was also in the affected area, where high rainfall intensities were registered, but only a few small floods occurred, thanks to the city's efficient urban drainage system. Two forecasting methods have been applied in order to evaluate their capability to predict extreme events: the deterministic MM5 model and a probabilistic model based on the analogues method. The MM5 simulation allows an accurate analysis of the main meteorological features at high spatial resolution (2 km), such as the formation of convergence lines over the region, which partially explains the location of maximum precipitation during the event. On the other hand, the analogues technique shows good agreement between the highest probability values and the actually affected areas, although a larger rainfall database would be needed to improve the results. The comparison of the observed precipitation with both QPF (quantitative precipitation forecast) methods shows that the analogues technique tends to underestimate the rainfall values while the MM5 simulation tends to overestimate them.
Probabilistic earthquake hazard analysis for Cairo, Egypt
NASA Astrophysics Data System (ADS)
Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan
2016-04-01
Cairo is the capital of Egypt, the largest city in the Arab world and Africa, and the sixteenth largest metropolitan area in the world. It was founded in the tenth century (969 AD) and is 1046 years old. It has long been a center of the region's political and cultural life. Therefore, earthquake risk assessment for Cairo is of great importance. The present work aims to analyze the earthquake hazard of Cairo as a key input element for risk assessment. The regional seismotectonic setting shows that Cairo could be affected by both far- and near-field seismic sources. The seismic hazard of Cairo has been estimated using the probabilistic seismic hazard approach. A logic tree framework was used in the calculations. Epistemic uncertainties were taken into account by using alternative seismotectonic models and alternative ground motion prediction equations. Seismic hazard values have been estimated within a grid of 0.1° × 0.1° spacing for all of Cairo's districts at different spectral periods and four return periods (224, 615, 1230, and 4745 years). Moreover, uniform hazard spectra have been calculated at the same return periods. The pattern of the contour maps shows that the highest values of peak ground acceleration are concentrated in the eastern districts (e.g., El Nozha) and the lowest values in the northern and western districts (e.g., El Sharabiya and El Khalifa).
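The probabilistic hazard calculation behind return periods like those quoted above can be sketched in its simplest Poissonian form: sum the annual exceedance rates contributed by each seismic source, then convert the total rate into an exceedance probability over a design life. The per-source rates below are invented placeholders, not values from the Cairo study.

```python
import math

# Minimal PSHA sketch: combine annual exceedance rates from independent
# sources and convert to a probability over a design life. Rates invented.

def poisson_prob(total_rate, years):
    """P(at least one exceedance in t years) = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-total_rate * years)

# assumed annual rates that PGA exceeds some level, one per source zone
rates = [1.0 / 500.0, 1.0 / 800.0, 1.0 / 2000.0]
lam = sum(rates)

p50 = poisson_prob(lam, 50)  # 50-year exceedance probability
print(round(1.0 / lam), round(p50, 3))  # 267 0.171
```

Repeating this for a range of ground-motion levels, spectral periods, and grid cells, and averaging over the logic-tree branches, yields the hazard maps and uniform hazard spectra the abstract describes.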
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)
2001-01-01
This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI) along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis encompassing the effects of high temperature and high cycle fatigue yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness.
The reliability did not change significantly for a change in distribution type except for a change in distribution from Gaussian to Weibull for the centrifugal load. The sensitivity factors determined to be most dominant were the centrifugal loading and the initial strength of the material. These two sensitivity factors were influenced most by a change in distribution type from Gaussian to Weibull. The education portion of this report describes short-term and long-term educational objectives. Such objectives serve to integrate research and education components of this project resulting in opportunities for ethnic minority students, principally Hispanic. The primary vehicle to facilitate such integration was the teaching of two probabilistic finite element method courses to undergraduate engineering students in the summers of 1998 and 1999.
Sutherland, J G H; Miksys, N; Furutani, K M; Thomson, R M
2014-01-01
To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for (125)I, (103)Pd, and (131)Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Metallic artifact mitigation techniques vary in ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts but residual artifacts near sources remain requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. 
Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra with the largest differences for (103)Pd seeds and smallest but still considerable differences for (131)Cs seeds. Despite producing differences in CT images, dose metrics calculated using the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
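Of the four mitigation methods, simple threshold replacement (STR) is compact enough to sketch. This is a hedged illustration of the general idea only: the window size, HU threshold, and replacement value below are invented, and the authors' actual implementation may differ in how it locates seeds and estimates true tissue values.

```python
import numpy as np

# Hedged sketch of simple threshold replacement (STR): inside a small
# neighbourhood of each seed, CT numbers above a threshold are assumed to
# be metal-artifact streaks and replaced by an estimated soft-tissue value.

def simple_threshold_replacement(ct, seed_ij, radius=5,
                                 hu_threshold=300, replacement_hu=40):
    ct = ct.copy()
    i0, j0 = seed_ij
    for i in range(max(0, i0 - radius), min(ct.shape[0], i0 + radius + 1)):
        for j in range(max(0, j0 - radius), min(ct.shape[1], j0 + radius + 1)):
            if ct[i, j] > hu_threshold:
                ct[i, j] = replacement_hu  # estimated true tissue value
    return ct

slice_hu = np.full((64, 64), -700)  # lung parenchyma background
slice_hu[30:34, 30:34] = 2500       # bright streaks around a seed
corrected = simple_threshold_replacement(slice_hu, seed_ij=(32, 32))
print(corrected.max())  # 40
```

Removing these high-HU voxels before tissue assignment is what prevents them from being mis-segmented as bone near the seeds, the error the abstract identifies as the largest source of dose differences.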
A Probabilistic Clustering Theory of the Organization of Visual Short-Term Memory
ERIC Educational Resources Information Center
Orhan, A. Emin; Jacobs, Robert A.
2013-01-01
Experimental evidence suggests that the content of a memory for even a simple display encoded in visual short-term memory (VSTM) can be very complex. VSTM uses organizational processes that make the representation of an item dependent on the feature values of all displayed items as well as on these items' representations. Here, we develop a…
LaWen T. Hollingsworth; Laurie L. Kurth; Bernard R. Parresol; Roger D. Ottmar; Susan J. Prichard
2012-01-01
Landscape-scale fire behavior analyses are important to inform decisions on resource management projects that meet land management objectives and protect values from adverse consequences of fire. Deterministic and probabilistic geospatial fire behavior analyses are conducted with various modeling systems including FARSITE, FlamMap, FSPro, and Large Fire Simulation...
ERIC Educational Resources Information Center
Hammerer, Dorothea; Eppinger, Ben
2012-01-01
In many instances, children and older adults show similar difficulties in reward-based learning and outcome monitoring. These impairments are most pronounced in situations in which reward is uncertain (e.g., probabilistic reward schedules) and if outcome information is ambiguous (e.g., the relative value of outcomes has to be learned).…
A Bayesian Nonparametric Causal Model for Regression Discontinuity Designs
ERIC Educational Resources Information Center
Karabatsos, George; Walker, Stephen G.
2013-01-01
The regression discontinuity (RD) design (Thistlewaite & Campbell, 1960; Cook, 2008) provides a framework to identify and estimate causal effects from a non-randomized design. Each subject of a RD design is assigned to the treatment (versus assignment to a non-treatment) whenever her/his observed value of the assignment variable equals or…
A Case-Study Assignment to Teach Theoretical Perspectives in Abnormal Psychology.
ERIC Educational Resources Information Center
Perkins, David V.
1991-01-01
Describes an assignment that requires students to organize, prepare, and revise a case study in abnormal behavior. Explains that students employ a single theoretical perspective in preparing a report on a figure from history, literature, the arts, or current events. Discusses the value of the assignment for students. (SG)
Probabilistic structural analysis methods for space propulsion system components
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
The development of a three-dimensional inelastic analysis methodology for the Space Shuttle main engine (SSME) structural components is described. The methodology is composed of: (1) composite load spectra, (2) probabilistic structural analysis methods, (3) the probabilistic finite element theory, and (4) probabilistic structural analysis. The methodology has led to significant technical progress in several important aspects of probabilistic structural analysis. The program and accomplishments to date are summarized.
NASA Technical Reports Server (NTRS)
Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.
1993-01-01
Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety-factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by the Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.
Convergence Time and Phase Transition in a Non-monotonic Family of Probabilistic Cellular Automata
NASA Astrophysics Data System (ADS)
Ramos, A. D.; Leite, A.
2017-08-01
In dynamical systems, some of the most important questions are related to phase transitions and convergence time. We consider a one-dimensional probabilistic cellular automaton whose components assume two possible states, zero and one, and interact with their two nearest neighbors at each time step. Under the local interaction, if a component is in the same state as its two neighbors, it does not change its state. In the other cases, a component in state zero turns into a one with probability α, and a component in state one turns into a zero with probability 1-β. For certain values of α and β, we show that the process always converges weakly to δ_0, the measure concentrated on the configuration where all the components are zeros. Moreover, the mean time of this convergence is finite, and we describe an upper bound in this case, which is a linear function of the initial distribution. We also demonstrate an application of our results to the percolation PCA. Finally, we use mean-field approximation and Monte Carlo simulations to show the coexistence of three distinct behaviours for some values of the parameters α and β.
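The local rule described in the abstract is easy to simulate. The sketch below (our own illustration, not code from the paper; parameter values are arbitrary) runs the automaton on a ring and reports the time to reach the all-zeros configuration δ_0.

```python
import random

def pca_step(x, alpha, beta, rng):
    """One synchronous update of the ring PCA described above."""
    n = len(x)
    new = x[:]
    for i in range(n):
        left, right = x[(i - 1) % n], x[(i + 1) % n]
        if left == x[i] == right:
            continue  # agrees with both neighbours: state is kept
        if x[i] == 0:
            new[i] = 1 if rng.random() < alpha else 0       # 0 -> 1 w.p. alpha
        else:
            new[i] = 0 if rng.random() < 1 - beta else 1    # 1 -> 0 w.p. 1-beta
    return new

def time_to_all_zeros(x, alpha, beta, rng, max_steps=10_000):
    """Number of steps until the all-zeros configuration is reached."""
    for t in range(max_steps):
        if all(v == 0 for v in x):
            return t
        x = pca_step(x, alpha, beta, rng)
    return None  # did not converge within the budget

rng = random.Random(0)
x0 = [0] + [1] * 9  # a single zero in a ring of ten components
print(time_to_all_zeros(x0, alpha=0.0, beta=0.0, rng=rng))  # -> 5
```

With α = 0 and β = 0 the dynamics are deterministic: the single zero spreads one cell per step in each direction, so the ring above empties in five steps, consistent with the linear-in-initial-distribution flavour of the bound described in the abstract.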
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of probability-based aircraft design over the traditional point-design approach. It also proposes a new methodology called Robust Design Simulation (RDS), which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach for achieving such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
Orhan, A Emin; Ma, Wei Ji
2017-07-26
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
Journals: Pathways to Thinking in Second-Year Algebra.
ERIC Educational Resources Information Center
Chapman, Kathleen P.
1996-01-01
Discusses the value of journal assignments for diagnosing and trouble-shooting misconceptions and for helping with lesson plans. Examples of assignments are included as well as student responses. (AIM)
NASA Applications and Lessons Learned in Reliability Engineering
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Fuller, Raymond P.
2011-01-01
Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision-making processes. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses applications in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of the case studies discussed are reliability-based life-limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbopump development, the impact of ET foam reliability on Space Shuttle System risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.
Quantum probabilistic logic programming
NASA Astrophysics Data System (ADS)
Balu, Radhakrishnan
2015-05-01
We describe a quantum-mechanics-based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite-valued ones, to express probability distributions and statistical correlations, a powerful feature for capturing relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM, with classical probability measures defined on the Herbrand base, and extending it to the quantum context. In the classical case, H-interpretations form the sample space, and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating model generation and verification via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples to combine statistical ensembles and predicates of first order logic to reason with situations involving uncertainty.
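The idea of assigning probability amplitudes to interpretations can be illustrated with a toy sketch (our own construction, not the paper's formalism): interpretations over a tiny Herbrand base are basis states, and a formula's probability is the squared-amplitude mass of its satisfying interpretations.

```python
import itertools
import math

# Toy Herbrand base of two ground atoms; an interpretation is a truth assignment.
atoms = ("p", "q")
interps = list(itertools.product([False, True], repeat=len(atoms)))

# Assign (unnormalised) complex amplitudes to the four interpretations, then
# normalise so the squared moduli form a probability distribution (Born rule).
raw = {iv: complex(1, k) for k, iv in enumerate(interps)}
norm = math.sqrt(sum(abs(a) ** 2 for a in raw.values()))
amp = {iv: a / norm for iv, a in raw.items()}

def prob(formula):
    """Probability of a formula = total squared amplitude of its models."""
    return sum(abs(amp[iv]) ** 2
               for iv in interps
               if formula(dict(zip(atoms, iv))))

p_or_q = prob(lambda m: m["p"] or m["q"])
true_ = prob(lambda m: True)
print(round(true_, 6))  # total probability mass is 1 by construction
```

The squared moduli here play the role of the classical probability measure on H-interpretations; superposition effects would appear only once amplitudes of different interpretations are allowed to interfere, which this sketch does not attempt.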
Probabilistic seismic hazard zonation for the Cuban building code update
NASA Astrophysics Data System (ADS)
Garcia, J.; Llanes-Buron, C.
2013-05-01
A probabilistic seismic hazard assessment has been performed in response to a revision and update of the Cuban building code (NC-46-99) for earthquake-resistant building construction. The hazard assessment has been carried out according to the standard probabilistic approach (Cornell, 1968), adopting the procedures used by other nations that have revised and updated their national building codes. Problems of earthquake catalogue treatment, attenuation of peak and spectral ground acceleration, and seismic source definition have been rigorously analyzed, and a logic-tree approach was used to represent the inevitable uncertainties encountered throughout the seismic hazard estimation process. The seismic zonation proposed here consists of a map of spectral acceleration values for short (0.2 s) and long (1.0 s) periods on rock conditions with a 1642-year return period, which is considered the maximum credible earthquake (ASCE 7-05). In addition, three other design levels are proposed (severe earthquake: 808-year return period; ordinary earthquake: 475-year return period; minimum earthquake: 225-year return period). The proposed seismic zonation complies with international standards (IBC-ICC) as well as with world tendencies in this field.
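The return periods quoted above map directly to exceedance probabilities over a design exposure time under the standard Poisson-occurrence assumption; a small sketch (ours, not the paper's) makes the conversion explicit.

```python
import math

def return_period(p_exceed, exposure_years):
    """Return period T for exceedance probability p_exceed in exposure_years,
    assuming Poisson occurrence: p = 1 - exp(-t / T)  =>  T = -t / ln(1 - p)."""
    return -exposure_years / math.log(1.0 - p_exceed)

def exceedance_prob(T, exposure_years):
    """Inverse mapping: probability of at least one exceedance in t years."""
    return 1.0 - math.exp(-exposure_years / T)

# The familiar 'ordinary earthquake' level: 10% in 50 years ~ 475 years.
print(round(return_period(0.10, 50)))   # -> 475
# Probability of exceeding the 1642-year level during a 50-year design life.
print(round(exceedance_prob(1642, 50), 3))
```

The 475-year ordinary-earthquake level in the abstract is exactly the 10%-in-50-years hazard level familiar from most modern codes.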
NASA Astrophysics Data System (ADS)
Zarekarizi, M.; Moradkhani, H.; Yan, H.
2017-12-01
The Operational Probabilistic Drought Forecasting System (OPDFS) is an online tool recently developed at Portland State University for operational agricultural drought forecasting. It is an integrated statistical-dynamical framework issuing probabilistic drought forecasts monthly for lead times of 1, 2, and 3 months. The statistical drought forecasting method utilizes copula functions to condition future soil moisture values on the antecedent states. Due to the stochastic nature of land surface properties, the antecedent soil moisture states are uncertain; therefore, a data assimilation system based on particle filtering (PF) is employed to quantify the uncertainties associated with the initial condition of the land state, i.e., soil moisture. The PF assimilates satellite soil moisture data into the Variable Infiltration Capacity (VIC) land surface model and ultimately updates the simulated soil moisture. The OPDFS builds on NOAA's seasonal drought outlook by offering drought probabilities instead of qualitative ordinal categories and provides the user with probability maps associated with a particular drought category. A retrospective assessment of the OPDFS showed that forecasting the 2012 Great Plains and 2014 California droughts was possible at least one month in advance. The OPDFS offers timely assistance to water managers, stakeholders, and decision-makers in developing resilience against uncertain upcoming droughts.
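A bare-bones illustration of the copula step (our sketch; the OPDFS fits copula families to data and uses PF-updated states, none of which is reproduced here): condition next month's soil-moisture percentile on the current one under a Gaussian copula, and report the probability of falling below a drought threshold.

```python
import math
from statistics import NormalDist

N = NormalDist()

def drought_prob_next_month(u_now, rho, threshold_pct=0.20):
    """P(next-month soil-moisture percentile < threshold | current percentile),
    under a Gaussian copula with correlation rho between the two months."""
    z_now = N.inv_cdf(u_now)          # map percentiles to normal scores
    z_thr = N.inv_cdf(threshold_pct)
    # Conditional law: Z_next | Z_now ~ Normal(rho * z_now, sqrt(1 - rho^2))
    cond = NormalDist(mu=rho * z_now, sigma=math.sqrt(1 - rho ** 2))
    return cond.cdf(z_thr)

# Dry antecedent conditions raise the forecast drought probability.
print(round(drought_prob_next_month(0.10, rho=0.7), 3))
print(round(drought_prob_next_month(0.60, rho=0.7), 3))
```

With zero inter-month correlation the forecast collapses to the climatological threshold probability, which is a useful sanity check on the conditioning.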
Methods for estimating the amount of vernal pool habitat in the northeastern United States
Van Meter, R.; Bailey, L.L.; Grant, E.H.C.
2008-01-01
The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.
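For the simple-random-sampling case mentioned above, the abundance estimate is the standard expansion estimator; the short sketch below (with made-up plot counts, not the study's data) shows the calculation together with its design-based variance.

```python
import random

def srs_total_estimate(counts, n_plots_sampled, n_plots_total):
    """Expansion estimator of a total from a simple random sample of plots:
    T_hat = N * ybar, with the usual SRS variance including the fpc."""
    n, N = n_plots_sampled, n_plots_total
    ybar = sum(counts) / n
    s2 = sum((y - ybar) ** 2 for y in counts) / (n - 1)   # sample variance
    t_hat = N * ybar
    var_t = N ** 2 * (1 - n / N) * s2 / n                 # fpc-corrected
    return t_hat, var_t

# Hypothetical survey: 25 of 500 forest plots searched for vernal pools.
rng = random.Random(1)
counts = [rng.choice([0, 0, 0, 1, 1, 2]) for _ in range(25)]
t_hat, var_t = srs_total_estimate(counts, 25, 500)
print(t_hat, round(var_t ** 0.5, 1))
```

The dual-frame estimator discussed in the abstract combines a list frame (e.g., photo-interpreted pools) with an area frame of this kind; the SRS piece above is the area-frame ingredient.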
NASA Technical Reports Server (NTRS)
Boyce, Lola; Bast, Callie C.; Trimble, Greg A.
1992-01-01
This report presents the results of a fourth-year effort of a research program, conducted for NASA-LeRC by the University of Texas at San Antonio (UTSA). The research included on-going development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subject to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with regression analysis of that data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program, PROMISC. Actual experimental materials data were obtained from industry and the open literature for materials typical of applications in aerospace propulsion system components. Material data for Inconel 718 have been analyzed using the developed methodology.
Deployment Analysis of a Simple Tape-Spring Hinge Using Probabilistic Methods
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Horta, Lucas G.
2012-01-01
Acceptance of new deployable structures architectures and concepts requires validated design methods to minimize the expense involved with technology validation flight testing. Deployable concepts for large lightweight spacecraft include booms, antennae, and masts. This paper explores the implementation of probabilistic methods in the design process for the deployment of a strain-energy mechanism, specifically a simple tape-spring hinge. Strain-energy mechanisms are attractive for deployment in very lightweight systems because they do not require the added mass and complexity associated with motors and controllers. However, designers are hesitant to include free-deployment strain-energy mechanisms because of the potential for uncontrolled behavior. In the example presented here, the tape-spring cross-sectional dimensions have been varied and a target displacement during deployment has been selected as the design metric. Specifically, the tape-spring should reach the final position in the shortest time with the minimal amount of overshoot and oscillation. Surrogate models have been used to reduce computational expense. Parameter values to achieve the target response have been computed and used to demonstrate the approach. Based on these results, the application of probabilistic methods for design of a tape-spring hinge has shown promise as a means of designing strain-energy components for more complex space concepts.
Kinias, Zoe; Sim, Jessica
2016-11-01
Two field experiments examined if and how values affirmations can ameliorate stereotype threat-induced gender performance gaps in an international competitive business environment. Based on self-affirmation theory (Steele, 1988), we predicted that writing about personal values unrelated to the perceived threat would attenuate the gender performance gap. Study 1 found that an online assignment to write about one's personal values (but not a similar writing assignment including organizational values) closed the gender gap in course grades by 89.0% among 423 Masters of Business Administration students (MBAs) at an international business school. Study 2 replicated this effect among 396 MBAs in a different cohort with random assignment and tested 3 related mediators (self-efficacy, self-doubt, and self-criticism). Personal values reflection (but not reflecting on values including those of the organization or writing about others' values) reduced the gender gap by 66.5%, and there was a significant indirect effect through reduced self-doubt. These findings show that a brief personal values writing exercise can dramatically improve women's performance in competitive environments where they are negatively stereotyped. The results also demonstrate that stereotype threat (Steele & Aronson, 1995) can occur within a largely non-American population with work experience and that affirming one's core personal values (without organizational values) can ameliorate the threat. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
ERIC Educational Resources Information Center
Hedberg, E. C.; Hedges, Larry V.
2014-01-01
Randomized experiments are often considered the strongest designs to study the impact of educational interventions. Perhaps the most prevalent class of designs used in large scale education experiments is the cluster randomized design in which entire schools are assigned to treatments. In cluster randomized trials (CRTs) that assign schools to…
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Do not assign a value from the technology incentive designated range. (2) Modifications to contract type risk (Block 24 of the DD Form 1547). Use a designated range of −1 percent to 0 percent instead of... performance risk (Blocks 21-23 of the DD Form 1547). (i) If the contracting officer assigns a value from the...
Probabilistic classifiers with high-dimensional data
Kim, Kyung In; Simon, Richard
2011-01-01
For medical classification problems, it is often desirable to have a probability associated with each class. Probabilistic classifiers have received relatively little attention for small-n, large-p classification problems despite their importance in medical decision making. In this paper, we introduce 2 criteria for assessment of probabilistic classifiers, well-calibratedness and refinement, and develop corresponding evaluation measures. We evaluated several published high-dimensional probabilistic classifiers and developed 2 extensions of the Bayesian compound covariate classifier. Based on simulation studies and analysis of gene expression microarray data, we found that proper probabilistic classification is more difficult than deterministic classification. It is important to ensure that a probabilistic classifier is well calibrated, or at least not “anticonservative”, using the methods developed here. We provide this evaluation for several probabilistic classifiers and also evaluate their refinement as a function of sample size under weak and strong signal conditions. We also present a cross-validation method for evaluating the calibration and refinement of any probabilistic classifier on any data set. PMID:21087946
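Well-calibratedness can be checked empirically by binning predicted probabilities and comparing them with observed event rates; the sketch below (our own minimal version, not the paper's measures) does this together with the Brier score.

```python
def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

def calibration_table(probs, labels, n_bins=5):
    """Per-bin (mean predicted probability, observed event rate, count)."""
    rows = []
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, p in enumerate(probs)
               if lo <= p < hi or (b == n_bins - 1 and p == 1.0)]
        if idx:
            mean_p = sum(probs[i] for i in idx) / len(idx)
            rate = sum(labels[i] for i in idx) / len(idx)
            rows.append((round(mean_p, 3), round(rate, 3), len(idx)))
    return rows

# Tiny illustrative predictions: a well-calibrated classifier would show
# bin-wise event rates close to the mean predicted probabilities.
probs = [0.1, 0.1, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9]
labels = [0, 0, 0, 1, 1, 1, 1, 1]
print(brier_score(probs, labels))
for row in calibration_table(probs, labels):
    print(row)
```

An "anticonservative" classifier in the abstract's sense would show observed event rates systematically more extreme than its stated probabilities, which this table makes visible bin by bin.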
Sharpless, K E; Gill, L M
2000-01-01
A number of food-matrix reference materials (RMs) are available from the National Institute of Standards and Technology (NIST) and from Agriculture Canada through NIST. Most of these materials were originally value-assigned for their elemental composition (major, minor, and trace elements), but no additional nutritional information was provided. Two of the materials were certified for selected organic constituents. Ten of these materials (Standard Reference Material [SRM] 1563 Cholesterol and Fat-Soluble Vitamins in Coconut Oil [Natural and Fortified], SRM 1566b Oyster Tissue, SRM 1570a Spinach Leaves, SRM 1974a Organics in Mussel Tissue (Mytilus edulis), RM 8415 Whole Egg Powder, RM 8418 Wheat Gluten, RM 8432 Corn Starch, RM 8433 Corn Bran, RM 8435 Whole Milk Powder, and RM 8436 Durum Wheat Flour) were recently distributed by NIST to 4 laboratories with expertise in food analysis for the measurement of proximates (solids, fat, protein, etc.), calories, and total dietary fiber, as appropriate. SRM 1846 Infant Formula was distributed as a quality control sample for the proximates and for analysis for individual fatty acids. Two of the materials (Whole Egg Powder and Whole Milk Powder) were distributed in an earlier interlaboratory comparison exercise in which they were analyzed for several vitamins. Value assignment of analyte concentrations in these 11 SRMs and RMs, based on analyses by the collaborating laboratories, is described in this paper. These materials are intended primarily for validation of analytical methods for the measurement of nutrients in foods of similar composition (based on AOAC INTERNATIONAL's fat-protein-carbohydrate triangle). They may also be used as "primary control materials" in the value assignment of in-house control materials of similar composition.
The addition of proximate information for 10 existing reference materials means that RMs are now available from NIST with assigned values for proximates in 6 of the 9 sectors of the AOAC triangle. Five of these materials have values assigned for total dietary fiber-the first such information provided for materials available from NIST.
Dupont, Nana; Fertner, Mette; Kristensen, Charlotte Sonne; Toft, Nils; Stege, Helle
2016-05-03
Transparent calculation methods are crucial when investigating trends in antimicrobial consumption over time and between populations. Until 2011, one single standardized method was applied when quantifying the Danish pig antimicrobial consumption with the unit "Animal Daily Dose" (ADD). However, two new methods for assigning values for ADDs have recently emerged, one implemented by DANMAP, responsible for publishing annual reports on antimicrobial consumption, and one by the Danish Veterinary and Food Administration (DVFA), responsible for the Yellow Card initiative. In addition to new ADD assignment methods, Denmark has also experienced a shift in the production pattern, towards a larger export of live pigs. The aims of this paper were to (1) describe previous and current ADD assignment methods used by the major Danish institutions and (2) to illustrate how ADD assignment method and choice of population and population measurement affect the calculated national antimicrobial consumption in pigs (2007-2013). The old VetStat ADD-values were based on SPCs in contrast to the new ADD-values, which were based on active compound, concentration and administration route. The new ADD-values stated by both DANMAP and DVFA were only identical for 48 % of antimicrobial products approved for use in pigs. From 2007 to 2013, the total number of ADDs per year increased by 9 % when using the new DVFA ADD-values, but decreased by 2 and 7 % when using the new DANMAP ADD-values or the old VetStat ADD-values, respectively. Through 2007 to 2013, the production of pigs increased from 26.1 million pigs per year with 18 % exported live to 28.7 million with 34 % exported live. In the same time span, the annual pig antimicrobial consumption increased by 22.2 %, when calculated using the new DVFA ADD-values and pigs slaughtered per year as population measurement (13.0 ADDs/pig/year to 15.9 ADDs/pig/year). 
However, when based on the old VetStat ADD values and pigs produced per year (including live export), a 10.9 % decrease was seen (10.6 ADDs/pig/year to 9.4 ADDs/pig/year). The findings of this paper clearly highlight that calculated national antimicrobial consumption is highly affected by chosen population measurement and the applied ADD-values.
Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny
2015-01-01
Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
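The regression trick described above can be demonstrated in a few lines: regress each option's PSA net-benefit samples on a summary of the simulated study data, then take the expected maximum of the fitted values. The sketch below uses a toy two-option model of our own construction, not the authors' full implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20_000                                # PSA sample size

# Toy decision model: incremental net benefit of option B vs A depends on an
# uncertain effect theta; the proposed study measures theta with noise (n=50).
theta = rng.normal(0.5, 1.0, M)           # parameter draws (the PSA sample)
nb_a = np.zeros(M)                        # net benefit of option A (baseline)
nb_b = 1000.0 * theta                     # net benefit of option B
xbar = theta + rng.normal(0.0, 2.0 / np.sqrt(50), M)  # simulated study mean

# Regress each option's net benefit on the data summary (cubic basis).
X = np.vander(xbar, 4)                    # columns: x^3, x^2, x, 1
fit_b = X @ np.linalg.lstsq(X, nb_b, rcond=None)[0]
fit_a = np.zeros(M)

current = max(nb_a.mean(), nb_b.mean())   # value of deciding now
evsi = np.maximum(fit_a, fit_b).mean() - current
evpi = np.maximum(nb_a, nb_b).mean() - current
print(0.0 <= evsi <= evpi)
```

Note that no posterior sampling and no model reruns are needed: only the PSA sample and the simulated data summaries enter the calculation, which is exactly the abstract's point.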
A Proposed Probabilistic Extension of the Halpern and Pearl Definition of ‘Actual Cause’
2017-01-01
ABSTRACT Joseph Halpern and Judea Pearl ([2005]) draw upon structural equation models to develop an attractive analysis of ‘actual cause’. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation. Contents: 1 Introduction; 2 Preemption; 3 Structural Equation Models; 4 The Halpern and Pearl Definition of ‘Actual Cause’; 5 Preemption Again; 6 The Probabilistic Case; 7 Probabilistic Causal Models; 8 A Proposed Probabilistic Extension of Halpern and Pearl’s Definition; 9 Twardy and Korb’s Account; 10 Probabilistic Fizzling; 11 Conclusion. PMID:29593362
NASA Astrophysics Data System (ADS)
Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei
2018-02-01
For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequences at each frequency point of pure-seawater images in the Fourier domain, being more stable than the gray-value sequences of each background pixel in the spatial domain, are each modeled as a Gaussian. Next, a probability weighting matrix is built, based on the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, false-alarm points are removed utilizing ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with other methods.
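The per-frequency Gaussian model is straightforward to prototype on one-dimensional signals (a simplified sketch with synthetic "sea clutter", not the authors' pipeline): learn the mean and variance of each amplitude-spectrum bin over background frames, then flag bins whose z-score in a test frame is large.

```python
import numpy as np

rng = np.random.default_rng(2)
n, frames = 128, 200

# Synthetic background: broadband noise observed over many frames.
background = rng.normal(0.0, 1.0, (frames, n))
amps = np.abs(np.fft.rfft(background, axis=1))   # amplitude spectra per frame

mu = amps.mean(axis=0)                 # per-frequency Gaussian parameters
sigma = amps.std(axis=0) + 1e-9        # guard against zero variance

# Test frame: background plus a sinusoidal 'target' concentrated at bin 10.
t = np.arange(n)
test = rng.normal(0.0, 1.0, n) + 3.0 * np.sin(2 * np.pi * 10 * t / n)
z = (np.abs(np.fft.rfft(test)) - mu) / sigma     # z-score per frequency bin

foreground_bins = np.flatnonzero(z > 4.0)        # flagged frequency points
print(foreground_bins)
```

The real method additionally weights frequency points by the stability of the total energy spectrum and prunes false alarms with shape features, neither of which is modeled here.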
Territories typification technique with use of statistical models
NASA Astrophysics Data System (ADS)
Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.
2018-05-01
Typification of territories is required for the solution of many problems. The results of geological zoning obtained by various methods do not always agree. The main goal of this research is therefore to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, a probabilistic approach was used. In order to increase the reliability of geological information classification, the authors suggest using the complex multidimensional probabilistic indicator P_K as a criterion of the classification. The second criterion chosen is the multidimensional standard classified indicator Z. Both can serve as characteristics of classification in geological-engineering zoning. The above-mentioned indicators P_K and Z correlate well: correlation coefficient values for the entire territory, regardless of structural solidity, equal r = 0.95, so either indicator can be used in geological-engineering zoning. The suggested method has been tested and a schematic zoning map has been drawn.
Probabilistic Fatigue Damage Program (FATIG)
NASA Technical Reports Server (NTRS)
Michalopoulos, Constantine
2012-01-01
FATIG computes fatigue damage/fatigue life using the stress rms (root mean square) value, the total number of cycles, and S-N curve parameters. The damage is computed by the following methods: (a) the traditional method using Miner's rule, with stress cycles determined from a Rayleigh distribution up to 3*sigma; and (b) the classical fatigue damage formula involving the Gamma function, which is derived from the integral version of Miner's rule. The integration is carried out over all stress amplitudes. This software solves the problem of probabilistic fatigue damage using the integral form of the Palmgren-Miner rule. The software computes fatigue life using an approach involving all stress amplitudes, up to N*sigma, as specified by the user. It can be used in the design of structural components subjected to random dynamic loading, or by any stress analyst with minimal training for fatigue life estimates of structural components.
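The Gamma-function form mentioned above follows from integrating Miner's rule over the Rayleigh amplitude distribution; the sketch below (illustrative S-N constants of our choosing, not values from the program) checks the closed form against direct numerical integration.

```python
import math

def damage_closed_form(n_cycles, sigma, A, b):
    """Narrow-band fatigue damage for S-N curve N(S) = A * S**-b and
    Rayleigh-distributed stress amplitudes with scale sigma:
    D = (n / A) * (sigma * sqrt(2))**b * Gamma(1 + b/2)."""
    return n_cycles / A * (sigma * math.sqrt(2)) ** b * math.gamma(1 + b / 2)

def damage_numeric(n_cycles, sigma, A, b, steps=200_000, smax_sigmas=12):
    """Same damage by integrating (S**b / A) against the Rayleigh pdf."""
    ds = smax_sigmas * sigma / steps
    total = 0.0
    for k in range(1, steps + 1):
        s = k * ds
        pdf = (s / sigma ** 2) * math.exp(-s * s / (2 * sigma ** 2))
        total += s ** b * pdf * ds
    return n_cycles / A * total

d1 = damage_closed_form(1e6, sigma=40.0, A=1e12, b=3.0)
d2 = damage_numeric(1e6, sigma=40.0, A=1e12, b=3.0)
print(round(d1, 4), round(d2, 4))
```

Truncating the amplitude range (the 3*sigma or N*sigma limits mentioned in the abstract) simply shortens the integration interval in `damage_numeric`, which is how the two program options differ.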
bnstruct: an R package for Bayesian Network structure learning in the presence of missing data.
Franzin, Alberto; Sambo, Francesco; Di Camillo, Barbara
2017-04-15
A Bayesian Network is a probabilistic graphical model that encodes probabilistic dependencies between a set of random variables. We introduce bnstruct, an open source R package to (i) learn the structure and the parameters of a Bayesian Network from data in the presence of missing values and (ii) perform reasoning and inference on the learned Bayesian Networks. To the best of our knowledge, there is no other open source software that provides methods for all of these tasks, particularly the manipulation of missing data, which is a common situation in practice. The software is implemented in R and C and is available on CRAN under a GPL licence. francesco.sambo@unipd.it. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
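bnstruct itself is an R package, but what a Bayesian Network encodes can be illustrated in a few lines of Python: a joint distribution factorized into conditional probability tables, plus inference by enumeration. The two-node network and its CPT values below are hypothetical:

```python
# A toy two-node network Rain -> WetGrass, with hand-set CPTs (illustrative only)
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    # Factorization encoded by the network: P(R, W) = P(R) * P(W | R)
    return p_rain[rain] * p_wet_given_rain[rain][wet]

def p_rain_given_wet(wet=True):
    # Inference by enumeration: P(R=true | W) = P(R=true, W) / P(W)
    num = joint(True, wet)
    den = sum(joint(r, wet) for r in (True, False))
    return num / den
```

Structure learning, which bnstruct performs, is the harder inverse problem: recovering which such factorization best explains the observed (possibly incomplete) data.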
Heuristic and optimal policy computations in the human brain during sequential decision-making.
Korn, Christoph W; Bach, Dominik R
2018-01-23
Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging, in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. fMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
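The core of probabilistic brittle-material design is the Weibull strength model, under which the failure probability of a uniformly stressed specimen is Pf = 1 - exp(-(σ/σ0)^m). A minimal sketch with illustrative parameters (CARES/Life additionally handles multiaxial stress states, size scaling, and time-dependent slow crack growth):

```python
import math

def weibull_pf(stress, sigma0, m):
    # Two-parameter Weibull probability of failure for a uniformly stressed
    # specimen: Pf = 1 - exp(-(stress / sigma0)^m), where m is the Weibull
    # modulus (low m = wide strength scatter, typical of ceramics)
    return 1.0 - math.exp(-((stress / sigma0) ** m))

# Hypothetical ceramic: characteristic strength 300 MPa, modulus 10
pf_at_250 = weibull_pf(250.0, 300.0, 10.0)
```

At the characteristic strength σ0 the model predicts Pf = 1 - 1/e ≈ 63.2%, which is why design stresses for ceramics sit far below σ0.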
A quantum probability perspective on borderline vagueness.
Blutner, Reinhard; Pothos, Emmanuel M; Bruza, Peter
2013-10-01
The term "vagueness" describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno's sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental findings and extends Alxatib and Pelletier's theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier, and of its probabilistic pendant, is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can then be explained as a quantum interference phenomenon. © 2013 Cognitive Science Society, Inc.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The fourth year of technical developments on the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) system for Probabilistic Structural Analysis Methods is summarized. The effort focused on the continued expansion of the Probabilistic Finite Element Method (PFEM) code, the implementation of the Probabilistic Boundary Element Method (PBEM), and the implementation of the Probabilistic Approximate Methods (PAppM) code. The principal focus for the PFEM code is the addition of a multilevel structural dynamics capability. The strategy includes probabilistic loads, treatment of material and geometry uncertainty, and full probabilistic variables. Enhancements are included for the Fast Probability Integration (FPI) algorithms, and Monte Carlo simulation is added as an alternative. Work on the expert system and boundary element developments continues. The enhanced capability in the computer codes is validated by applications to a turbine blade and to an oxidizer duct.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of an assignment or encumbrance include any representations of any nature? 51.95 Section 51.95 Parks... encumbrance include any representations of any nature? In approving an assignment or encumbrance, the Director... any nature to any person about any matter, including, but not limited to, the value, allocation, or...
Code of Federal Regulations, 2014 CFR
2014-07-01
... of an assignment or encumbrance include any representations of any nature? 51.95 Section 51.95 Parks... encumbrance include any representations of any nature? In approving an assignment or encumbrance, the Director... any nature to any person about any matter, including, but not limited to, the value, allocation, or...
Code of Federal Regulations, 2013 CFR
2013-07-01
... of an assignment or encumbrance include any representations of any nature? 51.95 Section 51.95 Parks... encumbrance include any representations of any nature? In approving an assignment or encumbrance, the Director... any nature to any person about any matter, including, but not limited to, the value, allocation, or...
Code of Federal Regulations, 2010 CFR
2010-07-01
... of an assignment or encumbrance include any representations of any nature? 51.95 Section 51.95 Parks... encumbrance include any representations of any nature? In approving an assignment or encumbrance, the Director... any nature to any person about any matter, including, but not limited to, the value, allocation, or...
Code of Federal Regulations, 2011 CFR
2011-07-01
... of an assignment or encumbrance include any representations of any nature? 51.95 Section 51.95 Parks... encumbrance include any representations of any nature? In approving an assignment or encumbrance, the Director... any nature to any person about any matter, including, but not limited to, the value, allocation, or...
Probabilistic seismic hazard estimates incorporating site effects - An example from Indiana, U.S.A
Hasse, J.S.; Park, C.H.; Nowack, R.L.; Hill, J.R.
2010-01-01
The U.S. Geological Survey (USGS) has published probabilistic earthquake hazard maps for the United States based on current knowledge of past earthquake activity and geological constraints on earthquake potential. These maps for the central and eastern United States assume standard site conditions with S-wave velocities of 760 m/s in the top 30 m. For urban and infrastructure planning and long-term budgeting, the public is interested in similar probabilistic seismic hazard maps that take into account near-surface geological materials. We have implemented a probabilistic method for incorporating site effects into the USGS seismic hazard analysis that takes into account the first-order effects of the surface geologic conditions. The thicknesses of sediments, which play a large role in amplification, were derived from a P-wave refraction database with over 13,000 profiles, and a preliminary geology-based velocity model was constructed from available information on S-wave velocities. An interesting feature of the preliminary hazard maps incorporating site effects is an approximate factor-of-two increase in the 1-Hz spectral acceleration with 2 percent probability of exceedance in 50 years for parts of the greater Indianapolis metropolitan region and surrounding parts of central Indiana. This effect is primarily due to the relatively thick sequence of sediments infilling ancient bedrock topography that has been deposited since the Pleistocene Epoch. As expected, the Late Pleistocene and Holocene depositional systems of the Wabash and Ohio Rivers produce additional amplification in the southwestern part of Indiana. Ground motions decrease, as would be expected, toward the bedrock units in south-central Indiana, where motions are significantly lower than the values on the USGS maps.
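Under the Poisson occurrence assumption used in such hazard maps, the "2 percent probability of exceedance in 50 years" level translates directly into an annual exceedance rate and a return period:

```python
import math

def annual_rate_from_poe(poe, years):
    # Poisson assumption: P_exceed = 1 - exp(-lambda * t)
    # => lambda = -ln(1 - P_exceed) / t
    return -math.log(1.0 - poe) / years

lam = annual_rate_from_poe(0.02, 50.0)
return_period = 1.0 / lam  # the familiar ~2475-year return period
```

This is why the 2%-in-50-years design level is often quoted interchangeably as the "2475-year" ground motion.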
Oh-Descher, Hanna; Beck, Jeffrey M; Ferrari, Silvia; Sommer, Marc A; Egner, Tobias
2017-11-15
Real-life decision-making often involves combining multiple probabilistic sources of information under finite time and cognitive resources. To mitigate these pressures, people "satisfice", foregoing a full evaluation of all available evidence to focus on a subset of cues that allow for fast and "good-enough" decisions. Although this form of decision-making likely mediates many of our everyday choices, very little is known about the way in which the neural encoding of cue information changes when we satisfice under time pressure. Here, we combined human functional magnetic resonance imaging (fMRI) with a probabilistic classification task to characterize neural substrates of multi-cue decision-making under low (1500 ms) and high (500 ms) time pressure. Using variational Bayesian inference, we analyzed participants' choices to track and quantify cue usage under each experimental condition, which was then applied to model the fMRI data. Under low time pressure, participants performed near-optimally, appropriately integrating all available cues to guide choices. Both cortical (prefrontal and parietal cortex) and subcortical (hippocampal and striatal) regions encoded individual cue weights, and activity linearly tracked trial-by-trial variations in the amount of evidence and decision uncertainty. Under increased time pressure, participants adaptively shifted to using a satisficing strategy by discounting the least informative cue in their decision process. This strategic change in decision-making was associated with an increased involvement of the dopaminergic midbrain, striatum, thalamus, and cerebellum in representing and integrating cue values. We conclude that satisficing the probabilistic inference process under time pressure leads to a cortical-to-subcortical shift in the neural drivers of decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
ECHO: A reference-free short-read error correction algorithm
Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.
2011-01-01
Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625
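ECHO's actual method couples read overlaps with a probabilistic model of base quality; the basic idea of reference-free correction can still be sketched with a much cruder position-wise majority vote over reads assumed to cover the same locus (the function name and threshold below are invented for illustration):

```python
from collections import Counter

def consensus_correct(reads, min_support=2):
    # Simplified, reference-free correction by position-wise majority vote.
    # ECHO itself finds overlaps between reads and fits a probabilistic
    # error model; this sketch assumes same-length, pre-aligned reads.
    corrected = []
    for read in reads:
        out = []
        for i, base in enumerate(read):
            votes = Counter(r[i] for r in reads)  # bases seen at position i
            top, n = votes.most_common(1)[0]
            # Replace the base only when the consensus is well supported
            out.append(top if n >= min_support else base)
        corrected.append(''.join(out))
    return corrected
```

With one sequencing error among three overlapping reads, the erroneous base is outvoted and repaired; a real corrector must also handle heterozygous sites, where a 50/50 split is biology rather than error.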
Seismic Hazard analysis of Adjaria Region in Georgia
NASA Astrophysics Data System (ADS)
Jorjiashvili, Nato; Elashvili, Mikheil
2014-05-01
The most commonly used approach to determining seismic-design loads for engineering projects is probabilistic seismic-hazard analysis (PSHA). The primary output from a PSHA is a hazard curve showing the variation of a selected ground-motion parameter, such as peak ground acceleration (PGA) or spectral acceleration (SA), against the annual frequency of exceedance (or its reciprocal, the return period). The design value is the ground-motion level that corresponds to a preselected design return period. For many engineering projects, such as standard buildings and typical bridges, the seismic loading is taken from the appropriate seismic-design code, the basis of which is usually a PSHA. For more important engineering projects, where the consequences of failure are more serious (such as dams and chemical plants), it is more usual to obtain the seismic-design loads from a site-specific PSHA, in general using much longer return periods than those governing code-based design. Calculation of probabilistic seismic hazard was performed using the software CRISIS2007 by Ordaz, M., Aguilar, A., and Arboleda, J., Instituto de Ingeniería, UNAM, Mexico. CRISIS implements a classical probabilistic seismic hazard methodology where seismic sources can be modelled as points, lines, and areas. In the case of area sources, the software offers an integration procedure that takes advantage of a triangulation algorithm used for seismic source discretization. This solution improves calculation efficiency while maintaining a reliable description of source geometry and seismicity. Additionally, supplementary filters (e.g., fixing a site-source distance beyond which sources are excluded from the calculation) allow the program to balance precision and efficiency during hazard calculation.
Earthquake temporal occurrence is assumed to follow a Poisson process, and the code facilitates two types of MFDs: a truncated exponential Gutenberg-Richter [1944] magnitude distribution and a characteristic magnitude distribution [Youngs and Coppersmith, 1985]. Notably, the software can deal with uncertainty in the seismicity input parameters, such as the maximum magnitude value. CRISIS offers a set of built-in GMPEs, as well as the possibility of defining new ones by providing information in a tabular format. Our study shows that in the case of the Ajaristkali HPP study area, a significant contribution to seismic hazard comes from local sources with quite low Mmax values; consequently, the two attenuation laws give quite different PGA and SA values.
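The truncated exponential Gutenberg-Richter distribution mentioned above gives the probability that an event of magnitude at least m_min exceeds magnitude m; a minimal sketch (β = b·ln 10, with illustrative b-value and magnitude bounds):

```python
import math

def gr_exceedance(m, m_min, m_max, beta):
    # Truncated exponential Gutenberg-Richter distribution: probability that
    # an event with M >= m_min exceeds magnitude m (beta = b * ln(10)).
    # Above the truncation magnitude m_max the exceedance probability is zero.
    if m >= m_max:
        return 0.0
    num = math.exp(-beta * (m - m_min)) - math.exp(-beta * (m_max - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return num / den

# Illustrative source: b = 1.0 (beta ~ 2.3026), Mmin = 4.5, Mmax = 7.5
p_above_6 = gr_exceedance(6.0, 4.5, 7.5, 2.3026)
```

The low-Mmax local sources noted in the study correspond to a small (m_max - m_min) range, which concentrates the distribution at small magnitudes.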
2010-09-01
… region were defined as a background. The flux concentrators were drawn as solid pieces and assigned the material properties of permalloy (NiFe), with … assigning properties (coercivity, permeability, etc.) to the objects, assigning boundaries or sources, seeding the objects and creating a mesh, and then … a permeability of 5000, as that is a value readily achieved in this material. The material properties assigned to this background are those of a …
Secrets in the eyes of Black Oystercatchers: A new sexing technique
Guzzetti, B.M.; Talbot, S.L.; Tessler, D.F.; Gill, V.A.; Murphy, E.C.
2008-01-01
Sexing oystercatchers in the field is difficult because males and females have identical plumage and are similar in size. Although Black Oystercatchers (Haematopus bachmani) are sexually dimorphic, using morphology to determine sex requires either capturing both pair members for comparison or using discriminant analyses to assign sex probabilistically based on morphometric traits. All adult Black Oystercatchers have bright yellow eyes, but some of them have dark specks, or eye flecks, in their irides. We hypothesized that this easily observable trait was sex-linked and could be used as a novel diagnostic tool for identifying sex. To test this, we compared data for oystercatchers from genetic molecular markers (CHD-W/CHD-Z and HINT-W/HINT-Z), morphometric analyses, and eye-fleck category (full eye flecks, slight eye flecks, and no eye flecks). Compared to molecular markers, we found that discriminant analyses based on morphological characteristics yielded variable results that were confounded by geographical differences in morphology. However, we found that eye flecks were sex-linked. Using an eye-fleck model where all females have full eye flecks and males have either slight eye flecks or no eye flecks, we correctly assigned the sex of 117 of 125 (94%) oystercatchers. Using discriminant analysis based on morphological characteristics, we correctly assigned the sex of 105 of 119 (88%) birds. Using the eye-fleck technique for sexing Black Oystercatchers may be preferable for some investigators because it is as accurate as discriminant analysis based on morphology and does not require capturing the birds. ©2008 Association of Field Ornithologists.
Uncertain deduction and conditional reasoning
Evans, Jonathan St. B. T.; Thompson, Valerie A.; Over, David E.
2015-01-01
There has been a paradigm shift in the psychology of deductive reasoning. Many researchers no longer think it is appropriate to ask people to assume premises and decide what necessarily follows, with the results evaluated by binary extensional logic. Most everyday and scientific inference is made from more or less confidently held beliefs and not assumptions, and the relevant normative standard is Bayesian probability theory. We argue that the study of “uncertain deduction” should directly ask people to assign probabilities to both premises and conclusions, and we report an experiment using this method. We assess this reasoning by two Bayesian metrics: probabilistic validity and coherence according to probability theory. On both measures, participants perform above chance in conditional reasoning, but they do much better when statements are grouped as inferences rather than evaluated in separate tasks. PMID:25904888
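Probabilistic coherence for a conditional inference such as modus ponens can be made concrete: given P(p) and P(q|p), the law of total probability constrains P(q) to an interval, and a judged conclusion probability outside that interval is incoherent. A small sketch using those textbook definitions (not the authors' exact scoring procedure):

```python
def mp_conclusion_bounds(p_p, p_q_given_p):
    # Coherence interval for P(q) under modus ponens.
    # Total probability: P(q) = P(p)P(q|p) + P(not p)P(q|not p),
    # with the unknown P(q|not p) free to range over [0, 1].
    low = p_p * p_q_given_p
    high = low + (1.0 - p_p)
    return low, high

def is_coherent(p_p, p_q_given_p, p_q):
    # A participant's P(q) is coherent iff it falls inside the interval
    low, high = mp_conclusion_bounds(p_p, p_q_given_p)
    return low <= p_q <= high
```

For example, with P(p) = 0.8 and P(q|p) = 0.9, any stated P(q) outside [0.72, 0.92] violates probability theory.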
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Advancing Usability Evaluation through Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman
2005-07-01
This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety-critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
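The UEP computation can be sketched as a nominal error probability scaled by performance-shaping-factor multipliers. This is a simplification: the full SPAR-H method also applies an adjustment formula when several negative PSFs combine, which is omitted here, and all numbers below are illustrative:

```python
def usability_error_probability(nominal_hep, psf_multipliers):
    # SPAR-H-style adjustment: nominal human error probability scaled by the
    # product of performance-shaping-factor multipliers, capped at 1.0.
    # Multipliers > 1 degrade performance; multipliers < 1 improve it.
    p = nominal_hep
    for m in psf_multipliers:
        p *= m
    return min(p, 1.0)

# Hypothetical example: nominal HEP of 0.001, with two usability heuristics
# violated (multipliers 2 and 5, chosen purely for illustration)
uep = usability_error_probability(0.001, [2.0, 5.0])
```

Pairing each UEP with a consequence severity then yields a ranked list of usability issues, as the paper proposes.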
Evaluation of Goal Programming for the Optimal Assignment of Inspectors to Construction Projects
1988-09-01
[Indexing fragments only; the surviving table-of-contents entries are "Inputs", "Equation Coefficients", "Weights, Priorities and the AHP", "Right-Hand Side Values", "Structuring the AHP Hierarchy with k Levels", "Sample Matrix for Pairwise Comparison", "Assignment of l and p for Example Problem", "Weights for Example Problem", and "AHP Weights and Coefficient ci Values", followed by the opening of the abstract (AFIT/GEM/LSM/88S-16): "The purpose of this study was …"]
Adaptive Decision Making Using Probabilistic Programming and Stochastic Optimization
2018-01-01
[Indexing fragments only: "… world optimization problems …"; "Pred. demand (uncertain; discrete …"; "… to simplify the setting, we further assume that the demands are discrete, taking on values d1, …, dk with probabilities (conditional on x) (pθ)i ≡ p…"; plus reference fragments (R. Tyrrell Rockafellar, Implicit Functions and Solution Mappings, Springer Monogr. Math., 2009; Anthony V. Fiacco and Yo Ishizuka, sensitivity and stability …).]
Is probabilistic bias analysis approximately Bayesian?
MacLehose, Richard F.; Gustafson, Paul
2011-01-01
Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure classification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
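The deterministic core of one bias-analysis iteration is the back-correction of observed counts for a given sensitivity and specificity; probabilistic bias analysis repeats this with Se and Sp drawn from prior distributions. A sketch of the single-draw correction for one 2x2 case-control table (the counts below are invented):

```python
def corrected_counts(a_obs, b_obs, se, sp):
    # Back-correct exposed (a) / unexposed (b) counts in one study arm for
    # exposure misclassification with sensitivity se and specificity sp:
    #   a_obs = se * a_true + (1 - sp) * b_true
    # Solving with n = a_true + b_true gives the standard correction below.
    n = a_obs + b_obs
    a_true = (a_obs - (1.0 - sp) * n) / (se - (1.0 - sp))
    return a_true, n - a_true

def corrected_or(case_counts, control_counts, se, sp):
    # Odds ratio from misclassification-corrected cell counts
    a, b = corrected_counts(*case_counts, se, sp)
    c, d = corrected_counts(*control_counts, se, sp)
    return (a * d) / (b * c)

# Hypothetical data: 40/60 exposed/unexposed cases, 20/80 controls
crude = corrected_or((40, 60), (20, 80), se=1.0, sp=1.0)  # perfect classification
adjusted = corrected_or((40, 60), (20, 80), se=0.9, sp=0.95)
```

A full probabilistic bias analysis would wrap this in a sampling loop over (se, sp) pairs and summarize the resulting distribution of adjusted odds ratios.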
Carriger, John F; Barron, Mace G
2011-09-15
Decision science tools can be used in evaluating response options and making inferences on risks to ecosystem services (ES) from ecological disasters. Influence diagrams (IDs) are probabilistic networks that explicitly represent the decisions related to a problem and their influence on desired or undesired outcomes. To examine how IDs might be useful in probabilistic risk management for spill response efforts, an ID was constructed to display the potential interactions between exposure events and the trade-offs between costs and ES impacts from spilled oil and response decisions in the DWH spill event. Quantitative knowledge was not formally incorporated but an ID platform for doing this was examined. Probabilities were assigned for conditional relationships in the ID and scenarios examining the impact of different response actions on components of spilled oil were investigated in hypothetical scenarios. Given the structure of the ID, potential knowledge gaps included understanding of the movement of oil, the ecological risk of different spill-related stressors to key receptors (e.g., endangered species, fisheries), and the need for stakeholder valuation of the ES benefits that could be impacted by a spill. Framing the Deepwater Horizon problem domain in an ID conceptualized important variables and relationships that could be optimally accounted for in preparing and managing responses in future spills. These features of the developed IDs may assist in better investigating the uncertainty, costs, and the trade-offs if large-scale, deep ocean spills were to occur again.
A three-way approach for protein function classification.
Ur Rehman, Hafeez; Azam, Nouman; Yao, JingTao; Benso, Alfredo
2017-01-01
The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. The technological advancements in the field of biology are improving our understanding of biological processes and are regularly resulting in new features and characteristics that better describe the role of proteins. These anticipated features cannot be neglected or overlooked in designing more effective classification techniques. A key issue in this context, which is not being sufficiently addressed, is how to build effective classification models and approaches for protein function prediction by incorporating and taking advantage of the ever-evolving biological information. In this article, we propose a three-way decision making approach which provides provisions for seeking and incorporating future information. We considered probabilistic rough sets based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture of protein function classification with probabilistic rough sets based three-way decisions is proposed and explained. Experiments are carried out on the Saccharomyces cerevisiae species dataset obtained from the Uniprot database, with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while maintaining a similar level of accuracy.
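The three-way decision rule itself is simple to state: given an estimated class-membership probability and a pair of thresholds (α, β), a case is accepted, rejected, or deferred for future evidence. A minimal sketch with hand-set thresholds (in GTRS/ITRS the thresholds are derived from game- or information-theoretic criteria rather than chosen by hand):

```python
def three_way_decision(p, alpha=0.75, beta=0.25):
    # Probabilistic rough-set style three-way rule:
    # accept membership if p >= alpha, reject if p <= beta, otherwise defer
    if p >= alpha:
        return "accept"
    if p <= beta:
        return "reject"
    return "defer"

# Illustrative membership probabilities for three proteins
decisions = [three_way_decision(p) for p in (0.9, 0.5, 0.1)]
```

As richer biological features sharpen the membership probabilities toward 0 or 1, fewer cases land in the deferral band, matching the trend the experiments report.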
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
Probabilistic structural analysis methods for select space propulsion system components
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.
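The contrast between a fast probability integrator and Monte Carlo sampling can be illustrated on the simplest case, a linear limit state g = R - S with independent normal strength R and load S, where the analytic answer Pf = Φ(-β) is exact (a toy stand-in for NESSUS's far more general FPI; all parameter values are invented):

```python
import math
import random

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pf_form_linear(mu_r, sd_r, mu_s, sd_s):
    # For g = R - S with independent normal R, S, the reliability index is
    # beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2), and Pf = Phi(-beta) exactly
    beta = (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)
    return phi(-beta)

def pf_monte_carlo(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=1):
    # Brute-force alternative: count sampled failures (g < 0)
    rng = random.Random(seed)
    fails = sum(
        1 for _ in range(n)
        if rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0
    )
    return fails / n

analytic = pf_form_linear(500.0, 50.0, 300.0, 40.0)
simulated = pf_monte_carlo(500.0, 50.0, 300.0, 40.0)
```

For small failure probabilities the analytic route needs one evaluation while Monte Carlo needs hundreds of thousands of samples for comparable accuracy, which is the efficiency argument behind FPI methods; for nonlinear limit states and non-normal variables, FORM-style approximations are no longer exact.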
Control of Risks Through the Use of Procedures: A Method for Evaluating the Change in Risk
NASA Technical Reports Server (NTRS)
Praino, Gregory T.; Sharit, Joseph
2010-01-01
This paper considers how procedures can be used to control risks faced by an organization and proposes a means of recognizing if a particular procedure reduces risk or contributes to the organization's exposure. The proposed method was developed out of the review of work documents and the governing procedures performed in the wake of the Columbia accident by NASA and the Space Shuttle prime contractor, United Space Alliance, LLC. A technique was needed to understand the rules, or procedural controls, in place at the time in the context of how important the role of each rule was. The proposed method assesses procedural risks, the residual risk associated with a hazard after a procedure's influence is accounted for, by considering each clause of a procedure as a unique procedural control that may be beneficial or harmful. For procedural risks with consequences severe enough to threaten the survival of the organization, the method measures the characteristics of each risk on a scale that is an alternative to the traditional consequence/likelihood couple. The dual benefits of the substitute scales are that they eliminate both the need to quantify a relationship between different consequence types and the need for the extensive history a probabilistic risk assessment would require. Control Value is used as an analog for the consequence, where the value of a rule is based on how well the control reduces the severity of the consequence when operating successfully. This value is composed of two parts: the inevitability of the consequence in the absence of the control, and the opportunity to intervene before the consequence is realized. High value controls will be ones where there is minimal need for intervention but maximum opportunity to actively prevent the outcome. Failure Likelihood is used as the substitute for the conventional likelihood of the outcome. 
For procedural controls, a failure is considered to be any non-malicious violation of the rule, whether intended or not. The model used for describing the Failure Likelihood considers how well a task was established by evaluating that task on five components. The components selected to define a well-established task are: that it be defined, assigned to someone capable, that they be trained appropriately, that the actions be organized to enable proper completion, and that some form of independent monitoring be performed. Validation of the method was based on the information provided by a group of experts in Space Shuttle ground processing when they were presented with 5 scenarios that identified a clause from a procedure. For each scenario, they recorded their perception of how important the associated rule was and how likely it was to fail. They then rated the components of Control Value and Failure Likelihood for all the scenarios. The order in which each reviewer ranked the scenarios' Control Value and Failure Likelihood was compared to the order in which they ranked the scenarios for each of the associated components: inevitability and opportunity for Control Value, and definition, assignment, training, organization, and monitoring for Failure Likelihood. This order comparison showed how the components contributed to a relative relationship to the substitute risk element. With the relationship established for Space Shuttle ground processing, this method can be used to gauge whether the introduction or removal of a particular rule will increase or decrease the risk associated with the hazard it is intended to control.
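The order-comparison step in the validation can be sketched with a rank correlation: each reviewer's ordering of the scenarios on a substitute risk element is compared with their ordering on one of its components. The rankings below are invented, and Spearman's rho is an assumed stand-in for whatever comparison measure the study used.

```python
# Sketch of the order comparison: how closely does a component's scenario
# ranking track the ranking on the substitute risk element (Control Value)?

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two rankings without ties."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical reviewer rankings of 5 scenarios (1 = highest).
control_value_rank = [1, 2, 3, 4, 5]
inevitability_rank = [1, 3, 2, 4, 5]   # one Control Value component
opportunity_rank   = [2, 1, 3, 5, 4]   # the other Control Value component

rho_inev = spearman_rho(control_value_rank, inevitability_rank)
rho_opp = spearman_rho(control_value_rank, opportunity_rank)
```

A rho near 1 would indicate that the component's ordering closely follows the substitute risk element's ordering.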
Frontal and Parietal Contributions to Probabilistic Association Learning
Rushby, Jacqueline A.; Vercammen, Ans; Loo, Colleen; Short, Brooke
2011-01-01
Neuroimaging studies have shown both dorsolateral prefrontal (DLPFC) and inferior parietal cortex (iPARC) activation during probabilistic association learning. Whether these cortical brain regions are necessary for probabilistic association learning is presently unknown. Participants' ability to acquire probabilistic associations was assessed during disruptive 1 Hz repetitive transcranial magnetic stimulation (rTMS) of the left DLPFC, left iPARC, and sham using a crossover single-blind design. On subsequent sessions, performance improved relative to baseline except during DLPFC rTMS that disrupted the early acquisition beneficial effect of prior exposure. A second experiment examining rTMS effects on task-naive participants showed that neither DLPFC rTMS nor sham influenced naive acquisition of probabilistic associations. A third experiment examining consecutive administration of the probabilistic association learning test revealed early trial interference from previous exposure to different probability schedules. These experiments, showing disrupted acquisition of probabilistic associations by rTMS only during subsequent sessions with an intervening night's sleep, suggest that the DLPFC may facilitate early access to learned strategies or prior task-related memories via consolidation. Although neuroimaging studies implicate DLPFC and iPARC in probabilistic association learning, the present findings suggest that early acquisition of the probabilistic cue-outcome associations in task-naive participants is not dependent on either region. PMID:21216842
Crowdsourcing a Collective Sense of Place
Jenkins, Andrew; Croitoru, Arie; Crooks, Andrew T.; Stefanidis, Anthony
2016-01-01
Place can be generally defined as a location that has been assigned meaning through human experience, and as such it is of multidisciplinary scientific interest. Up to this point place has been studied primarily within the context of social sciences as a theoretical construct. The availability of large amounts of user-generated content, e.g. in the form of social media feeds or Wikipedia contributions, allows us for the first time to computationally analyze and quantify the shared meaning of place. By aggregating references to human activities within urban spaces we can observe the emergence of unique themes that characterize different locations, thus identifying places through their discernible sociocultural signatures. In this paper we present results from a novel quantitative approach to derive such sociocultural signatures from Twitter contributions and also from corresponding Wikipedia entries. By contrasting the two we show how particular thematic characteristics of places (referred to herein as platial themes) are emerging from such crowd-contributed content, allowing us to observe the meaning that the general public, either individually or collectively, is assigning to specific locations. Our approach leverages probabilistic topic modelling, semantic association, and spatial clustering to find locations that convey a collective sense of place. Deriving and quantifying such meaning allows us to observe how people transform a location to a place and shape its characteristics. PMID:27050432
Development of probabilistic internal dosimetry computer code
NASA Astrophysics Data System (ADS)
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki
2017-02-01
Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was designed. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. 
In cases of severe internal exposure, the causation probability of a deterministic health effect can be derived from the dose distribution, and a high statistical value (e.g., the 95th percentile of the distribution) can be used to determine the appropriate intervention. The distribution-based sensitivity analysis can also be used to quantify the contribution of each factor to the dose uncertainty, which is essential information for reducing and optimizing the uncertainty in the internal dose assessment. Therefore, the present study can contribute to retrospective dose assessment for accidental internal exposure scenarios, as well as to internal dose monitoring optimization and uncertainty reduction.
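The kind of Monte Carlo propagation such a code performs can be sketched minimally as follows. The lognormal error model, the geometric standard deviations, the measured activity, and the dose coefficient are all illustrative assumptions, not values from the study.

```python
# Hedged sketch: propagate measurement and biokinetic-model uncertainty
# (both assumed lognormal) from a bioassay result into a dose distribution,
# then read off percentile statistics as the described code does.
import math
import random

random.seed(1)

def simulate_dose(measured_activity_bq, dose_per_intake_sv_per_bq,
                  measurement_gsd=1.3, model_gsd=1.5, n=20000):
    """Draw committed doses with lognormal measurement and model errors."""
    doses = []
    for _ in range(n):
        meas = measured_activity_bq * math.exp(
            random.gauss(0.0, math.log(measurement_gsd)))
        model_factor = math.exp(random.gauss(0.0, math.log(model_gsd)))
        doses.append(meas * dose_per_intake_sv_per_bq * model_factor)
    return sorted(doses)

# Hypothetical bioassay result (Bq) and dose coefficient (Sv/Bq).
doses = simulate_dose(500.0, 2e-8)
median_dose = doses[len(doses) // 2]
p95_dose = doses[int(0.95 * len(doses))]
```

The 95th percentile exceeding the median quantifies the margin that a conservative intervention decision would rely on.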
ERIC Educational Resources Information Center
Moni, Roger W.; Moni, Karen B.; Poronnik, Philip
2007-01-01
The teaching of highly valued scientific writing skills in the first year of university is challenging. This report describes the design, implementation, and evaluation of a novel written assignment, "The Personal Response" and accompanying Peer Review, in the course, Human Biology (BIOL1015) at The University of Queensland. These assignments were…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutherland, J. G. H.; Miksys, N.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca
2014-01-15
Purpose: To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Methods: Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for ¹²⁵I, ¹⁰³Pd, and ¹³¹Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Results: Metallic artifact mitigation techniques vary in ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts but residual artifacts near sources remain, requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. 
Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra, with the largest differences for ¹⁰³Pd seeds and smallest but still considerable differences for ¹³¹Cs seeds. Conclusions: Despite producing differences in CT images, the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
NASA Astrophysics Data System (ADS)
Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.
2015-12-01
The Earthquake Research Committee (ERC)/HERP, Government of Japan (2013) revised their long-term evaluation of the forthcoming large earthquake along the Nankai Trough; the next earthquake is estimated to be of M8 to M9 class, and the probability (P30) that the next earthquake will occur within the next 30 years (from Jan. 1, 2013) is 60% to 70%. In this study, we assess tsunami hazards (maximum coastal tsunami heights) in the near future, in terms of a probabilistic approach, from the next earthquake along the Nankai Trough, on the basis of ERC (2013)'s report. The probabilistic tsunami hazard assessment that we applied is as follows: (1) Characterized earthquake fault models (CEFMs) are constructed on each of the 15 hypothetical source areas (HSA) that ERC (2013) showed. The characterization rule follows Toyama et al. (2015, JpGU). As a result, we obtained a total of 1441 CEFMs. (2) We calculate tsunamis due to CEFMs by solving nonlinear, finite-amplitude, long-wave equations with advection and bottom friction terms by a finite-difference method. Run-up computation on land is included. (3) A time-predictable model predicts that the recurrence interval of the present seismic cycle is T = 88.2 years (ERC, 2013). We fix P30 = 67% by applying the renewal process based on a BPT distribution with T and alpha = 0.24 as its aperiodicity. (4) We divide the probability P30 into P30(i) for the i-th subgroup, consisting of the earthquakes occurring in each of the 15 HSA, by following a probability re-distribution concept (ERC, 2014). Then each earthquake (CEFM) in the i-th subgroup is assigned a probability P30(i)/N, where N is the number of CEFMs in each sub-group. Note that such a re-distribution concept of the probability is only tentative, because present seismology cannot yet provide knowledge deep enough to justify it. An epistemic logic-tree approach may be required in the future. 
(5) We synthesize a number of tsunami hazard curves at every evaluation point on the coasts by integrating the information about the 30-year occurrence probabilities P30(i) for all earthquakes (CEFMs) and the calculated maximum coastal tsunami heights. In the synthesis, aleatory uncertainties relating to incompleteness of the governing equations, CEFM modeling, bathymetry and topography data, etc., are modeled assuming a log-normal probabilistic distribution. Examples of tsunami hazard curves will be presented.
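Steps (4)-(5) can be sketched numerically. In this hedged example, the fixed P30 is re-distributed over invented source-area weights, split equally among each area's CEFMs, and combined with an assumed lognormal aleatory model to evaluate two points of a hazard curve at one coastal site; every weight, height, and sigma below is illustrative.

```python
# Toy synthesis of a 30-year tsunami hazard curve from per-CEFM probabilities
# and computed coastal heights, with lognormal aleatory scatter.
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exceedance_prob(h, median_height, ln_sigma):
    """P(height > h) under a lognormal aleatory model around the CEFM height."""
    return 1.0 - normal_cdf((math.log(h) - math.log(median_height)) / ln_sigma)

P30 = 0.67                              # fixed 30-year probability, step (3)
area_weights = [0.5, 0.3, 0.2]          # re-distribution over 3 invented areas
cefm_heights = [[4.0, 6.0], [3.0], [8.0, 5.0, 2.0]]  # metres, per area

def hazard_curve_point(h, ln_sigma=0.35):
    """30-year probability that the maximum coastal height exceeds h."""
    p_no_exceed = 1.0
    for w, heights in zip(area_weights, cefm_heights):
        p_each = P30 * w / len(heights)  # equal split within the sub-group
        for hm in heights:
            p_no_exceed *= 1.0 - p_each * exceedance_prob(h, hm, ln_sigma)
    return 1.0 - p_no_exceed

p_2m = hazard_curve_point(2.0)
p_10m = hazard_curve_point(10.0)
```

Evaluating the function over a grid of heights traces out the full hazard curve; the exceedance probability falls off as the height threshold rises.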
Probabilistic Ontology Architecture for a Terrorist Identification Decision Support System
2014-06-01
…in real-world problems requires probabilistic ontologies, which integrate the inferential reasoning power of probabilistic representations with the first-order expressivity of ontologies. The Reference Architecture for… Keywords: probabilistic ontology, terrorism, inferential reasoning, architecture. I. INTRODUCTION. A. Background. Whether by nature or design, the personas of terrorists are…
Probabilistic Evaluation of Blade Impact Damage
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Abumeri, G. H.
2003-01-01
The response to high-velocity impact of a composite blade is probabilistically evaluated. The evaluation is focused on quantifying probabilistically the effects of uncertainties (scatter) in the variables that describe the impact, the blade make-up (geometry and material), the blade response (displacements, strains, stresses, frequencies), the blade residual strength after impact, and the blade damage tolerance. The results of the probabilistic evaluations are given in terms of probability cumulative distribution functions and probabilistic sensitivities. Results show that the blade has relatively low damage tolerance at 0.999 probability of structural failure and substantial damage tolerance at 0.01 probability.
Probabilistic Simulation of Stress Concentration in Composite Laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, D. G.
1994-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors (SCF's) in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, whereas the finite element is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate SCF's, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the SCF's in three different composite laminates. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the SCF's are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Coefficients of conservatism for the vascular flora of the Dakotas and adjacent grasslands
2001-01-01
Floristic quality assessment can be used to identify natural areas, to facilitate comparisons among different sites, to provide long-term monitoring of natural area quality, and to evaluate habitat management and restoration efforts. To facilitate the use of floristic quality assessment in North Dakota, South Dakota (excluding the Black Hills), and adjacent grasslands, we developed a species list and assigned coefficients of conservatism (C values; range 0 to 10) to each plant species in the region's flora. The C values we assigned represented our collective knowledge of the patterns of occurrence of each plant species in the Dakotas and our confidence that a particular taxon is natural-area dependent. Because state boundaries usually do not follow ecological boundaries, the C values we assigned should be equally valid in nearby areas with the same vegetation types. Of the 1,584 taxa we recognized in this effort, 275 (17%) were determined to be nonnative to the region. We assigned C values of 4 or higher to 77% of our taxa, and the entire native flora had a mean C value of 6.1. A floristic quality index (FQI) can be calculated to rank sites in order of their floristic quality. By applying the coefficients of conservatism supplied here and calculating the mean C value and FQI, an effective means of evaluating the quality of plant communities can be obtained. Additionally, by repeating plant surveys and calculations of the mean C value and FQI over time, temporal changes in floristic quality can be identified.
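The index calculation mentioned above can be sketched with the commonly used definition FQI = (mean C) × √N, where N is the number of native taxa recorded at a site; the abstract does not state the exact formula, so this definition is an assumption, and the species and C values below are invented.

```python
# Hedged sketch of a floristic quality calculation for one hypothetical site.
import math

# Hypothetical site survey: species -> coefficient of conservatism (C value);
# nonnative taxa carry no C value and are excluded from the index.
site_c_values = {
    "Andropogon gerardii": 5,
    "Bouteloua curtipendula": 7,
    "Bromus inermis": None,    # nonnative: excluded
    "Dalea purpurea": 8,
}

native_c = [c for c in site_c_values.values() if c is not None]
mean_c = sum(native_c) / len(native_c)
fqi = mean_c * math.sqrt(len(native_c))
```

Repeating the survey over time and recomputing mean C and FQI gives the temporal monitoring the abstract describes.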
Probabilistic Change of Wheat Productivity and Water Use in China
NASA Astrophysics Data System (ADS)
Liu, Yujie; Chen, Qiaomin
2017-04-01
Impacts of climate change on agriculture are a major concern worldwide, but uncertainties of climate models and emission scenarios may hamper efforts to adapt to climate change. In this paper, a probabilistic approach is used to estimate the uncertainties and simulate impacts of global warming on wheat production and water use in the main wheat cultivation regions of China, with a global mean temperature (GMT) increase scale relative to 1961-90 values. From output of 20 climate scenarios of the Intergovernmental Panel on Climate Change Data Distribution Centre, median values of projected changes in monthly mean climate variables for representative stations are adapted. These are used to drive the Crop Environment Resource Synthesis (CERES)-Wheat model to simulate wheat production and water use under baseline and global warming scenarios, with and without consideration of carbon dioxide (CO2) fertilization effects. Results show that, because of temperature increase, projected wheat-growing periods for GMT changes of 1 °C, 2 °C, and 3 °C would shorten, with averaged median values of 3.94%, 6.90%, and 9.67%, respectively. There is a high probability of decreasing (increasing) changes in yield and water-use efficiency under higher temperature scenarios without (with) consideration of CO2 fertilization effects. Elevated CO2 concentration generally compensates for the negative effects of warming temperatures on production. Moreover, positive effects of elevated CO2 concentration on grain yield increase with warming temperatures. The findings could be critical for climate-change-driven agricultural production that ensures global food security.
Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.; ...
2015-10-01
Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.
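The greedy loop described above can be sketched in miniature: each grid node accumulates fitness over the remaining arrivals, the best node becomes a hypothetical event, consistent arrivals are associated and removed, and the search repeats. The fitness logic and all data below are fabricated stand-ins for the conditional-probability fitness of the real method.

```python
# Toy grid-search association: repeat while unassociated arrivals remain.

def associate_events(nodes, arrivals, fitness_fn, min_fitness=1.0):
    """Greedily form events at the fittest node until no node qualifies."""
    events = []
    remaining = list(arrivals)
    while remaining:
        # Sum station-specific fitness contributions at every grid node.
        best = max(nodes, key=lambda n: sum(fitness_fn(n, a) for a in remaining))
        fitness = sum(fitness_fn(best, a) for a in remaining)
        if fitness < min_fitness:
            break  # no node is fit enough to host another event
        assoc = [a for a in remaining if fitness_fn(best, a)]
        events.append((best, assoc))
        remaining = [a for a in remaining if a not in assoc]  # remove arrivals
    return events

# Toy data: nodes are 1-D positions, arrivals are observed values; an arrival
# is "consistent" with a node when the two are close.
nodes = [0.0, 10.0, 20.0]
arrivals = [0.2, 0.1, 19.8, 20.3, 9.0]
events = associate_events(nodes, arrivals,
                          fitness_fn=lambda n, a: abs(n - a) < 2.0,
                          min_fitness=2.0)
```

With these numbers, two events form (near 0.0 and 20.0) and the lone arrival at 9.0 stays unassociated because its node's fitness falls below the threshold.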
NASA Astrophysics Data System (ADS)
Fatimah, F.; Rosadi, D.; Hakim, R. B. F.
2018-03-01
In this paper, we motivate and introduce probabilistic soft sets and dual probabilistic soft sets for handling decision-making problems in the presence of positive and negative parameters. We propose several types of algorithms related to this problem. Our procedures are flexible and adaptable. An example on real data is also given.
Learning Probabilistic Logic Models from Probabilistic Examples
Chen, Jianzhong; Muggleton, Stephen; Santos, José
2009-01-01
We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples. PMID:19888348
Probabilistic simulation of uncertainties in thermal structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Shiao, Michael
1990-01-01
Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.
Structure based alignment and clustering of proteins (STRALCP)
Zemla, Adam T.; Zhou, Carol E.; Smith, Jason R.; Lam, Marisa W.
2013-06-18
Disclosed are computational methods of clustering a set of protein structures based on local and pair-wise global similarity values. Pair-wise local and global similarity values are generated based on pair-wise structural alignments for each protein in the set of protein structures. Initially, the protein structures are clustered based on pair-wise local similarity values. The protein structures are then clustered based on pair-wise global similarity values. For each given cluster both a representative structure and spans of conserved residues are identified. The representative protein structure is used to assign newly-solved protein structures to a group. The spans are used to characterize conservation and assign a "structural footprint" to the cluster.
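The threshold-based grouping step can be sketched as single-linkage clustering over a pair-wise similarity matrix, implemented with union-find. The similarity values and threshold below are invented; the real method derives its scores from structural alignments and applies the grouping twice, first on local and then on global similarity values.

```python
# Hedged sketch: single-linkage clustering of structures whose pair-wise
# similarity exceeds a threshold, via union-find.

def cluster(items, sim, threshold):
    """Group items; any pair with sim >= threshold joins the same cluster."""
    parent = {i: i for i in items}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in items:
        for j in items:
            if i < j and sim[(i, j)] >= threshold:
                parent[find(j)] = find(i)  # union the two clusters

    groups = {}
    for i in items:
        groups.setdefault(find(i), set()).add(i)
    return sorted(groups.values(), key=min)

# Invented pair-wise local similarity scores for four hypothetical proteins.
proteins = ["A", "B", "C", "D"]
local_sim = {("A", "B"): 0.9, ("A", "C"): 0.2, ("A", "D"): 0.1,
             ("B", "C"): 0.3, ("B", "D"): 0.1, ("C", "D"): 0.8}
clusters = cluster(proteins, local_sim, threshold=0.7)
```

A second pass of the same routine, applied within each cluster using global similarity scores, would mirror the two-stage procedure the record describes.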
Classic articles and workbook: EPRI monographs on simulation of electric power production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, J.P.
1991-12-01
This monograph republishes several articles including a seminal one on probabilistic production costing for electric power generation. That article is given in the original French along with an English translation. Another article, written by R. Booth, gives a popular explanation of the theory, and a workbook by B. Manhire is included that carries through a simple example step by step. The classical analysis of non-probabilistic generator dispatch by L.K. Kirchmayer is republished along with an introductory essay by J.P. Stremel that puts the monograph material in perspective. The article in French was written by H. Baleriaux, E. Jamoulle, and Fr. Linard de Guertechin and first published in Brussels in 1967. It derived a method for calculating the expected value of production costs by modifying a load duration curve through the use of probability factors that account for unplanned random generator outages. Although the paper showed how pump storage plants could be included and how linear programming could be applied, the convolution technique used in the probabilistic calculations is the part most widely applied. The tutorial paper by Booth was written in a light style, and its lucidity helped popularize the method. The workbook by Manhire also shows how the calculation can be shortened significantly using cumulants to approximate the load duration curve.
A model-based test for treatment effects with probabilistic classifications.
Cavagnaro, Daniel R; Davis-Stober, Clintin P
2018-05-21
Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to type-1 errors than Fisher's exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher's exact test. We provide code in an online supplement that allows researchers to apply the method to their own data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Slob, Wout
2006-07-01
Probabilistic dietary exposure assessments that are fully based on Monte Carlo sampling from the raw intake data may not be appropriate. This paper shows that the data should first be analysed by using a statistical model that is able to take the various dimensions of food consumption patterns into account. A (parametric) model is discussed that takes into account the interindividual variation in (daily) consumption frequencies, as well as in amounts consumed. Further, the model can be used to include covariates, such as age, sex, or other individual attributes. Some illustrative examples show how this model may be used to estimate the probability of exceeding an (acute or chronic) exposure limit. These results are compared with the results based on directly counting the fraction of observed intakes exceeding the limit value. This comparison shows that the latter method is not adequate, in particular for the acute exposure situation. A two-step approach for probabilistic (acute) exposure assessment is proposed: first analyse the consumption data by a (parametric) statistical model as discussed in this paper, and then use Monte Carlo techniques for combining the variation in concentrations with the variation in consumption (by sampling from the statistical model). This approach results in an estimate of the fraction of the population as a function of the fraction of days at which the exposure limit is exceeded by the individual.
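The proposed two-step approach can be sketched as follows. All distributional choices and numbers (the Beta/lognormal shapes, the 0.1 mg/day limit) are hypothetical illustrations of the idea, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 -- a parametric consumption model: each individual i has a daily
# consumption probability p_i ~ Beta(2, 8) (inter-individual variation in
# frequency) and a personal median amount m_i (lognormal inter-individual
# variation); day-to-day amounts scatter around m_i.
n_ind, n_days = 2000, 365
p = rng.beta(2.0, 8.0, n_ind)                                # frequency
m = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n_ind)  # g/day median

# Step 2 -- Monte Carlo: combine sampled consumption with variable
# concentrations and count limit exceedances per individual.
limit = 0.1  # mg/day, hypothetical acute exposure limit
exceed_frac = np.empty(n_ind)
for i in range(n_ind):
    eats = rng.random(n_days) < p[i]
    amount = np.where(eats, m[i] * rng.lognormal(0.0, 0.5, n_days), 0.0)
    conc = rng.lognormal(np.log(0.001), 0.8, n_days)   # mg/g, day-to-day
    exposure = amount * conc                            # mg/day
    exceed_frac[i] = np.mean(exposure > limit)

# Fraction of the population exceeding the limit on more than 1% of days:
pop_frac = np.mean(exceed_frac > 0.01)
```

This yields exactly the kind of output the abstract describes: the fraction of the population as a function of the fraction of days on which an individual exceeds the limit.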
A Probabilistic Approach to Uncertainty Analysis in NTPR Radiation Dose Assessments
2009-11-01
Zeman, members of the Subcommittee on Dose Reconstruction of the Veterans’ Advisory Board on Dose Reconstruction (VBDR) for their critical review of... production and decay of fission products, activation products, and actinides. (It is generally assumed in these calculations that no... histories”), and on each history selecting random values from each of the pdfs, they were able to conduct “numerical experiments” and derive critical
VizieR Online Data Catalog: A catalog of exoplanet physical parameters (Foreman-Mackey+, 2014)
NASA Astrophysics Data System (ADS)
Foreman-Mackey, D.; Hogg, D. W.; Morton, T. D.
2017-05-01
The first ingredient for any probabilistic inference is a likelihood function, a description of the probability of observing a specific data set given a set of model parameters. In this particular project, the data set is a catalog of exoplanet measurements and the model parameters are the values that set the shape and normalization of the occurrence rate density. (2 data files).
PREMChlor: Probabilistic Remediation Evaluation Model for Chlorinated Solvents
2010-03-01
Council O&M Operation & Management PAT pump-and-treat PCE tetrachloroethylene PDFs probability density functions PRBs permeable reactive barriers... includes a one-time capital cost and a total operation & management (O&M) cost in net present value (NPV) for a certain remediation period. The... Generally, the costs of plume treatment include the capital cost (treatment volume multiplied by the unit cost) and the annual operation & management (O&M
2017-12-01
Stated another way, one might feel less culpable for (a) failing to act when 8 Louis Pojman, Ethics...effectiveness. Rather, I mean that these schemata may have probabilistic predictive power when applied broadly, to groups or organizations rather than to...fundamental rights of the American people to national independence and sovereignty are principles derived from the above “self-evident truths.” These
Spatial forecasting of disease risk and uncertainty
De Cola, L.
2002-01-01
Because maps typically represent the value of a single variable over 2-dimensional space, cartographers must simplify the display of multiscale complexity, temporal dynamics, and underlying uncertainty. A choropleth disease risk map based on data for polygonal regions might depict incidence (cases per 100,000 people) within each polygon for a year but ignore the uncertainty that results from finer-scale variation, generalization, misreporting, small numbers, and future unknowns. In response to such limitations, this paper reports on the bivariate mapping of data "quantity" and "quality" of Lyme disease forecasts for states of the United States. Historical state data for 1990-2000 are used in an autoregressive model to forecast 2001-2010 disease incidence and a probability index of confidence, each of which is then kriged to provide two spatial grids representing continuous values over the nation. A single bivariate map is produced from the combination of the incidence grid (using a blue-to-red hue spectrum) and a probabilistic confidence grid (used to control the saturation of the hue at each grid cell). The resultant maps are easily interpretable, and the approach may be applied to such problems as detecting unusual disease occurrences, visualizing past and future incidence, and assembling a consistent regional disease atlas showing patterns of forecasted risks in light of probabilistic confidence.
NASA Astrophysics Data System (ADS)
Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing
2018-01-01
For fast and more effective tracking of multiple targets in a cluttered environment, we propose a multiple-target tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association, which combines fuzzy c-means clustering with the joint probabilistic data association (JPDA) algorithm. The algorithm uses the membership value to express the probability that a measurement originates from a target. The membership value is obtained from a fuzzy c-means clustering objective function optimized by the maximum entropy principle. To account for the effect of shared measurements, we use a correction factor to adjust the association probability matrix before estimating the state of each target. Because this algorithm avoids splitting the confirmation matrix, it overcomes the high computational load of the JPDA algorithm. Simulations and analysis conducted for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which ensure good performance when tracking multiple targets in a dense cluttered environment.
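The maximum-entropy membership computation can be sketched as follows. The Gibbs-type closed form below is the standard maximum-entropy solution of a fuzzy clustering objective; the regularization weight `lam` and the toy geometry are hypothetical, and the full algorithm's correction factor for shared measurements is omitted.

```python
import numpy as np

def me_fcm_memberships(measurements, predictions, lam=2.0):
    """Membership of each measurement j in each target cluster i, from a
    maximum-entropy fuzzy c-means objective: u_ij ∝ exp(-d_ij^2 / lam),
    normalized over targets (lam is a hypothetical temperature weight)."""
    d2 = ((measurements[None, :, :] - predictions[:, None, :]) ** 2).sum(-1)
    w = np.exp(-d2 / lam)
    return w / w.sum(axis=0, keepdims=True)   # columns sum to 1

# Two predicted target positions and three measurements (made-up numbers):
preds = np.array([[0.0, 0.0], [10.0, 0.0]])
meas = np.array([[0.2, -0.1], [9.8, 0.3], [5.0, 0.0]])
U = me_fcm_memberships(meas, preds)
```

A measurement near a predicted position gets a membership close to 1 for that target, while the equidistant measurement at (5, 0) is split evenly, exactly the soft association behavior the abstract describes.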
Analytical probabilistic proton dose calculation and range uncertainties
NASA Astrophysics Data System (ADS)
Bangert, M.; Hennig, P.; Oelfke, U.
2014-03-01
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two-dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
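The key analytical tractability can be checked numerically in one dimension: with d(z) a weighted sum of normalized Gaussians and p(z) = N(z; z0, σ²), the expected dose is E[d] = Σ_k ω_k · N(z0; μ_k, δ_k² + σ²). The component values below are made up for illustration, not a fitted proton depth-dose curve.

```python
import numpy as np

def gauss(z, mu, s2):
    """Normalized Gaussian density with mean mu and variance s2."""
    return np.exp(-(z - mu) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

w  = np.array([0.5, 0.3, 0.2])      # weights  ω_k (hypothetical)
mu = np.array([5.0, 8.0, 10.0])     # means    μ_k (cm)
d2 = np.array([1.0, 0.5, 0.2])      # widths²  δ_k²

z0, s2 = 7.0, 0.25                  # nominal depth and range variance σ²

# Closed form: each Gaussian-times-Gaussian integral is itself Gaussian.
expected_closed = np.sum(w * gauss(z0, mu, d2 + s2))

# Numerical check of the same integral ∫ p(z) d(z) dz:
z = np.linspace(-5, 25, 200001)
dz = z[1] - z[0]
p = gauss(z, z0, s2)
d = sum(wk * gauss(z, mk, dk) for wk, mk, dk in zip(w, mu, d2))
expected_num = np.sum(p * d) * dz
```

The two values agree to numerical precision, which is exactly what makes APM fast: no sampling or quadrature is needed at planning time.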
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheverton, R.D.; Dickson, T.L.; Merkle, J.G.
1992-03-01
The Yankee Atomic Electric Company has performed an Integrated Pressurized Thermal Shock (IPTS)-type evaluation of the Yankee Rowe reactor pressure vessel in accordance with the PTS Rule (10 CFR 50.61) and US Regulatory Guide 1.154. The Oak Ridge National Laboratory (ORNL) reviewed the YAEC document and performed an independent probabilistic fracture-mechanics analysis. The review included a comparison of the Pacific Northwest Laboratory (PNL) and ORNL probabilistic fracture-mechanics codes (VISA-II and OCA-P, respectively). The review identified minor errors and one significant difference in philosophy. Also, the two codes have a few dissimilar peripheral features. Aside from these differences, VISA-II and OCA-P are very similar and, with errors corrected and when adjusted for the difference in the treatment of the fracture toughness distribution through the wall, yield essentially the same value of the conditional probability of failure. The ORNL independent evaluation indicated RT{sub NDT} values considerably greater than those corresponding to the PTS-Rule screening criteria and a frequency of failure substantially greater than that corresponding to the "primary acceptance criterion" in US Regulatory Guide 1.154. Time constraints, however, prevented as rigorous a treatment as the situation deserves. Thus, these results are very preliminary.
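A minimal Monte Carlo sketch in the spirit of codes like VISA-II and OCA-P is shown below: sample flaw depth and fracture toughness, then estimate the conditional probability of failure as P(K_I ≥ K_Ic). All distributions and numbers are illustrative placeholders, not the Yankee Rowe inputs, and thermal-shock transients are ignored.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200_000
a = rng.exponential(scale=20.0, size=n)           # flaw depth, mm (assumed)
sigma = 300.0                                     # applied stress, MPa (fixed)
# Edge-crack stress intensity K_I = 1.12 * sigma * sqrt(pi * a), a in metres:
K_I = 1.12 * sigma * np.sqrt(np.pi * a / 1000.0)  # MPa*sqrt(m)
K_Ic = rng.normal(loc=150.0, scale=20.0, size=n)  # toughness, MPa*sqrt(m)

# Conditional probability of failure given the loading transient:
p_fail = np.mean(K_I >= K_Ic)
```

Real PTS codes layer onto this the through-wall toughness distribution, embrittlement shift RT_NDT, and transient frequencies; the difference in how VISA-II and OCA-P treat the toughness distribution is precisely where the review found the codes diverged.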
Billoir, Elise; Denis, Jean-Baptiste; Cammeau, Natalie; Cornu, Marie; Zuliani, Veronique
2011-02-01
To assess the impact of the manufacturing process on the fate of Listeria monocytogenes, we built a generic probabilistic model intended to simulate the successive steps in the process. Contamination evolution was modeled in the appropriate units (breasts, dice, and then packaging units through the successive steps in the process). To calibrate the model, parameter values were estimated from industrial data, from the literature, and based on expert opinion. By means of simulations, the model was explored using a baseline calibration and alternative scenarios, in order to assess the impact of changes in the process and of accidental events. The results are reported as contamination distributions and as the probability that the product will be acceptable with regard to the European regulatory safety criterion. Our results are consistent with data provided by industrial partners and highlight that tumbling is a key step for the distribution of the contamination at the end of the process. Process chain models could provide important added value for risk assessment models that basically consider only the outputs of the process in their risk mitigation strategies. Moreover, a model calibrated to correspond to a specific plant could be used to optimize surveillance. © 2010 Society for Risk Analysis.
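A generic process-chain model of this kind can be sketched by propagating contamination (in log10 CFU/g) through successive steps, each contributing a random log-change. The step names echo the abstract, but every parameter below is hypothetical, and the real model's unit changes (breasts to dice to packages) are collapsed into a single scale.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 100_000
log_c = rng.normal(-1.0, 0.8, n)   # initial contamination, log10 CFU/g

steps = [                           # (step, mean log10 change, sd) -- assumed
    ("dicing",    0.1, 0.2),
    ("tumbling",  0.5, 0.6),        # key redistribution step per the abstract
    ("packaging", 0.0, 0.1),
]
for name, mu, sd in steps:
    log_c = log_c + rng.normal(mu, sd, n)

# EU safety criterion for ready-to-eat foods: 100 CFU/g = 2 log10 CFU/g.
p_acceptable = np.mean(log_c < 2.0)
```

The output is the same pair of quantities the abstract reports: a final contamination distribution and the probability that the product meets the regulatory criterion; scenario analysis amounts to rerunning with altered step parameters.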
Female pelvic synthetic CT generation based on joint intensity and shape analysis
NASA Astrophysics Data System (ADS)
Liu, Lianli; Jolly, Shruti; Cao, Yue; Vineberg, Karen; Fessler, Jeffrey A.; Balter, James M.
2017-04-01
Using MRI for radiotherapy treatment planning and image guidance is appealing as it provides superior soft tissue information over CT scans and avoids possible systematic errors introduced by aligning MR to CT images. This study presents a method that generates Synthetic CT (MRCT) volumes by performing probabilistic tissue classification of voxels from MRI data using a single imaging sequence (T1 Dixon). The intensity overlap between different tissues on MR images, a major challenge for voxel-based MRCT generation methods, is addressed by adding bone shape information to an intensity-based classification scheme. A simple pelvic bone shape model, built from principal component analysis of pelvis shape from 30 CT image volumes, is fitted to the MR volumes. The shape model generates a rough bone mask that excludes air and covers bone along with some surrounding soft tissues. Air regions are identified and masked out from the tissue classification process by intensity thresholding outside the bone mask. A regularization term is added to the fuzzy c-means classification scheme that constrains voxels outside the bone mask from being assigned memberships in the bone class. MRCT image volumes are generated by multiplying the probability of each voxel being represented in each class with assigned attenuation values of the corresponding class and summing the result across all classes. The MRCT images presented intensity distributions similar to CT images with a mean absolute error of 13.7 HU for muscle, 15.9 HU for fat, 49.1 HU for intra-pelvic soft tissues, 129.1 HU for marrow and 274.4 HU for bony tissues across 9 patients. Volumetric modulated arc therapy (VMAT) plans were optimized using MRCT-derived electron densities, and doses were recalculated using corresponding CT-derived density grids. Dose differences to planning target volumes were small with mean/standard deviation of 0.21/0.42 Gy for D0.5cc and 0.29/0.33 Gy for D99%. 
The results demonstrate the accuracy of the method and its potential in supporting MRI-only radiotherapy treatment planning.
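The final MRCT step described above reduces to a probability-weighted sum: each voxel's synthetic value is the sum over tissue classes of membership probability times the class's assigned attenuation value. The class HU values below are illustrative round numbers, not the paper's calibration.

```python
import numpy as np

# Assigned attenuation values per class (hypothetical, in HU):
class_hu = np.array([40.0, -100.0, 300.0, 700.0])   # muscle, fat, marrow, bone

# Memberships: one row per voxel, rows sum to 1 (e.g. fuzzy c-means output).
memberships = np.array([
    [0.9, 0.1, 0.0, 0.0],   # mostly muscle
    [0.0, 0.0, 0.2, 0.8],   # mostly bone
])

# Synthetic CT value per voxel = probability-weighted sum of class values.
mrct_hu = memberships @ class_hu   # -> [26.0, 620.0]
```

The soft weighting is what lets partial-volume voxels (e.g. marrow/bone mixtures) receive intermediate HU rather than a hard class assignment.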
Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets
2015-04-24
Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets. Learning sparse feature representations is a useful instrument for solving an... novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets...
Principles for selecting earthquake motions in engineering design of large dams
Krinitzsky, E.L.; Marcuson, William F.
1983-01-01
This report gives a synopsis of the various tools and techniques used in selecting earthquake ground motion parameters for large dams. It presents 18 charts giving newly developed relations for acceleration, velocity, and duration versus site earthquake intensity for near- and far-field hard and soft sites and earthquakes having magnitudes above and below 7. The material for this report is based on procedures developed at the Waterways Experiment Station. Although these procedures are suggested primarily for large dams, they may also be applicable to other facilities. Because no standard procedure exists for selecting earthquake motions in engineering design of large dams, a number of precautions are presented to guide users. The selection of earthquake motions depends on which of two types of engineering analysis is performed. A pseudostatic analysis uses a coefficient usually obtained from an appropriate contour map, whereas a dynamic analysis uses either accelerograms assigned to a site or specified response spectra. Each type of analysis requires significantly different input motions. All selections of design motions must allow for the lack of representative strong-motion records, especially near-field motions from earthquakes of magnitude 7 and greater, as well as an enormous spread in the available data. Limited data must be projected and their spread bracketed in order to fill in the gaps and to assure that there will be no surprises. Because each site may have differing special characteristics in its geology, seismic history, attenuation, recurrence, interpreted maximum events, etc., an integrated approach gives the best results. Each part of the site investigation requires a number of decisions. In some cases, the decision to use a 'least work' approach may be suitable, simply assuming the worst of several possibilities and testing for it. Because there are no standard procedures to follow, multiple approaches are useful.
For example, peak motions at a site may be obtained from several methods that involve magnitude of earthquake, distance from source, and corresponding motions; or, alternately, peak motions may be assigned from other correlations based on earthquake intensity. Various interpretations exist to account for duration, recurrence, effects of site conditions, etc. Comparison of the various interpretations can be very useful. Probabilities can be assigned; however, they can present very serious problems unless appropriate care is taken when data are extrapolated beyond their data base. In making deterministic judgments, probabilistic data can provide useful guidance in estimating the uncertainties of the decision. The selection of a design ground motion for large dams is based in the end on subjective judgments which should depend, to an important extent, on the consequences of failure. Usually, use of a design value of ground motion representing a mean plus one standard deviation of possible variation in the mean of the data puts one in a conservative position. If failure presents no hazard to life, lower values of design ground motion may be justified, providing there are cost benefits and the risk is acceptable to the owner. Where a large hazard to life exists (i.e., a dam above an urbanized area) one may wish to use values of design ground motion that approximate the very worst case. The selection of a design ground motion must be appropriate for its particular set of circumstances.
Chisaki, Yugo; Nakamura, Nobuhiko; Yano, Yoshitaka
2017-01-01
The purpose of this study was to propose a time-series modeling and simulation (M&S) strategy for probabilistic cost-effectiveness analysis in cancer chemotherapy, using a Monte Carlo method based on data available from the literature. The simulation included the cost of chemotherapy, the cost of pharmaceutical care for adverse events (AEs), and other medical costs. As an application example, we describe the analysis comparing four regimens for advanced non-small cell lung cancer: cisplatin plus irinotecan, carboplatin plus paclitaxel, cisplatin plus gemcitabine (GP), and cisplatin plus vinorelbine. The model included drug efficacy, expressed as overall survival or time to treatment failure; the frequency and severity of AEs; the utility values of AEs used to determine QOL; and the drug and other medical costs in Japan. The simulation was performed, and quality-adjusted life years (QALY) and incremental cost-effectiveness ratios (ICER) were calculated. An index, the percentage of superiority (%SUP), defined as the proportion of incremental-cost vs. QALY-gained plots that lie in the region of positive QALY gain and below a given ICER threshold, was calculated as a function of the ICER threshold. An M&S process was developed, and in the simulation example the GP regimen was the most cost-effective: at an ICER threshold of $70,000/year, the %SUP for GP exceeded 50%. We developed an M&S process for probabilistic cost-effectiveness analysis; this method would be useful for decision-making when choosing a cancer chemotherapy regimen on pharmacoeconomic grounds.
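The %SUP index can be sketched directly on the cost-effectiveness plane: count the fraction of simulated (ΔQALY, Δcost) points with positive QALY gain that also fall below the ICER threshold line. The Monte Carlo draws below are hypothetical, standing in for the paper's regimen comparison.

```python
import numpy as np

rng = np.random.default_rng(3)

def percent_sup(d_qaly, d_cost, threshold):
    """%SUP: share of simulated (dQALY, dCost) points with positive QALY
    gain that also lie below the ICER threshold line dCost = lambda*dQALY."""
    ok = (d_qaly > 0) & (d_cost < threshold * d_qaly)
    return 100.0 * np.mean(ok)

# Hypothetical Monte Carlo draws comparing two regimens:
d_qaly = rng.normal(0.10, 0.05, 50_000)       # incremental QALYs
d_cost = rng.normal(5000.0, 2000.0, 50_000)   # incremental cost, $

sup_70k = percent_sup(d_qaly, d_cost, 70_000.0)
```

Sweeping `threshold` over a range of willingness-to-pay values traces out %SUP as a function of the ICER threshold, which is the curve the study reports.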
Epidemiologic research using probabilistic outcome definitions.
Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S
2015-01-01
Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the conventional method, whether using one predictive variable or both predictive variables to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and when it is not. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. This method has a major advantage over the conventional method in that it provides unbiased estimates of risk ratios and is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.
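The POD idea can be sketched as follows: rather than a hard algorithm-based outcome, each subject carries an estimated outcome probability (here simply given, standing in for a logistic regression over candidate algorithms), and the risk ratio is estimated by multiply imputing the binary outcome and pooling. All numbers and the simple pooling rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 10_000
exposed = rng.random(n) < 0.5
true_rr = 2.0
p_outcome = np.where(exposed, 0.10 * true_rr, 0.10)  # true risk model

# Pretend an imperfect pipeline returns noisy outcome probabilities:
p_hat = np.clip(p_outcome + rng.normal(0.0, 0.02, n), 0.001, 0.999)

m = 20                       # number of imputations
rrs = []
for _ in range(m):
    y = rng.random(n) < p_hat          # impute the binary outcome
    risk1 = y[exposed].mean()          # risk among exposed
    risk0 = y[~exposed].mean()         # risk among unexposed
    rrs.append(risk1 / risk0)
rr_pooled = float(np.mean(rrs))        # pooled point estimate near true_rr
```

A full analysis would also combine within- and between-imputation variance (Rubin's rules) for confidence intervals; only the point-estimate pooling is shown here.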
BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.
Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter
2013-02-01
Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data, without the necessity to assume a noise model. These methods have previously been combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise-induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph-based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is incorporated in the present work into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is then assigned to each edge. Two path-finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest-path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph-based algorithm. The BootGraph shows very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold.
By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of streamline tractography algorithms or the assumption of a noise distribution. Moreover, the BootGraph can be applied to common DTI data sets without further modifications and shows a high repeatability. Thus, it is very well suited for longitudinal studies and meta-studies based on DTI. Copyright © 2012 Elsevier Inc. All rights reserved.
Satellite Based Probabilistic Snow Cover Extent Mapping (SCE) at Hydro-Québec
NASA Astrophysics Data System (ADS)
Teasdale, Mylène; De Sève, Danielle; Angers, Jean-François; Perreault, Luc
2016-04-01
Over 40% of Canada's water resources are in Quebec, and Hydro-Québec has developed the potential to become one of the largest producers of hydroelectricity in the world, with a total installed capacity of 36,643 MW. The Hydro-Québec generating fleet includes 27 large reservoirs with a combined storage capacity of 176 TWh, as well as 668 dams and 98 control structures. Thus, over 98% of all electricity used to supply the domestic market comes from water resources, and the excess output is sold on the wholesale markets. In this perspective, efficient management of water resources is needed, based primarily on good river flow estimation and appropriate hydrological data. Snow on the ground is one of the significant variables, representing 30% to 40% of the annual energy reserve. More specifically, information on snow cover extent (SCE) and snow water equivalent (SWE) is crucial for hydrological forecasting, particularly in northern regions, since the snowmelt provides the water that fills the reservoirs and is subsequently used for hydropower generation. For several years, Hydro-Québec's research institute (IREQ) has developed several algorithms to map SCE and SWE. So far, all the methods have been deterministic. However, given the need to maximize the efficient use of all resources while ensuring reliability, electrical systems must now be managed taking all risks into account. Since snow cover estimation is based on limited spatial information, it is important to quantify and handle its uncertainty in the hydrological forecasting system. This paper presents the first results of a probabilistic algorithm for mapping SCE that combines a Bayesian mixture of probability distributions with multiple logistic regression models applied to passive microwave data. This approach allows assigning, for each grid point, probabilities to the mutually exclusive discrete outcomes "snow" and "no snow".
Its performance was evaluated using the Brier score, since it is particularly appropriate for measuring the accuracy of probabilistic discrete predictions. The scores were computed by comparing the snow probabilities produced by our models with Hydro-Québec's ground snow data.
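The Brier score used for this evaluation is simply the mean squared difference between forecast probabilities and binary outcomes. The forecast and observation values below are made-up illustrations.

```python
import numpy as np

def brier_score(prob_snow, observed):
    """Mean squared difference between forecast snow probabilities and the
    binary ground truth (1 = snow observed, 0 = no snow). 0 is perfect."""
    prob_snow = np.asarray(prob_snow, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean((prob_snow - observed) ** 2))

# Hypothetical grid-point forecasts vs. ground observations:
p = [0.9, 0.8, 0.2, 0.6, 0.1]
obs = [1, 1, 0, 0, 0]
bs = brier_score(p, obs)   # -> 0.092
```

A constant 0.5 forecast scores 0.25, so any skillful probabilistic SCE map should sit well below that benchmark.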
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainty or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
Weickert, Thomas W.; Goldberg, Terry E.; Egan, Michael F.; Apud, Jose A.; Meeter, Martijn; Myers, Catherine E.; Gluck, Mark A; Weinberger, Daniel R.
2010-01-01
Background While patients with schizophrenia display an overall probabilistic category learning performance deficit, the extent to which this deficit occurs in unaffected siblings of patients with schizophrenia is unknown. There are also discrepant findings regarding probabilistic category learning acquisition rate and performance in patients with schizophrenia. Methods A probabilistic category learning test was administered to 108 patients with schizophrenia, 82 unaffected siblings, and 121 healthy participants. Results Patients with schizophrenia displayed significant differences from their unaffected siblings and healthy participants with respect to probabilistic category learning acquisition rates. Although siblings on the whole failed to differ from healthy participants on strategy and quantitative indices of overall performance and learning acquisition, application of a revised learning criterion enabling classification into good and poor learners based on individual learning curves revealed significant differences between percentages of sibling and healthy poor learners: healthy (13.2%), siblings (34.1%), patients (48.1%), yielding a moderate relative risk. Conclusions These results clarify previous discrepant findings pertaining to probabilistic category learning acquisition rate in schizophrenia and provide the first evidence for the relative risk of probabilistic category learning abnormalities in unaffected siblings of patients with schizophrenia, supporting genetic underpinnings of probabilistic category learning deficits in schizophrenia. These findings also raise questions regarding the contribution of antipsychotic medication to the probabilistic category learning deficit in schizophrenia. The distinction between good and poor learning may be used to inform genetic studies designed to detect schizophrenia risk alleles. PMID:20172502
Probabilistic finite elements for fracture mechanics
NASA Technical Reports Server (NTRS)
Besterfield, Glen
1988-01-01
The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near-crack-tip singular strain embedded in the element is used. Probabilistic measures, such as the expectation, covariance, and correlation of stress intensity factors, are calculated for random load, random material properties, and random crack length. The method is computationally quite efficient and can be used to determine the probability of fracture or reliability.
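The moments described above can be illustrated with a simple Monte Carlo sketch. This is not the PFEM itself (which propagates randomness through the finite element formulation rather than by sampling); it merely shows how the expectation and variance of a stress intensity factor arise from random load and crack length, using the standard mode-I formula K_I = σ√(πa) for a through crack in an infinite plate. The distributions and their parameters below are hypothetical.

```python
import math
import random

def sif(stress, crack_len):
    """Mode-I stress intensity factor for a through crack in an
    infinite plate: K_I = sigma * sqrt(pi * a)."""
    return stress * math.sqrt(math.pi * crack_len)

def mc_sif_moments(n=5000, seed=0):
    """Monte Carlo estimate of the mean and variance of K_I under
    hypothetical random load (normal) and crack length (lognormal)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        stress = rng.gauss(100.0, 10.0)               # MPa, hypothetical
        a = rng.lognormvariate(math.log(0.01), 0.1)   # m, hypothetical
        samples.append(sif(stress, a))
    mean = sum(samples) / n
    var = sum((k - mean) ** 2 for k in samples) / (n - 1)
    return mean, var
```

With these (made-up) inputs the mean K_I lands near σ√(πa) ≈ 17.7 MPa·√m; a PFEM would obtain comparable moments without sampling.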
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion systems components
NASA Technical Reports Server (NTRS)
1991-01-01
Summarized here is the technical effort and computer code developed during the five year duration of the program for probabilistic structural analysis methods. The summary includes a brief description of the computer code manuals and a detailed description of code validation demonstration cases for random vibrations of a discharge duct, probabilistic material nonlinearities of a liquid oxygen post, and probabilistic buckling of a transfer tube liner.
A Hough Transform Global Probabilistic Approach to Multiple-Subject Diffusion MRI Tractography
2010-04-01
A global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced in this work, together with criteria for aligning curves and particularly tracts.
None of the above: A Bayesian account of the detection of novel categories.
Navarro, Daniel J; Kemp, Charles
2017-10-01
Every time we encounter a new object, action, or event, there is some chance that we will need to assign it to a novel category. We describe and evaluate a class of probabilistic models that detect when an object belongs to a category that has not previously been encountered. The models incorporate a prior distribution that is influenced by the distribution of previous objects among categories, and we present 2 experiments that demonstrate that people are also sensitive to this distributional information. Two additional experiments confirm that distributional information is combined with similarity when both sources of information are available. We compare our approach to previous models of unsupervised categorization and to several heuristic-based models, and find that a hierarchical Bayesian approach provides the best account of our data.
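A minimal sketch of the novel-category idea, under a Chinese-restaurant-process prior rather than the authors' richer hierarchical model: the prior probability that the next object comes from an unseen category depends on how many objects have been seen and on a concentration parameter alpha. Function names and the choice of a plain CRP are illustrative assumptions.

```python
def crp_new_category_prob(counts, alpha=1.0):
    """Prior probability, under a Chinese restaurant process with
    concentration alpha, that the next object belongs to a previously
    unseen category. `counts` gives the size of each known category."""
    n = sum(counts)
    return alpha / (n + alpha)

def crp_assignment_probs(counts, alpha=1.0):
    """Prior probabilities over the existing categories plus a novel
    one; existing categories are favoured in proportion to their size."""
    n = sum(counts)
    probs = [c / (n + alpha) for c in counts]
    probs.append(alpha / (n + alpha))
    return probs
```

Note that a plain CRP is sensitive only to the total count, not to how skewed the category sizes are; capturing the skew-sensitivity the paper reports requires the hierarchical treatment.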
Instruction in information structuring improves Bayesian judgment in intelligence analysts.
Mandel, David R
2015-01-01
An experiment was conducted to test the effectiveness of brief instruction in information structuring (i.e., representing and integrating information) for improving the coherence of probability judgments and binary choices among intelligence analysts. Forty-three analysts were presented with comparable sets of Bayesian judgment problems before and immediately after instruction. After instruction, analysts' probability judgments were more coherent (i.e., more additive and compliant with Bayes' theorem). Instruction also improved the coherence of binary choices regarding category membership: after instruction, subjects were more likely to invariably choose the category to which they assigned the higher probability of a target's membership. The research provides a rare example of evidence-based validation of the effectiveness of instruction to improve the statistical assessment skills of intelligence analysts. Such instruction could also be used to improve the assessment quality of other types of experts who are required to integrate statistical information or make probabilistic assessments.
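The two coherence criteria used here — additivity of complementary judgments and compliance with Bayes' theorem — can be stated directly in code. This is a generic sketch of the normative standards, not the study's scoring procedure; names and the tolerance are assumptions.

```python
def bayes_posterior(prior, like_h, like_not_h):
    """Normative posterior P(H|E) via Bayes' theorem for a binary
    hypothesis: prior * P(E|H) normalized over both hypotheses."""
    num = prior * like_h
    return num / (num + (1.0 - prior) * like_not_h)

def is_coherent(p_h, p_not_h, tol=1e-9):
    """Binary complementarity: judged probabilities of a hypothesis
    and its negation should sum to one."""
    return abs(p_h + p_not_h - 1.0) <= tol
```

For example, a prior of 0.5 with likelihoods 0.8 and 0.2 yields a posterior of 0.8; a judge reporting 0.7 for a hypothesis and 0.4 for its negation violates additivity.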
Wave scheduling - Decentralized scheduling of task forces in multicomputers
NASA Technical Reports Server (NTRS)
Van Tilborg, A. M.; Wittie, L. D.
1984-01-01
Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.
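The recursive-subdivision core of wave scheduling can be sketched as follows. This is a hypothetical, much-simplified rendering: the real system issues wavefront-like commands across a distributed virtual machine, uses a cost model to vary its rules with workload, and tolerates node failures, none of which appears here.

```python
def wave_schedule(tasks, nodes):
    """Recursively split a task force and its pool of processing
    elements in half, delegating each half to a child scheduler,
    until a scheduler holds a single task or a single node."""
    if len(tasks) <= 1 or len(nodes) <= 1:
        # Leaf scheduler: hand all remaining tasks to the first node.
        return {node: list(tasks) for node in nodes[:1]}
    mid_t, mid_n = len(tasks) // 2, len(nodes) // 2
    assignment = wave_schedule(tasks[:mid_t], nodes[:mid_n])
    assignment.update(wave_schedule(tasks[mid_t:], nodes[mid_n:]))
    return assignment
```

Every task ends up on exactly one node, mirroring how subdivision lets many schedulers work without central coordination.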
A COMPARISON OF GALAXY COUNTING TECHNIQUES IN SPECTROSCOPICALLY UNDERSAMPLED REGIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specian, Mike A.; Szalay, Alex S., E-mail: mspecia1@jhu.edu, E-mail: szalay@jhu.edu
2016-11-01
Accurate measures of galactic overdensities are invaluable for precision cosmology. Obtaining these measurements is complicated when members of one’s galaxy sample lack radial depths, most commonly derived via spectroscopic redshifts. In this paper, we utilize the Sloan Digital Sky Survey’s Main Galaxy Sample to compare seven methods of counting galaxies in cells when many of those galaxies lack redshifts. These methods fall into three categories: assigning galaxies discrete redshifts, scaling the numbers counted using regions’ spectroscopic completeness properties, and employing probabilistic techniques. We split spectroscopically undersampled regions into three types: those inside the spectroscopic footprint, those outside but adjacent to it, and those distant from it. Through Monte Carlo simulations, we demonstrate that the preferred counting techniques are a function of region type, cell size, and redshift. We conclude by reporting optimal counting strategies under a variety of conditions.
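Two of the counting strategies named above — completeness scaling and probabilistic counting — reduce to one-line estimators, sketched here with hypothetical names (the paper's seven methods are not reproduced):

```python
def completeness_scaled_count(n_spec, completeness):
    """Estimate the true count in a cell by scaling the number of
    galaxies with spectroscopic redshifts by the region's
    completeness (fraction of targets that received spectra)."""
    if completeness <= 0:
        raise ValueError("completeness must be positive")
    return n_spec / completeness

def probabilistic_count(p_in_cell):
    """Expected count in a cell from per-galaxy probabilities of
    lying within it (e.g. derived from photometric redshift PDFs)."""
    return sum(p_in_cell)
```

For instance, 80 spectroscopic galaxies in a region that is 80% complete scale to an estimated 100; three photometric galaxies with membership probabilities 0.5, 0.25, and 0.25 contribute an expected count of one.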
Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi
2007-01-01
The objective of this work is the construction of a correlation between characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performances in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to estimate the differences in variance and information content of various attributes and to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks were used to create various semi-empirical models. Finally, a robust classification model was obtained that assigns solid compounds, described by selected attributes as input, to an appropriate performance class in the model reaction. The results make it evident that mathematical support for the primary attribute set proposed by chemists is highly desirable.
Behavior and neural basis of near-optimal visual search
Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre
2013-01-01
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
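The nonlinear integration rule for target detection can be sketched in a few lines. When a target is present at exactly one of N locations with equal probability, the ideal observer combines per-item local log-likelihood ratios d_i as LLR = log((1/N) Σ exp(d_i)); trial-to-trial reliability enters through the d_i themselves. This is a standard textbook form of the rule, not the paper's neural-network implementation.

```python
import math

def global_llr(local_llrs):
    """Ideal-observer log-likelihood ratio for 'target present'
    when the target occupies one of N locations uniformly:
    log of the mean of exp(d_i) over the per-item local LLRs."""
    n = len(local_llrs)
    return math.log(sum(math.exp(d) for d in local_llrs) / n)
```

When every item carries the same evidence d, the rule returns d; a single strongly diagnostic item dominates the sum, which is what distinguishes this rule from simple averaging.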
Seismic hazard in the Nation's breadbasket
Boyd, Oliver; Haller, Kathleen; Luco, Nicolas; Moschetti, Morgan P.; Mueller, Charles; Petersen, Mark D.; Rezaeian, Sanaz; Rubinstein, Justin L.
2015-01-01
The USGS National Seismic Hazard Maps were updated in 2014 and included several important changes for the central United States (CUS). Background seismicity sources were improved using a new moment-magnitude-based catalog; a new adaptive, nearest-neighbor smoothing kernel was implemented; and maximum magnitudes for background sources were updated. Areal source zones developed by the Central and Eastern United States Seismic Source Characterization for Nuclear Facilities project were simplified and adopted. The weighting scheme for ground motion models was updated, giving more weight to models with a faster attenuation with distance compared to the previous maps. Overall, hazard changes (2% probability of exceedance in 50 years, across a range of ground-motion frequencies) were smaller than 10% in most of the CUS relative to the 2008 USGS maps despite new ground motion models and their assigned logic tree weights that reduced the probabilistic ground motions by 5–20%.
Fragoulis, George; Merli, Annalisa; Reeves, Graham; Meregalli, Giovanna; Stenberg, Kristofer; Tanaka, Taku; Capri, Ettore
2011-06-01
Quinoxyfen is a fungicide of the phenoxyquinoline class used to control powdery mildew, Uncinula necator (Schw.) Burr. Owing to its high persistence and strong sorption in soil, it could represent a risk for soil organisms if they are exposed at ecologically relevant concentrations. The objective of this paper is to predict the bioconcentration factors (BCFs) of quinoxyfen in earthworms, selected as a representative soil organism, and to assess the uncertainty in the estimation of this parameter. Three fields in each of four vineyards in southern and northern Italy were sampled over two successive years. The measured BCFs varied over time, possibly owing to seasonal changes and the consequent changes in the behaviour and ecology of earthworms. Quinoxyfen did not accumulate in soil: mean soil concentrations at the end of the 2 year monitoring period ranged from 9.16 to 16.0 µg kg⁻¹ dw for the Verona province and from 23.9 to 37.5 µg kg⁻¹ dw for the Taranto province, with up to eight applications per season. To assess the uncertainty of the BCF in earthworms, a probabilistic approach was used: weighted bootstrapping techniques were first applied to build a generic probability density function (PDF) accounting for variability and incompleteness of knowledge. The generic PDF was then used to derive prior distribution functions, which, by application of Bayes' theorem, were updated with the new measurements, and a posterior distribution was finally created. The study is a good example of probabilistic risk assessment. The means of the mean and SD posterior estimates of log BCFworm (2.06, 0.91) are the 'best estimate' values. Further risk assessment of quinoxyfen and other phenoxyquinoline fungicides, and realistic representative scenarios for modelling exercises required for future authorization and post-authorization requirements, can now use this value as input. Copyright © 2011 Society of Chemical Industry.
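The prior-to-posterior step can be illustrated with the simplest conjugate case: a Normal prior on a mean (e.g. log BCF) with known measurement variance. This is a sketch of the general mechanism only; the study's weighted-bootstrap prior and its actual distributions are not reproduced, and all numbers below are hypothetical.

```python
def normal_update(prior_mean, prior_var, data, data_var):
    """Conjugate Bayesian update of a Normal prior on a mean with
    Normally distributed measurements of known variance. Precisions
    (inverse variances) add; the posterior mean is their
    precision-weighted combination."""
    n = len(data)
    xbar = sum(data) / n
    post_prec = 1.0 / prior_var + n / data_var
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + n * xbar / data_var)
    return post_mean, post_var
```

Each new measurement tightens the posterior (its variance can only shrink), which is how the field data sharpen the generic PDF into the reported best-estimate values.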
Shi, R; Morgan, J L; Allopenna, P
1998-02-01
Maternal infant-directed speech in Mandarin Chinese and Turkish (two mother-child dyads each; ages of children between 0;11 and 1;8) was examined to see if cues exist in input that might assist infants' assignment of words to lexical and functional item categories. Distributional, phonological, and acoustic measures were analysed. In each language, lexical and functional items (i.e. syllabic morphemes) differed significantly on numerous measures. Despite differences in mean values between categories, distributions of values typically displayed substantial overlap. However, simulations with self-organizing neural networks supported the conclusion that although individual dimensions had low cue validity, in each language multidimensional constellations of presyntactic cues are sufficient to guide assignment of words to rudimentary grammatical categories.