Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
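The histogram construction described above can be sketched in a few lines. This is a minimal illustration, not the proposed modulation scheme itself: the half-cycle sinusoid, sample count, and bin count are arbitrary choices.

```python
import math

def waveform_pdf(amplitude=1.0, n_samples=1000, n_bins=10):
    """Sample one half cycle of a sinusoid and build a normalized
    histogram (an empirical PDF) of the sample values."""
    samples = [amplitude * math.sin(math.pi * k / n_samples)
               for k in range(n_samples)]
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for s in samples:
        idx = min(int((s - lo) / width), n_bins - 1)
        counts[idx] += 1
    return [c / n_samples for c in counts]

pdf = waveform_pdf()
```

For a sinusoid, the resulting PDF concentrates mass near the amplitude extremes (the arcsine shape), which is the waveform signature such a modulator would manipulate.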
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
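The significance of very few observed errors can be illustrated with an exact (Clopper-Pearson-style) upper confidence bound, found here by bisecting the binomial tail. This is a generic sketch of that kind of computation, not the paper's extended confidence-interval construction.

```python
import math

def upper_conf_bound(k, n, confidence=0.95):
    """Exact upper confidence bound on an error probability when
    k errors were observed in n trials: the largest p for which
    P(X <= k) under Binomial(n, p) is still at least 1 - confidence."""
    alpha = 1.0 - confidence

    def cdf(p):  # P(X <= k) for X ~ Binomial(n, p); only k+1 terms
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k + 1))

    lo, hi = 0.0, 1.0
    for _ in range(100):          # bisect on the monotone cdf
        mid = (lo + hi) / 2
        if cdf(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

# Two decoding errors in one million trials:
bound = upper_conf_bound(2, 10**6)
```

With zero observed errors the bound reduces to the familiar "rule of three" (about 3/n); with two errors in a million trials it is roughly 6.3e-6, far above the naive point estimate of 2e-6.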
Direct Updating of an RNA Base-Pairing Probability Matrix with Marginal Probability Constraints
2012-01-01
Abstract A base-pairing probability matrix (BPPM) stores the probabilities for every possible base pair in an RNA sequence and has been used in many algorithms in RNA informatics (e.g., RNA secondary structure prediction and motif search). In this study, we propose a novel algorithm to perform iterative updates of a given BPPM, satisfying marginal probability constraints that are (approximately) given by recently developed biochemical experiments, such as SHAPE, PAR, and FragSeq. The method is easily implemented and is applicable to common models for RNA secondary structures, such as energy-based or machine-learning–based models. In this article, we focus mainly on the details of the algorithms, although preliminary computational experiments will also be presented. PMID:23210474
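One simple way to picture an iterative BPPM update toward measured marginals is a symmetric Sinkhorn-style rescaling of row sums. This toy sketch is not the authors' algorithm; the matrix, target marginals, and square-root scaling rule are illustrative assumptions.

```python
import math

def fit_marginals(bppm, target, n_iter=1000):
    """Iteratively rescale a symmetric base-pairing probability matrix
    so that its row sums (the per-base pairing probabilities) approach
    the target marginals."""
    n = len(bppm)
    P = [row[:] for row in bppm]
    for _ in range(n_iter):
        rows = [sum(P[i]) for i in range(n)]
        scale = [target[i] / rows[i] if rows[i] > 0 else 1.0
                 for i in range(n)]
        for i in range(n):
            for j in range(n):
                # sqrt scaling keeps the matrix symmetric
                P[i][j] *= math.sqrt(scale[i] * scale[j])
    return P

bppm = [[0.0, 0.2, 0.1],
        [0.2, 0.0, 0.3],
        [0.1, 0.3, 0.0]]
fitted = fit_marginals(bppm, target=[0.5, 0.4, 0.3])
```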
Vehicle Detection Based on Probability Hypothesis Density Filter.
Zhang, Feihu; Knoll, Alois
2016-01-01
In the past decade, vehicle detection has improved significantly. By utilizing cameras, vehicles can be detected in Regions of Interest (ROI) in complex environments. However, vision techniques often suffer from false positives and a limited field of view. In this paper, a LiDAR-based vehicle detection approach using the Probability Hypothesis Density (PHD) filter is proposed. The proposed approach consists of two phases: a hypothesis generation phase to detect potential objects and a hypothesis verification phase to classify objects. The performance of the proposed approach is evaluated in complex scenarios and compared with the state of the art. PMID:27070621
Significance probability mapping: an aid in the topographic analysis of brain electrical activity.
Duffy, F H; Bartels, P H; Burchfiel, J L
1981-05-01
We illustrate the application of significance probability mapping (SPM) to the analysis of topographic maps of spectrally analyzed EEG and visual evoked potential (VEP) activity from patients with brain tumors, boys with dyslexia, and control subjects. When the VEP topographic plots of tumor patients were displayed as the number of standard deviations from a reference mean, more subjects were correctly identified than by inspection of the underlying raw data. When topographic plots of EEG alpha activity obtained while listening to speech or music were compared by t statistic to plots of resting alpha activity, regions of cortex presumably activated by speech or music were delineated. Different regions were defined in dyslexic boys and controls. We propose that SPM will prove valuable in the regional localization of normal and abnormal functions in other clinical situations. PMID:6165544
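The SPM display of "number of standard deviations from a reference mean" amounts to a per-channel z-score against a reference group. A minimal sketch, with hypothetical electrode labels:

```python
import statistics

def z_map(subject, reference_group):
    """Express each electrode's value as the number of standard
    deviations from the reference-group mean (the SPM display)."""
    zmap = {}
    for ch, value in subject.items():
        ref = [s[ch] for s in reference_group]
        mu = statistics.mean(ref)
        sd = statistics.stdev(ref)
        zmap[ch] = (value - mu) / sd
    return zmap
```

A topographic plot of these z values (one per electrode) is the significance probability map; the t-statistic comparison between conditions is the paired analogue of the same idea.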
Probability-based nitrate contamination map of groundwater in Kinmen.
Liu, Chen-Wuing; Wang, Yeuh-Bin; Jang, Cheng-Shin
2013-12-01
Groundwater supplies over 50% of the drinking water in Kinmen. Approximately 16.8% of groundwater samples in Kinmen exceed the drinking water quality standard (DWQS) for NO3(-)-N (10 mg/L). Residents who drink highly nitrate-polluted groundwater face a potential health risk. To formulate an effective water quality management plan and assure safe drinking water in Kinmen, a detailed spatial distribution of nitrate-N in groundwater is a prerequisite. The aim of this study is to develop an efficient scheme for evaluating the spatial distribution of nitrate-N in residential well water using a logistic regression (LR) model. A probability-based nitrate-N contamination map of Kinmen is constructed. The LR model predicted the binary occurrence probability of groundwater nitrate-N concentrations exceeding the DWQS from simple measurement variables as independent variables, including sampling season, soil type, water table depth, pH, EC, DO, and Eh. The results reveal that three statistically significant explanatory variables, soil type, pH, and EC, are selected by the forward stepwise LR analysis. The total ratio of correct classification reaches 92.7%. The map shows the highest probability of nitrate-N contamination in the central zone, indicating that groundwater in the central zone should not be used for drinking purposes. Furthermore, a handy EC-pH-probability curve for nitrate-N exceeding the DWQS threshold was developed. This curve can be used for preliminary screening of nitrate-N contamination in Kinmen groundwater. This study recommends that the local agency implement best management practice strategies to control nonpoint nitrogen sources and carry out systematic monitoring of groundwater quality in residential wells in the high nitrate-N contamination zones. PMID:23892715
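A logistic-regression exceedance probability has the form sketched below. The coefficients are purely illustrative placeholders, not the fitted Kinmen model; a real application would also include the soil-type term selected by the stepwise analysis.

```python
import math

def exceedance_probability(ec, ph, b0=-8.0, b_ec=0.004, b_ph=0.6):
    """Logistic-model probability that nitrate-N exceeds the
    drinking-water standard, as a function of electrical conductivity
    (EC) and pH. Coefficients here are illustrative only."""
    z = b0 + b_ec * ec + b_ph * ph
    return 1.0 / (1.0 + math.exp(-z))
```

Plotting this function over an EC-pH grid yields exactly the kind of EC-pH-probability screening curve the abstract describes.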
PROBABILITY BASED CORROSION CONTROL FOR WASTE TANKS - PART II
Hoffman, E.; Edwards, T.
2010-12-09
As part of an ongoing study to evaluate the discontinuity in the corrosion controls at the SRS tank farm, a study was conducted this year to assess the minimum inhibitor concentrations for solutions below 1 molar nitrate, see Figure 1. Current controls on the tank farm solution chemistry are in place to prevent the initiation and propagation of pitting and stress corrosion cracking in the primary steel waste tanks. The controls are based upon a series of experiments performed with simulated solutions on materials used for construction of the tanks, namely ASTM A537 carbon steel (A537). During FY09, an experimental program was undertaken to investigate the risk associated with reducing the minimum molar nitrite concentration required to confidently inhibit pitting in dilute solutions (i.e., less than 1 molar nitrate). The experimental results and conclusions herein provide a statistical basis to quantify the probability of pitting for the tank wall exposed to various solutions with dilute concentrations of nitrate and nitrite. Understanding the probability of pitting will allow the facility to make tank-specific, risk-based decisions for chemistry control. Based on previous electrochemical testing, a statistical test matrix was developed to refine and solidify the application of the statistical mixture/amount model to corrosion of A537 steel. A mixture/amount model was identified based on statistical analysis of recent and historically collected electrochemical data. This model provides a more complex relationship between the nitrate and nitrite concentrations and the probability of pitting than is represented by the model underlying the current chemistry control program, and its use may provide a technical basis for the utilization of less nitrite to inhibit pitting at concentrations below 1 molar nitrate. FY09 results fit within the mixture/amount model, and further refine the nitrate regime in which the model is applicable. The combination of visual observations and cyclic
Lancet, D; Sadovsky, E; Seidemann, E
1993-04-15
A generalized phenomenological model is presented for stereospecific recognition between biological receptors and their ligands. We ask what the distribution of binding constants psi(K) is between an arbitrary ligand and members of a large receptor repertoire, such as immunoglobulins or olfactory receptors. For binding surfaces with B potential subsites and S different types of subsite configurations, the number of successful elementary interactions obeys a binomial distribution. The discrete probability function psi(K) is then derived with assumptions on alpha, the free energy contribution per elementary interaction. The functional form of psi(K) may be universal, although the parameter values could vary for different ligand types. An estimate of the parameter values of psi(K) for iodovanillin, an analog of odorants and immunological haptens, is obtained by equilibrium dialysis experiments with nonimmune antibodies. Based on a simple relationship, predicted by the model, between the size of a receptor repertoire and its average maximal affinity toward an arbitrary ligand, the size of the olfactory receptor repertoire (Nolf) is calculated as 300-1000, in very good agreement with recent molecular biological studies. A very similar estimate, Nolf = 500, is independently derived by relating a theoretical distribution of maxima for psi(K) with published human olfactory threshold variations. The present model also has implications for the question of olfactory coding and for the analysis of specific anosmias, genetic deficits in perceiving particular odorants. More generally, the proposed model provides a better understanding of ligand specificity in biological receptors and could help in understanding their evolution. PMID:8475121
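The binomial core of the model can be written down directly, following the abstract's notation (B subsites, S subsite types). The value of alpha and the mapping K = exp(alpha * b) below are assumptions for illustration, not the paper's fitted parameters.

```python
import math

def psi(b, B, S):
    """Probability that exactly b of B potential subsites interact
    successfully, each matching with probability 1/S (binomial)."""
    p = 1.0 / S
    return math.comb(B, b) * p**b * (1 - p)**(B - b)

def binding_constant(b, alpha=1.5):
    """Binding constant from b elementary interactions, assuming each
    contributes a free energy of alpha (in units of RT); the value of
    alpha here is an arbitrary placeholder."""
    return math.exp(alpha * b)
```

Mapping psi over b = 0..B and transforming b to K through binding_constant gives the discrete distribution of binding constants psi(K) the abstract derives.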
PROBABILITY BASED CORROSION CONTROL FOR LIQUID WASTE TANKS - PART III
Hoffman, E.; Edwards, T.
2010-12-09
The liquid waste chemistry control program is designed to reduce the occurrence of pitting corrosion on tank walls. The chemistry control program has been implemented, in part, by applying engineering judgment safety factors to experimental data. However, the simple application of a general safety factor can result in the use of excessive corrosion inhibiting agents. The required use of excess corrosion inhibitors can be costly for tank maintenance, waste processing, and future tank closure. It is proposed that a probability-based approach can be used to quantify the risk associated with the chemistry control program. This approach can lead to the application of tank-specific chemistry control programs, reducing the overall costs associated with overly conservative use of inhibitor. Furthermore, when using nitrite as an inhibitor, the current chemistry control program is based on a linear model in which increased aggressive species require increased protective species. This linear model was primarily supported by experimental data obtained from dilute solutions with nitrate concentrations less than 0.6 M, but is used to produce the current chemistry control program up to 1.0 M nitrate. Therefore, in the nitrate space between 0.6 and 1.0 M, the current control limit is based on the assumption that the linear model developed from data in the <0.6 M region is applicable in the 0.6-1.0 M region. Due to this assumption, further investigation of the nitrate region of 0.6 M to 1.0 M has potential for significant inhibitor reduction while maintaining the same level of corrosion risk associated with the current chemistry control program. Ongoing studies have been conducted in FY07, FY08, FY09 and FY10 to evaluate the corrosion controls at the SRS tank farm and to assess the minimum nitrite concentrations to inhibit pitting in ASTM A537 carbon steel below 1.0 molar nitrate. The experimentation from FY08 suggested a non-linear model known as the mixture/amount model could be used to predict
ProbOnto: ontology and knowledge base of probability distributions
Swat, Maciej J.; Grenon, Pierre; Wimalaratne, Sarala
2016-01-01
Motivation: Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed ontology exists, nor any database allowing programmatic access. Results: ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability and Implementation: http://probonto.org Contact: mjswat@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153608
Lake Superior Phytoplankton Characterization from the 2006 Probability Based Survey
We conducted a late summer probability based survey of Lake Superior in 2006 which consisted of 52 sites stratified across 3 depth zones. As part of this effort, we collected composite phytoplankton samples from the epilimnion and the fluorescence maxima (Fmax) at 29 of the site...
Walker, J.D.; Burchfiel, B.C.; Royden, L.H.
1983-02-01
The upper part of the Moenkopi Formation in the Northern Clark Mountains, Southeastern California, contains conglomerate beds whose clasts comprise igneous, metamorphic, and sedimentary rocks. Metamorphic clasts include foliated granite, meta-arkose, and quartzite, probably derived from older Precambrian basement and younger Precambrian clastic rocks. Volcanic clasts are altered plagioclase-bearing rocks, and sedimentary clasts were derived from Paleozoic miogeoclinal rocks. Paleocurrent data indicate that the clasts had a source to the southwest. An age of late Early or early Middle Triassic has been tentatively assigned to these conglomerates. These conglomerates indicate that Late Permian to Early Triassic deformational events in this part of the orogen affected rocks much farther east than has been previously recognized.
Assessing magnitude probability distribution through physics-based rupture scenarios
NASA Astrophysics Data System (ADS)
Hok, Sébastien; Durand, Virginie; Bernard, Pascal; Scotti, Oona
2016-04-01
When faced with a complex network of faults in a seismic hazard assessment study, the first question raised is to what extent the fault network is connected and what the probability is that an earthquake simultaneously ruptures a series of neighboring segments. Physics-based dynamic rupture models can provide useful insight as to which rupture scenario is most probable, provided that an exhaustive exploration of the variability of the input parameters necessary for the dynamic rupture modeling is accounted for. Given the random nature of some parameters (e.g. hypocenter location) and the limitation of our knowledge, we used a logic-tree approach in order to build the different scenarios and to be able to associate them with a probability. The methodology is applied to the three main faults located along the southern coast of the West Corinth rift. Our logic tree takes into account different hypotheses for: fault geometry, location of hypocenter, seismic cycle position, and fracture energy on the fault plane. The variability of these parameters is discussed, and the different values tested are weighted accordingly. 64 scenarios resulting from 64 parameter combinations were included. Sensitivity studies were done to illustrate which parameters control the variability of the results. Given the weights of the input parameters, we evaluated the probability of obtaining a full network break to be 15%, while single-segment ruptures represent 50% of the scenarios. These rupture scenario probability distributions along the three faults of the West Corinth rift fault network can then be used as input to a seismic hazard calculation.
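The logic-tree weighting can be sketched as an enumeration over branch combinations, multiplying branch weights and aggregating by outcome. The branch names, weights, and toy classifier below are hypothetical, not the study's 64-scenario tree.

```python
from itertools import product

# Illustrative branch weights (not the study's actual values).
branches = {
    "geometry":   {"connected": 0.4, "segmented": 0.6},
    "hypocenter": {"west": 0.5, "east": 0.5},
    "cycle":      {"late": 0.3, "mid": 0.7},
}

def scenario_probabilities(classify):
    """Weight every branch combination of the logic tree and aggregate
    the probability of each rupture outcome."""
    totals = {}
    names = list(branches)
    for combo in product(*(branches[n].items() for n in names)):
        weight = 1.0
        params = {}
        for name, (choice, w) in zip(names, combo):
            weight *= w
            params[name] = choice
        outcome = classify(params)
        totals[outcome] = totals.get(outcome, 0.0) + weight
    return totals

# Toy classifier: a full-network break requires a connected geometry
# and a late position in the seismic cycle.
probs = scenario_probabilities(
    lambda p: "full" if p["geometry"] == "connected" and p["cycle"] == "late"
    else "partial")
```

With these placeholder weights the full-break probability comes out to 0.4 * 0.3 = 0.12; in the study, the classifier would be replaced by the outcome of each dynamic rupture simulation.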
Thomson, R G; Chander, S; Savan, M; Fox, M L
1975-01-01
Six groups of ten beef calves six to eight months of age were shipped from western Canada and observed untreated for one week after arrival. The following parameters were measured daily: body temperature, plasma fibrinogen, nasal bacterial mean colony counts of Pasteurella hemolytica and Pasteurella multocida, total and differential leukocyte counts, packed cell volumes and the following, twice during the week: serum and nasal antibody titres to P. hemolytica and parainfluenza-3 virus. The lungs from 44 of the calves were obtained at post mortem and given a numerical score based on the degree of pneumonia present. Animals were designated SICK and WELL according to body temperature and plasma fibrinogen. The SICK animals had higher nasal mean colony counts of P. hemolytica than the WELL animals. The SICK animals had lower levels of serum antibody to P. hemolytica than the WELL on day 1 but had a greater rise in titre over the week than did the WELL animals. Both groups were similar with regard to serum antibody to parainfluenza-3 virus and there was little change in these titres. The SICK animals had a much greater degree of pneumonia than the WELL. The values of some of the parameters were combined with the data of previously studied animals in order to provide a comparison of SICK and WELL with larger numbers of animals. PMID:164992
QKD-based quantum private query without a failure probability
NASA Astrophysics Data System (ADS)
Liu, Bin; Gao, Fei; Huang, Wei; Wen, QiaoYan
2015-10-01
In this paper, we present a quantum-key-distribution (QKD)-based quantum private query (QPQ) protocol utilizing a single-photon signal of multiple optical pulses. It maintains the advantages of QKD-based QPQ, i.e., it is easy to implement and loss tolerant. In addition, different from the situations in previous QKD-based QPQ protocols, in our protocol the number of items an honest user will obtain is always one and the failure probability is always zero. This characteristic not only improves the stability (in the sense that, ignoring noise and attacks, the protocol would always succeed), but also benefits the privacy of the database (since the database no longer reveals additional secrets to honest users). Furthermore, for the user's privacy, the proposed protocol is cheat sensitive, and for the security of the database, we obtain an upper bound for the leaked information of the database in theory.
Datz, F.L.; Bedont, R.A.; Taylor, A.
1985-05-01
Patients with a pleural effusion on chest x-ray often undergo a lung scan to exclude pulmonary embolism (PE). According to other studies, when the scan shows a perfusion defect equal in size to a radiographic abnormality on chest x-ray, the scan should be classified as indeterminate or intermediate probability for PE. However, since those studies dealt primarily with alveolar infiltrates rather than pleural effusions, the authors undertook a retrospective study to determine the probability of PE in patients with pleural effusion and a matching perfusion defect. The authors reviewed 451 scans and x-rays of patients studied for suspected PE. Of those, 53 had moderate or large perfusion defects secondary to pleural effusion without other significant (>25% of a segment) defects on the scan. Final diagnosis was confirmed by pulmonary angiography (16), thoracentesis (40), venography (11), other radiographic and laboratory studies, and clinical course. Of the 53 patients, only 2 had venous thrombotic disease. One patient had PE on pulmonary angiography; the other had thrombophlebitis on venography. The remainder of the patients had effusions due to congestive heart failure (12), malignancy (12), infection (7), trauma (7), collagen vascular disease (7), sympathetic effusion (3), and unknown etiology (3). The authors conclude that lung scans with significant perfusion defects limited to matching pleural effusions on chest x-ray have a low probability for PE.
The conditional risk probability-based seawall height design method
NASA Astrophysics Data System (ADS)
Yang, Xing; Hu, Xiaodong; Li, Zhiqing
2015-11-01
The determination of the required seawall height is usually based on the combination of wind speed (or wave height) and still water level according to a specified return period, e.g., 50-year return period wind speed and 50-year return period still water level. In reality, the two variables may be only partially correlated, which can lead to over-design (and excess cost) of seawall structures. The return period used for the design of a seawall depends on the economy, society, and natural environment of the region, meaning that a specified risk level of overtopping or damage of a seawall structure is usually allowed. The aim of this paper is to present a conditional risk probability-based seawall height design method which incorporates the correlation of the two variables. For purposes of demonstration, wind speeds and water levels collected from Jiangsu, China are analyzed. The results show this method can improve seawall height design accuracy.
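The gap between the independence assumption and the observed joint behaviour can be checked empirically from paired records. A minimal sketch with toy data; thresholds and the data format are assumptions.

```python
def joint_exceedance(pairs, wind_thresh, level_thresh):
    """Empirical probability that wind speed and still water level
    exceed their thresholds together, alongside the product of the
    marginal exceedance probabilities (the independence estimate)."""
    n = len(pairs)
    both = sum(1 for w, h in pairs if w > wind_thresh and h > level_thresh)
    p_w = sum(1 for w, _ in pairs if w > wind_thresh) / n
    p_h = sum(1 for _, h in pairs if h > level_thresh) / n
    return both / n, p_w * p_h
```

When the empirical joint probability differs from the product of the marginals, combining two single-variable return periods misstates the true joint return period, which is the motivation for the conditional-risk approach.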
Gesture Recognition Based on the Probability Distribution of Arm Trajectories
NASA Astrophysics Data System (ADS)
Wan, Khairunizam; Sawada, Hideyuki
The use of human motions for interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures serve as non-verbal media for humans to communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified by using a fuzzy technique. Experimental results show that the use of features extracted from arm trajectories works effectively for the recognition of dynamic human gestures, and gives good performance in classifying various gesture patterns.
Image-based camera motion estimation using prior probabilities
NASA Astrophysics Data System (ADS)
Sargent, Dusty; Park, Sun Young; Spofford, Inbar; Vosburgh, Kirby
2011-03-01
Image-based camera motion estimation from video or still images is a difficult problem in the field of computer vision. Many algorithms have been proposed for estimating intrinsic camera parameters, detecting and matching features between images, calculating extrinsic camera parameters based on those features, and optimizing the recovered parameters with nonlinear methods. These steps in the camera motion inference process all face challenges in practical applications: locating distinctive features can be difficult in many types of scenes given the limited capabilities of current feature detectors, camera motion inference can easily fail in the presence of noise and outliers in the matched features, and the error surfaces in optimization typically contain many suboptimal local minima. The problems faced by these techniques are compounded when they are applied to medical video captured by an endoscope, which presents further challenges such as non-rigid scenery and severe barrel distortion of the images. In this paper, we study these problems and propose the use of prior probabilities to stabilize camera motion estimation for the application of computing endoscope motion sequences in colonoscopy. Colonoscopy presents a special case for camera motion estimation in which it is possible to characterize typical motion sequences of the endoscope. As the endoscope is restricted to move within a roughly tube-shaped structure, forward/backward motion is expected, with only small amounts of rotation and horizontal movement. We formulate a probabilistic model of endoscope motion by maneuvering an endoscope and attached magnetic tracker through a synthetic colon model and fitting a distribution to the observed motion of the magnetic tracker. This model enables us to estimate the probability of the current endoscope motion given previously observed motion in the sequence. We add these prior probabilities into the camera motion calculation as an additional penalty term in RANSAC
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. PMID:25363706
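The system-1 "primitive average" for a conjunction, contrasted with the normative product rule, can be written in one line each. This is a sketch of the predicted violation of the joint probability distribution, not the theory's full implementation.

```python
def primitive_conjunction(p_a, p_b):
    """System 1's primitive average for P(A and B): the non-normative
    shortcut the theory predicts people use."""
    return (p_a + p_b) / 2

def product_rule(p_a, p_b):
    """The normative estimate for P(A and B) under independence."""
    return p_a * p_b
```

Note that the primitive average can exceed min(P(A), P(B)), reproducing the conjunction fallacy the experiments observed.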
A probability-based formula for calculating interobserver agreement
Yelton, Ann R.; Wildman, Beth G.; Erickson, Marilyn T.
1977-01-01
Estimates of observer agreement are necessary to assess the acceptability of interval data. A common method for assessing observer agreement, per cent agreement, includes several major weaknesses and varies as a function of the frequency of behavior recorded and the inclusion or exclusion of agreements on nonoccurrences. Also, agreements that might be expected to occur by chance are not taken into account. An alternative method for assessing observer agreement that determines the exact probability that the obtained number of agreements or better would have occurred by chance is presented and explained. Agreements on both occurrences and nonoccurrences of behavior are considered in the calculation of this probability. PMID:16795541
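Treating per-interval agreements as a binomial outcome gives the exact tail probability described above. A sketch assuming each observer independently scores an occurrence at a fixed rate, with agreements on both occurrences and nonoccurrences counted.

```python
import math

def chance_agreement_probability(n_intervals, n_agreements, p_a, p_b):
    """Exact probability of obtaining at least the observed number of
    agreements by chance, given each observer's overall rate of
    scoring an occurrence."""
    # Per-interval chance agreement: both score an occurrence, or
    # both score a nonoccurrence.
    p = p_a * p_b + (1 - p_a) * (1 - p_b)
    return sum(math.comb(n_intervals, k) * p**k * (1 - p)**(n_intervals - k)
               for k in range(n_agreements, n_intervals + 1))
```

A small tail probability indicates that the observed agreement is unlikely to be a chance artifact, which is the acceptability criterion the formula supports.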
Success Probability Analysis for Shuttle Based Microgravity Experiments
NASA Technical Reports Server (NTRS)
Liou, Ying-Hsin Andrew
1996-01-01
Presented in this report are the results of data analysis of shuttle-based microgravity flight experiments. Potential factors were identified in the previous grant period, and in this period 26 factors were selected for data analysis. In this project, the degree of success was developed and used as the performance measure. 293 of the 391 experiments in the Lewis Research Center Microgravity Database were assigned degrees of success. Frequency analysis and analysis of variance were conducted to determine the significance of the factors that affect experiment success.
NASA Astrophysics Data System (ADS)
Loukianov, Andrey A.; Sugisaka, Masanori
This paper presents a vision and landmark based approach to improve the efficiency of probability grid Markov localization for mobile robots. The proposed approach uses visual landmarks that can be detected by a rotating video camera on the robot. We assume that visual landmark positions in the map are known and that each landmark can be assigned to a certain landmark class. The method uses classes of observed landmarks and their relative arrangement to select regions in the robot posture space where the location probability density function is to be updated. Subsequent computations are performed only in these selected update regions thus the computational workload is significantly reduced. Probabilistic landmark-based localization method, details of the map and robot perception are discussed. A technique to compute the update regions and their parameters for selective computation is introduced. Simulation results are presented to show the effectiveness of the approach.
Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2016-06-01
Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and the high variability of chemical species populations. An approach to accelerate the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and accuracy of this simulation approach, and even more so when these small-population species are involved in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection mechanism to select the next reaction firings. The reaction is ensured to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if that probability is set to one. Our new algorithm improves the computational cost of selecting the next reaction firing and reduces the cost of updating reaction propensities.
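The exact rejection step that the approximate algorithm builds on can be sketched as follows: draw a candidate reaction with probability proportional to its propensity upper bound, then accept it with the ratio of exact propensity to bound. The bounded-acceptance refinement of the paper is omitted; this is only the underlying rejection mechanism.

```python
import random

def select_reaction(upper_bounds, propensity, rng=random):
    """Rejection-based selection of the next reaction to fire:
    candidates are drawn proportionally to their propensity upper
    bounds, then accepted with ratio propensity / bound."""
    total = sum(upper_bounds)
    while True:
        r = rng.random() * total            # pick candidate by bounds
        idx, acc = 0, upper_bounds[0]
        while r > acc:
            idx += 1
            acc += upper_bounds[idx]
        if rng.random() <= propensity(idx) / upper_bounds[idx]:
            return idx                      # accepted firing
```

Because the acceptance test only needs the exact propensity of the single candidate, propensities of the other reactions need not be recomputed at every step, which is the source of the speedup.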
Quantitative Determination of the Probability of Multiple-Motor Transport in Bead-Based Assays.
Li, Qiaochu; King, Stephen J; Gopinathan, Ajay; Xu, Jing
2016-06-21
With their longest dimension typically being less than 100 nm, molecular motors are significantly below the optical-resolution limit. Despite substantial advances in fluorescence-based imaging methodologies, labeling with beads remains critical for optical-trapping-based investigations of molecular motors. A key experimental challenge in bead-based assays is that the number of motors on a bead is not well defined. Particularly for single-molecule investigations, the probability of single- versus multiple-motor events has not been experimentally investigated. Here, we used bead travel distance as an indicator of multiple-motor transport and determined the lower-bound probability of bead transport by two or more motors. We limited the ATP concentration to increase our detection sensitivity for multiple- versus single-kinesin transport. Surprisingly, for all but the lowest motor number examined, our measurements exceeded estimations of a previous model by ≥2-fold. To bridge this apparent gap between theory and experiment, we derived a closed-form expression for the probability of bead transport by multiple motors, and constrained the only free parameter in this model using our experimental measurements. Our data indicate that kinesin extends to ∼57 nm during bead transport, suggesting that kinesin exploits its conformational flexibility to interact with microtubules at highly curved interfaces such as those present for vesicle transport in cells. To our knowledge, our findings provide the first experimentally constrained guide for estimating the probability of multiple-motor transport in optical trapping studies. The experimental approach utilized here (limiting ATP concentration) may be generally applicable to studies in which molecular motors are labeled with cargos that are artificial or are purified from cellular extracts. PMID:27332130
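A common modeling assumption in such bead assays, which the abstract's closed-form expression refines and constrains experimentally, is that the number of motors bound per bead follows a Poisson distribution. Under that (hypothetical here) assumption, the probability that a motile bead is carried by two or more motors has a simple closed form:

```python
import math

def p_multi_given_motile(lam):
    """P(>= 2 motors | >= 1 motor) when the motor number per bead is
    Poisson(lam). A standard textbook assumption for bead assays, not the
    paper's exact constrained model."""
    p0 = math.exp(-lam)        # P(0 motors)
    p1 = lam * p0              # P(exactly 1 motor)
    return (1.0 - p0 - p1) / (1.0 - p0)

for lam in (0.1, 0.5, 1.0):
    print(f"lambda={lam}: P(multi | motile) = {p_multi_given_motile(lam):.3f}")
```

Even at a mean of one motor per bead, roughly 40% of motile beads are multi-motor under this model, which illustrates why a lower-bound probability of multiple-motor transport matters for single-molecule claims.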
Bureau, Alexandre; Younkin, Samuel G.; Parker, Margaret M.; Bailey-Wilson, Joan E.; Marazita, Mary L.; Murray, Jeffrey C.; Mangold, Elisabeth; Albacha-Hejazi, Hasan; Beaty, Terri H.; Ruczinski, Ingo
2014-01-01
Motivation: Family-based designs are regaining popularity for genomic sequencing studies because they provide a way to test cosegregation with disease of variants that are too rare in the population to be tested individually in a conventional case–control study. Results: Where only a few affected subjects per family are sequenced, the probability that any variant would be shared by all affected relatives—given it occurred in any one family member—provides evidence against the null hypothesis of a complete absence of linkage and association. A P-value can be obtained as the sum of the probabilities of sharing events as (or more) extreme in one or more families. We generalize an existing closed-form expression for exact sharing probabilities to more than two relatives per family. When pedigree founders are related, we show that an approximation of sharing probabilities based on empirical estimates of kinship among founders obtained from genome-wide marker data is accurate for low levels of kinship. We also propose a more generally applicable approach based on Monte Carlo simulations. We applied this method to a study of 55 multiplex families with apparent non-syndromic forms of oral clefts from four distinct populations, with whole exome sequences available for two or three affected members per family. The rare single nucleotide variant rs149253049 in ADAMTS9 shared by affected relatives in three Indian families achieved significance after correcting for multiple comparisons (p=2×10−6). Availability and implementation: Source code and binaries of the R package RVsharing are freely available for download at http://cran.r-project.org/web/packages/RVsharing/index.html. Contact: alexandre.bureau@msp.ulaval.ca or ingo@jhu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24740360
Probability method for Cerenkov luminescence tomography based on conformance error minimization.
Ding, Xintao; Wang, Kun; Jie, Biao; Luo, Yonglong; Hu, Zhenhua; Tian, Jie
2014-07-01
Cerenkov luminescence tomography (CLT) was developed to reconstruct a three-dimensional (3D) distribution of radioactive probes inside a living animal. Reconstruction methods are generally performed within a unique framework by searching for the optimum solution. However, the ill-posed nature of the inverse problem usually makes the reconstruction non-robust. In addition, the reconstructed result may not match reality, since the difference between the highest and lowest uptakes of the resulting radiotracers may be considerably large, and the biological significance is therefore lost. In this paper, a probability method based on the minimization of a conformance error is proposed, consisting of qualitative and quantitative modules. The proposed method first pinpoints the organ that contains the light source. Next, we developed a 0-1 linear optimization subject to a space constraint to model the CLT inverse problem, which was transformed into a forward problem by employing a region-growing method to solve the optimization. After running through all of the elements used to grow the sources, a source sequence was obtained. Finally, the probability of each discrete node being the light source inside the organ was reconstructed. One numerical study and two in vivo experiments were conducted to verify the performance of the proposed algorithm, and comparisons were carried out using the hp-finite element method (hp-FEM). The results suggest that our proposed probability method is more robust and reasonable than hp-FEM. PMID:25071951
NASA Astrophysics Data System (ADS)
Zeng, Sen; Huang, Shuangxi; Liu, Yang
Cooperative business process (CBP)-based service-oriented enterprise networks (SOEN) are emerging with the significant advances in enterprise integration and service-oriented architecture. Performance prediction and optimization for CBP-based SOEN is very complex. To meet these challenges, one key point is to reduce an abstract service's waiting number of physical services. This paper introduces a probability-based determination method (PBDM) for an abstract service's waiting number, M_i, and time span, τ_i, for its physical services. The determination of M_i and τ_i is based on the physical services' arrival rule and the distribution functions of their overall performance. In PBDM, the arrival probability of the physical services with the best overall performance value is a predefined reliability. PBDM makes thorough use of the information in the physical services' arrival rule and performance distribution functions, which improves the computational efficiency of scheme design and performance optimization for collaborative business processes in service-oriented computing environments.
ERIC Educational Resources Information Center
Koparan, Timur; Yilmaz, Gül Kaleli
2015-01-01
This research examines the effect of simulation-based probability teaching on prospective teachers' inference skills. In line with this purpose, the aim was to examine the design, implementation and efficiency of a learning environment for experimental probability. Activities were built on modeling, simulation and the…
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
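For a linear limit state in standard normal space, the MPP distance and the approximate probability integration mentioned above reduce to a one-line computation: the reliability index is the distance from the origin to the limit-state surface, and FORM takes Phi(−beta) as the failure probability. The limit state below is illustrative, not one of the paper's examples.

```python
import math
from statistics import NormalDist

def form_pf(a, b):
    """First-order reliability method (FORM) for a linear limit state
    g(u) = b - a.u in standard normal space: the MPP distance (reliability
    index) is beta = b / ||a||, and the failure probability is Phi(-beta)."""
    beta = b / math.sqrt(sum(ai * ai for ai in a))
    return beta, NormalDist().cdf(-beta)

# Illustrative limit state with gradient a = (3, 4) and offset b = 15.
beta, pf = form_pf(a=[3.0, 4.0], b=15.0)
print(beta)            # → 3.0
print(round(pf, 6))    # Phi(-3), about 1.35e-3
```

For nonlinear limit states the MPP must be found by the optimization search the abstract refers to, and SORM adds a curvature correction to Phi(−beta).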
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GAs) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GAs, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GAs on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution, with each distribution in that product controlled by a separate agent. The test functions were selected for their difficulty under either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GAs in rate of descent, avoidance of trapping in false minima, and long-term optimization.
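The core PC loop, in which each agent maintains an independent distribution over its own moves (together forming a product distribution) and reweights it by Boltzmann factors of estimated conditional expected costs, can be sketched as below. The objective, move sets, temperature, and sample counts are all illustrative choices, not the paper's settings.

```python
import math, random

def pc_minimize(cost, moves, iters=40, temp=0.5, samples=300, seed=0):
    """Probability Collectives sketch: sample joint moves from the product
    distribution, estimate E[cost | agent i plays m], and reweight each
    agent's distribution by exp(-E[cost|m] / temp)."""
    rng = random.Random(seed)
    dists = [{m: 1.0 / len(ms) for m in ms} for ms in moves]
    agents = range(len(moves))
    for _ in range(iters):
        tot = [{m: 0.0 for m in moves[i]} for i in agents]
        cnt = [{m: 0 for m in moves[i]} for i in agents]
        for _ in range(samples):
            joint = [rng.choices(list(d), weights=list(d.values()))[0] for d in dists]
            c = cost(joint)
            for i in agents:
                tot[i][joint[i]] += c
                cnt[i][joint[i]] += 1
        for i in agents:   # Boltzmann reweighting of each agent's distribution
            w = {m: math.exp(-(tot[i][m] / cnt[i][m]) / temp) if cnt[i][m] else 1e-12
                 for m in moves[i]}
            z = sum(w.values())
            dists[i] = {m: w[m] / z for m in moves[i]}
    return [max(d, key=d.get) for d in dists]

# Minimize (x-2)^2 + (y-1)^2 over a small grid, one agent per coordinate.
best = pc_minimize(lambda v: (v[0] - 2) ** 2 + (v[1] - 1) ** 2,
                   moves=[range(5), range(5)])
print(best)   # concentrates near the optimum (2, 1)
```

Note the contrast with a GA: no population of candidate solutions is kept, only the parameters of the per-agent distributions.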
Reducing the Probability of Incidents Through Behavior-Based Safety -- An Anomaly or Not?
Turek, John A
2002-07-23
Reducing the probability of incidents through Behavior-Based Safety-an anomaly or not? Can a Behavior-Based Safety (BBS) process reduce the probability of an employee sustaining a work-related injury or illness? This presentation describes the actions taken to implement a sustainable BBS process and evaluates its effectiveness. The BBS process at the Stanford Linear Accelerator Center used a pilot population of national laboratory employees to: Achieve employee and management support; Reduce the probability of employees' sustaining work-related injuries and illnesses; and Provide support for additional funding to expand within the laboratory.
Open cluster membership probability based on K-means clustering algorithm
NASA Astrophysics Data System (ADS)
El Aziz, Mohamed Abd; Selim, I. M.; Essam, A.
2016-05-01
In star-cluster images, the relative coordinate positions of each star with respect to all the other stars are used. The membership of a star cluster is therefore determined by two basic criteria, one for geometric membership and the other for physical (photometric) membership. In this paper, we present a new method for determining open cluster membership based on the K-means clustering algorithm. This algorithm allows us to efficiently discriminate cluster members from field stars. To validate the method we applied it to NGC 188 and NGC 2266 and obtained the membership stars in these clusters. The color-magnitude diagram of the member stars is significantly clearer and shows a well-defined main sequence and a red giant branch in NGC 188, which allows us to better constrain the cluster members and estimate their physical parameters. The membership probabilities have been calculated and compared to those obtained by other methods. The results show that the K-means clustering algorithm can effectively select probable member stars in space without any assumption about the spatial distribution of stars in the cluster or field. Our results are in good agreement with those derived in previous works.
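The K-means membership idea can be sketched on synthetic star positions. The softmax-over-distance membership probability used here is an illustrative choice, not necessarily the paper's exact definition, and the two point clouds stand in for cluster and field stars.

```python
import math, random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means on 2-D positions, with deterministic
    initialisation (evenly spaced input points) so the sketch is reproducible."""
    centres = [points[round(i * (len(points) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist(p, centres[j]))].append(p)
        centres = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centres[j]
                   for j, g in enumerate(groups)]
    return centres

def membership(p, centres):
    """Softmax over negative distances to the centres: an illustrative
    membership probability."""
    w = [math.exp(-dist(p, c)) for c in centres]
    return [x / sum(w) for x in w]

# Synthetic "cluster" stars near (0, 0) and "field" stars near (5, 5).
rng = random.Random(0)
stars = ([(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(30)]
         + [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(30)])
centres = kmeans(stars, k=2)
probs = membership((0.1, -0.2), centres)
print(max(probs) > 0.9)   # a star near the cluster core is a confident member
```

As in the abstract, nothing here assumes a parametric spatial distribution for cluster or field stars; the separation comes purely from the clustering.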
Kausar, A S M Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan
2014-01-01
Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach toward the goal of optimum coverage, which is a key part of designing a wireless network. In this paper, an accelerated three-dimensional ray tracing technique is presented, in which rough surface scattering is included to make the ray tracing more accurate. Here, the rough surface scattering is represented by microfacets, which make it possible to compute the scattered field in all possible directions. New optimization techniques, like dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and to make the ray tracing technique more efficient. In conjunction with the ray tracing technique, a probability-based coverage optimization algorithm is combined with it to make a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects using the DQS technique and by decreasing the ray-object intersection time using the COF technique. The coverage optimization algorithm, in turn, is based on probability theory and finds the minimum number of transmitters and their corresponding positions needed to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve on those of existing algorithms. For the verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by experimental results. PMID:25202733
An efficient surrogate-based method for computing rare failure probability
NASA Astrophysics Data System (ADS)
Li, Jing; Li, Jinglai; Xiu, Dongbin
2011-10-01
In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective way to determine the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed that is highly efficient for rare failure probability computation: it incurs much reduced simulation effort compared to the traditional CE-IS method. In many cases, the new method is capable of capturing failure probabilities as small as 10^-12 to 10^-6 with only several hundred samples.
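A plain CE-IS baseline of the kind the paper accelerates can be sketched for a Gaussian tail probability. The target P(X > 4) for X ~ N(0,1), the pilot size, and the elite fraction are illustrative choices; the surrogate-model part of the paper's method is not shown.

```python
import math, random
from statistics import NormalDist

def ce_is_tail(t, n_pilot=1000, n_final=20000, rho=0.1, seed=1):
    """Cross-entropy importance sampling for P(X > t), X ~ N(0,1): CE
    iterations shift the sampling mean toward the rare region, then a final
    run reweights hits by the likelihood ratio phi(x)/phi(x - mu)."""
    rng = random.Random(seed)
    mu = 0.0
    while True:
        xs = sorted(rng.gauss(mu, 1.0) for _ in range(n_pilot))
        level = xs[int((1 - rho) * n_pilot)]     # (1 - rho) sample quantile
        if level >= t:
            break
        elite = [x for x in xs if x >= level]
        mu = sum(elite) / len(elite)             # CE update of the tilt
    total = 0.0
    for _ in range(n_final):
        x = rng.gauss(mu, 1.0)
        if x > t:
            total += math.exp(-0.5 * x * x + 0.5 * (x - mu) ** 2)
    return total / n_final

est = ce_is_tail(4.0)
print(f"estimate {est:.2e}, exact {NormalDist().cdf(-4.0):.2e}")
```

Plain Monte Carlo would need on the order of 10^7 samples to see even a handful of hits at this level; the CE tilt makes hits common and the likelihood ratio keeps the estimator unbiased.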
PROBABILITY BASED CORROSION CONTROL FOR HIGH LEVEL WASTE TANKS: INTERIM REPORT
Hoffman, E; Karthik Subramanian, K
2008-04-23
Controls on the solution chemistry (minimum nitrite and hydroxide concentrations) are in place to prevent the initiation and propagation of pitting and stress corrosion cracking in high level waste (HLW) tanks. These controls are based upon a series of experiments performed on carbon steel coupons in simulated waste solutions. An experimental program was undertaken to investigate reducing the minimum molar nitrite concentration required to confidently inhibit pitting. A statistical basis to quantify the probability of pitting for the tank wall, when exposed to various dilute solutions, is being developed. Electrochemical and coupon testing are being performed within the framework of the statistical test matrix to determine the minimum necessary inhibitor concentrations and develop a quantitative model to predict pitting propensity. A subset of the original statistical test matrix was used to develop an applied understanding of the corrosion response of the carbon steel in the various environments. The interim results suggest that there exists some critical nitrite concentration that sufficiently inhibits against localized corrosion mechanisms due to nitrates/chlorides/sulfates, beyond which further nitrite additions are unnecessary. The combination of visual observation and the cyclic potentiodynamic polarization scans indicate the potential for significant inhibitor reductions without consequence specifically at nitrate concentrations near 1 M. The complete data sets will be used to determine the statistical basis to confidently inhibit against pitting using nitrite inhibition with the current pH controls. Once complete, a revised chemistry control program will be devised based upon the probability of pitting specifically for dilute solutions which will allow for tank specific chemistry control implementation.
Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.
Jaspersen, Johannes G; Montibeller, Gilberto
2015-07-01
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats. PMID:25850859
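As a simple illustration of turning a pure ranking into numbers, the classical rank-order-centroid weights from multicriteria decision analysis are shown below. This is a stand-in scheme, named plainly: the paper's own approximation is based on a maximum-entropy rule compatible with the expert's ordinal information, which is not reproduced here.

```python
from fractions import Fraction

def rank_order_centroid(n):
    """Rank-order-centroid weights for n ranked items:
    w_k = (1/n) * sum_{i=k}^{n} 1/i. Exact rational arithmetic keeps the
    weights summing to exactly 1."""
    return [sum(Fraction(1, i) for i in range(k, n + 1)) / n
            for k in range(1, n + 1)]

w = rank_order_centroid(4)   # weights 25/48, 13/48, 7/48, 3/48
print(sum(w))                # → 1
```

Like any rank-based rule, this needs only an ordering from the expert, which is what makes such protocols fast under severe time pressure.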
Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports
NASA Technical Reports Server (NTRS)
Bauman, William H., III; Sharp, David; Spratt, Scott; Lafosse, Richard A.
2008-01-01
The objective of this work was to provide forecasters with a tool to indicate the warm season climatological probability of one or more lightning strikes within a circle at a site within a specified time interval. This paper described the AMU work conducted in developing flow regime based climatologies of lightning probabilities for the SLF and seven airports in the NWS MLB CWA in east-central Florida. The paper also described the GUI developed by the AMU that is used to display the data for the operational forecasters. There were challenges working with gridded lightning data as well as the code that accompanied the gridded data. The AMU modified the provided code to be able to produce the climatologies of lightning probabilities based on eight flow regimes for 5-, 10-, 20-, and 30-n mi circles centered on eight sites in 1-, 3-, and 6-hour increments.
NASA Astrophysics Data System (ADS)
Smith, Leonard A.
2010-05-01
whether or not probabilities based on imperfect models can be expected to yield probabilistic odds which are sustainable. Evidence is provided that suggests this is not the case. Even with very good models (good in a root-mean-square sense), the risk of ruin of probabilistic odds is significantly higher than might be expected. Methods for constructing model-based non-probabilistic odds which are sustainable are discussed. The aim here is to be relevant to real-world decision support, and so unrealistic assumptions of equal knowledge, equal compute power, or equal access to information are avoided. Finally, the use of non-probabilistic odds as a method for communicating deep uncertainty (uncertainty in a probability forecast itself) is discussed in the context of other methods, such as stating one's subjective probability that the models will prove inadequate in each particular instance (that is, the probability of a "Big Surprise").
The Role of Probability-Based Inference in an Intelligent Tutoring System.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Gitomer, Drew H.
Probability-based inference in complex networks of interdependent variables is an active topic in statistical research, spurred by such diverse applications as forecasting, pedigree analysis, troubleshooting, and medical diagnosis. This paper concerns the role of Bayesian inference networks for updating student models in intelligent tutoring…
Learning Probabilities in Computer Engineering by Using a Competency- and Problem-Based Approach
ERIC Educational Resources Information Center
Khoumsi, Ahmed; Hadjou, Brahim
2005-01-01
Our department has redesigned its electrical and computer engineering programs by adopting a learning methodology based on competence development, problem solving, and the realization of design projects. In this article, we show how this pedagogical approach has been successfully used for learning probabilities and their application to computer…
HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA
Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...
Hanayama, Nobutane; Sibuya, Masaaki
2016-08-01
In modern biology, theories of aging fall mainly into two groups: damage theories and programed theories. If programed theories are true, the probability that human beings live beyond a specific age will be zero. In contrast, if damage theories are true, such an age does not exist, and any longevity record will eventually be broken. In this article, to examine which is the case, a special type of binomial model based on the generalized Pareto distribution has been applied to data on Japanese centenarians. From the results, it is concluded that the upper limit of the lifetime probability distribution in the Japanese population is estimated to be 123 years. PMID:26362439
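The finite upper limit follows from a negative fitted shape parameter of the generalized Pareto distribution: for threshold u, scale σ, and shape ξ < 0, the support ends at u − σ/ξ, whereas ξ ≥ 0 gives an unbounded tail (the damage-theory case). The parameter values below are illustrative, chosen only to produce a 123-year endpoint; they are not the paper's fit.

```python
def gpd_upper_endpoint(u, sigma, xi):
    """Upper endpoint of a generalized Pareto distribution with threshold u,
    scale sigma and NEGATIVE shape xi: u - sigma / xi. A negative fitted
    shape is what makes a finite lifetime limit possible at all."""
    if xi >= 0:
        raise ValueError("finite endpoint requires xi < 0")
    return u - sigma / xi

# Hypothetical fit: threshold 110 years, scale 6.5, shape -0.5.
print(gpd_upper_endpoint(u=110, sigma=6.5, xi=-0.5))   # → 123.0
```

Testing whether the fitted ξ is significantly below zero is thus the statistical form of the damage-vs-programed question.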
Performance of the Rayleigh task based on the posterior probability of tomographic reconstructions
Hanson, K.M.
1991-01-01
We seek the best possible performance of the Rayleigh task in which one must decide whether a perceived object is a pair of Gaussian-blurred points or a blurred line. Two Bayesian reconstruction algorithms are used, the first based on a Gaussian prior-probability distribution with a nonnegativity constraint and the second based on an entropic prior. In both cases, the reconstructions are found that maximize the posterior probability. We compare the performance of the Rayleigh task obtained with two decision variables, the logarithm of the posterior probability ratio and the change in the mean-squared deviation from the reconstruction. The method of evaluation is based on the results of a numerical testing procedure in which the stated discrimination task is carried out on reconstructions of a randomly generated sequence of images. The ability to perform the Rayleigh task is summarized in terms of a discrimination index that is derived from the area under the receiver-operating characteristic (ROC) curve. We find that the use of the posterior probability does not result in better performance of the Rayleigh task than the mean-squared deviation from the reconstruction. 10 refs., 6 figs.
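A discrimination index derived from the area under the ROC curve can be computed directly from two samples of a decision variable via the Mann-Whitney statistic; one common choice is d_A = sqrt(2) · Phi⁻¹(AUC). The scores below are invented toy data, not the study's reconstructions.

```python
import math
from statistics import NormalDist

def auc(pos, neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the fraction
    of (positive, negative) score pairs the decision variable orders
    correctly, with ties counted as half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def discrimination_index(pos, neg):
    """d_A = sqrt(2) * Phi^{-1}(AUC), a discrimination index derived from
    the ROC area."""
    return math.sqrt(2) * NormalDist().inv_cdf(auc(pos, neg))

# Toy decision-variable samples for "line" vs "point pair" reconstructions.
line_scores = [2.1, 1.8, 2.5, 1.9, 2.2]
pair_scores = [1.0, 1.4, 2.0, 1.2, 1.7]
print(round(auc(line_scores, pair_scores), 2))   # → 0.92
```

An AUC of 0.5 means the decision variable carries no discrimination information; values near 1 mean the Rayleigh task is performed almost perfectly.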
Value and probability coding in a feedback-based learning task utilizing food rewards
Lempert, Karolina M.
2014-01-01
For the consequences of our actions to guide behavior, the brain must represent different types of outcome-related information. For example, an outcome can be construed as negative because an expected reward was not delivered or because an outcome of low value was delivered. Thus behavioral consequences can differ in terms of the information they provide about outcome probability and value. We investigated the role of the striatum in processing probability-based and value-based negative feedback by training participants to associate cues with food rewards and then employing a selective satiety procedure to devalue one food outcome. Using functional magnetic resonance imaging, we examined brain activity related to receipt of expected rewards, receipt of devalued outcomes, omission of expected rewards, omission of devalued outcomes, and expected omissions of an outcome. Nucleus accumbens activation was greater for rewarding outcomes than devalued outcomes, but activity in this region did not correlate with the probability of reward receipt. Activation of the right caudate and putamen, however, was largest in response to rewarding outcomes relative to expected omissions of reward. The dorsal striatum (caudate and putamen) at the time of feedback also showed a parametric increase correlating with the trialwise probability of reward receipt. Our results suggest that the ventral striatum is sensitive to the motivational relevance, or subjective value, of the outcome, while the dorsal striatum codes for a more complex signal that incorporates reward probability. Value and probability information may be integrated in the dorsal striatum, to facilitate action planning and allocation of effort. PMID:25339705
Nichols, J.D.; Sauer, J.R.; Pollock, K.H.; Hestbeck, J.B.
1992-01-01
In stage-based demography, animals are often categorized into size (or mass) classes, and size-based probabilities of surviving and changing mass classes must be estimated before demographic analyses can be conducted. In this paper, we develop two procedures for the estimation of mass transition probabilities from capture-recapture data. The first approach uses a multistate capture-recapture model that is parameterized directly with the transition probabilities of interest. Maximum likelihood estimates are then obtained numerically using program SURVIV. The second approach involves a modification of Pollock's robust design. Estimation proceeds by conditioning on animals caught in a particular class at time i, and then using closed models to estimate the number of these that are alive in other classes at i + 1. Both methods are illustrated by application to meadow vole, Microtus pennsylvanicus, capture-recapture data. The two methods produced reasonable estimates that were similar. Advantages of these two approaches include the directness of estimation, the absence of need for restrictive assumptions about the independence of survival and growth, the testability of assumptions, and the testability of related hypotheses of ecological interest (e.g., the hypothesis of temporal variation in transition probabilities).
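The first approach's direct parameterization can be illustrated with the simplest case: with perfect detection, the maximum-likelihood mass-transition probabilities are just row-normalized transition counts. The multistate model in the paper additionally handles imperfect capture probability, which this sketch ignores, and the counts below are invented.

```python
def transition_mle(transitions, classes):
    """Maximum-likelihood transition probabilities from observed
    (class_at_i, class_at_i+1) pairs: row-normalised transition counts.
    Assumes every animal is detected on both occasions."""
    counts = {a: {b: 0 for b in classes} for a in classes}
    for a, b in transitions:
        counts[a][b] += 1
    probs = {}
    for a in classes:
        n = sum(counts[a].values())
        probs[a] = {b: counts[a][b] / n for b in classes} if n else None
    return probs

# Voles recaptured in mass classes S(mall) and L(arge) on two occasions.
obs = [("S", "S")] * 12 + [("S", "L")] * 8 + [("L", "L")] * 15 + [("L", "S")] * 5
p = transition_mle(obs, classes=("S", "L"))
print(p["S"]["L"])   # → 0.4
```

The multistate model generalizes this by jointly estimating survival, capture, and transition parameters, so the counts are corrected for animals alive but not caught.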
Delavande, Adeline; Rohwedder, Susann
2013-01-01
Cross-country comparisons of differential survival by socioeconomic status (SES) are useful in many domains. Yet, to date, such studies have been rare. Reliably estimating differential survival in a single country has been challenging because it requires rich panel data with a large sample size. Cross-country estimates have proven even more difficult because the measures of SES need to be comparable internationally. We present an alternative method for acquiring information on differential survival by SES. Rather than using observations of actual survival, we relate individuals’ subjective probabilities of survival to SES variables in cross section. To show that subjective survival probabilities are informative proxies for actual survival when estimating differential survival, we compare estimates of differential survival based on actual survival with estimates based on subjective probabilities of survival for the same sample. The results are remarkably similar. We then use this approach to compare differential survival by SES for 10 European countries and the United States. Wealthier people have higher survival probabilities than those who are less wealthy, but the strength of the association differs across countries. Nations with a smaller gradient appear to be Belgium, France, and Italy, while the United States, England, and Sweden appear to have a larger gradient. PMID:22042664
Doubravsky, Karel; Dohnal, Mirko
2015-01-01
Complex decision-making tasks of different natures, e.g. in economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest much time in the study of complex formal theories; they require decisions that can be (re)checked by human-like common-sense reasoning. One important problem in realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing input information items (IIIs) can be generated, mainly from decision tree topologies, and integrated into incomplete data sets. The algorithm is based on an easy-to-understand heuristic, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known (fuzzy) probabilities, are usually available, meaning that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles the topology-related heuristic with the additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662
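The core heuristic, that a longer decision tree sub-path is less probable, can be sketched by weighting each path geometrically by its length. The decay constant below is a hypothetical illustration, not a value from the paper, which instead reconciles such topology-based weights with fuzzy probabilities via fuzzy linear programming:

```python
def path_probabilities(paths, decay=0.5):
    """Heuristic from the abstract: a longer decision-tree sub-path is
    less probable. Each path (a tuple of node labels) gets the weight
    decay**length, and weights are normalized into probabilities.
    'decay' is a hypothetical tuning parameter."""
    weights = {p: decay ** len(p) for p in paths}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# hypothetical sub-paths of increasing length
probs = path_probabilities([("A",), ("A", "B"), ("A", "B", "C")])
```

The resulting probabilities decrease monotonically with path length and sum to one, which is all the heuristic requires under total ignorance.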
NASA Astrophysics Data System (ADS)
Toroody, Ahmad Bahoo; Abaiee, Mohammad Mahdi; Gholamnia, Reza; Ketabdari, Mohammad Javad
2016-07-01
Owing to the increase in unprecedented accidents with new root causes in almost all operational areas, the importance of risk management has dramatically risen. Risk assessment, one of the most significant aspects of risk management, has a substantial impact on the system-safety level of organizations, industries, and operations. If the causes of all kinds of failure and the interactions between them are considered, effective risk assessment can be highly accurate. A combination of traditional risk assessment approaches and modern scientific probability methods can help in realizing better quantitative risk assessment methods. Most researchers face the problem of minimal field data with respect to the probability and frequency of each failure. Because of this limitation in the availability of epistemic knowledge, it is important to conduct epistemic estimations by applying the Bayesian theory for identifying plausible outcomes. In this paper, we propose an algorithm and demonstrate its application in a case study for a light-weight lifting operation in the Persian Gulf of Iran. First, we identify potential accident scenarios and present them in an event tree format. Next, excluding human error, we use the event tree to roughly estimate the prior probability of other hazard-promoting factors using a minimal amount of field data. We then use the Success Likelihood Index Method (SLIM) to calculate the probability of human error. On the basis of the proposed event tree, we use the Bayesian network of the provided scenarios to compensate for the lack of data. Finally, we determine the resulting probability of each event based on its evidence in the epistemic estimation format by building on two Bayesian network types: the probability of hazard promotion factors and the Bayesian theory. The study results indicate that despite the lack of available information on the operation of floating objects, a satisfactory result can be achieved using epistemic data.
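The epistemic estimation step, updating a sparse failure frequency with prior belief, can be sketched with a Beta-Binomial conjugate update. This is a generic illustration with hypothetical counts, not the paper's full Bayesian-network treatment, and the SLIM human-error step is omitted:

```python
def beta_update(prior_a, prior_b, failures, trials):
    """Beta-Binomial conjugate update: returns the posterior mean
    failure probability given 'failures' observed in 'trials'.
    A generic epistemic-estimation sketch for sparse field data."""
    post_a = prior_a + failures
    post_b = prior_b + trials - failures
    return post_a / (post_a + post_b)

# vague Beta(1, 1) prior; hypothetically, 1 failure observed in 20 lifts
p = beta_update(1, 1, 1, 20)
```

With so little data the prior still matters: the posterior mean 2/22 sits between the raw frequency 1/20 and the prior mean 1/2, which is exactly the behaviour wanted when field data are minimal.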
NASA Astrophysics Data System (ADS)
Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei
2011-04-01
Detection probability is an important index for representing and estimating target viability, and it provides a basis for target recognition and decision-making. In practice, however, obtaining detection probabilities directly requires a great deal of time and manpower, and because interpreters differ in practical knowledge and experience, the data obtained often vary widely. By studying the relationship between image features and perceptual quantities through psychological experiments, a probability model has been established, as follows. First, four image features that directly affect detection were extracted and quantified, and four feature similarity degrees between target and background were defined. Second, the relationship between each single image-feature similarity degree and the perceptual quantity was established on psychological principles, and target-interpretation experiments were designed involving about five hundred interpreters and two hundred images. To reduce the correlation between image features, many synthetic images were generated, each differing in only a single feature: brightness, chromaticity, texture, or shape. By analysing and fitting a large body of experimental data, the model parameters were determined. Finally, by applying statistical decision theory to the experimental results, the relationship between perceptual quantity and target detection probability was found. Verified against a large number of target interpretations in practice, the model yields target detection probabilities quickly and objectively.
A Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information
NASA Astrophysics Data System (ADS)
Lian, Shizhong; Chen, Jiangping; Luo, Minghai
2016-06-01
True information is lost in some TM images because of blocking clouds and missing data stripes, so water information cannot be accurately extracted from them. Because water is continuously distributed under natural conditions, this paper proposes a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different kinds of disturbing information, from clouds and from missing data stripes, are simulated, and water information is extracted from the simulated images using global histogram matching, local histogram matching, and the probability-based statistical method. Experiments show that this method yields a smaller Areal Error and a higher Boundary Recall than the conventional methods.
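One probability-based classification step can be sketched with Bayes' rule applied to per-class grey-level histograms. This is an illustrative stand-in rather than the paper's exact statistical formulation, and the training samples below are hypothetical:

```python
from collections import Counter

def water_probability(dn, water_sample, land_sample, p_water=0.5):
    """P(water | DN) by Bayes' rule on empirical per-class histograms
    of grey levels (digital numbers). An illustrative sketch of a
    probability-based pixel classifier; hypothetical, not the paper's
    exact method."""
    hw = Counter(water_sample)   # water training histogram
    hl = Counter(land_sample)    # land training histogram
    f_w = hw[dn] / len(water_sample)
    f_l = hl[dn] / len(land_sample)
    denom = f_w * p_water + f_l * (1 - p_water)
    return f_w * p_water / denom if denom else 0.0

# hypothetical grey levels from known water and land pixels
water_sample = [10, 10, 11, 12]
land_sample = [40, 41, 41, 42]
p = water_probability(11, water_sample, land_sample)
```

A pixel whose grey level appears only in the water histogram gets probability 1, one seen only over land gets 0, and overlapping levels fall in between according to the class-conditional frequencies.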
NASA Astrophysics Data System (ADS)
Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru
2015-10-01
An integrated method of a proper orthogonal decomposition based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at a Reynolds number of 1000. The PF and EnKF are employed to estimate temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of PF for representing a PDF compared to EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders of magnitude faster than the reference numerical simulation based on the Navier-Stokes equations.
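A single analysis step of a bootstrap particle filter can be sketched as follows. A scalar state stands in for the ROM's POD temporal coefficients, and the Gaussian likelihood and systematic resampling used here are common PF choices, not necessarily those of the paper:

```python
import math
import random

def pf_update(particles, obs, obs_std):
    """One particle-filter analysis step: weight each particle by the
    Gaussian likelihood of the observation, then apply systematic
    resampling. Minimal sketch; a scalar state stands in for the POD
    temporal coefficients estimated in the paper."""
    w = [math.exp(-0.5 * ((obs - x) / obs_std) ** 2) for x in particles]
    total = sum(w)
    w = [wi / total for wi in w]          # normalized importance weights
    rng = random.Random(0)                # fixed seed for reproducibility
    n = len(particles)
    u0 = rng.random() / n
    out, cum, j = [], w[0], 0
    for i in range(n):
        u = u0 + i / n                    # evenly spaced resampling points
        while u > cum:
            j += 1
            cum += w[j]
        out.append(particles[j])
    return out

# particle far from the observation should be resampled away
post = pf_update([0.0, 1.0, 2.0], obs=1.9, obs_std=0.5)
```

Because the likelihood of the particle at 2.0 dominates, resampling concentrates the ensemble near the observation, which is the flexibility in representing a PDF that the abstract credits to the PF.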
Frank, S A; Smith, E
2011-03-01
Commonly observed patterns typically follow a few distinct families of probability distributions. Over one hundred years ago, Karl Pearson provided a systematic derivation and classification of the common continuous distributions. His approach was phenomenological: a differential equation that generated common distributions without any underlying conceptual basis for why common distributions have particular forms and what explains the familial relations. Pearson's system and its descendants remain the most popular systematic classification of probability distributions. Here, we unify the disparate forms of common distributions into a single system based on two meaningful and justifiable propositions. First, distributions follow maximum entropy subject to constraints, where maximum entropy is equivalent to minimum information. Second, different problems associate magnitude to information in different ways, an association we describe in terms of the relation between information invariance and measurement scale. Our framework relates the different continuous probability distributions through the variations in measurement scale that change each family of maximum entropy distributions into a distinct family. From our framework, future work in biology can consider the genesis of common patterns in a new and more general way. Particular biological processes set the relation between the information in observations and magnitude, the basis for information invariance, symmetry and measurement scale. The measurement scale, in turn, determines the most likely probability distributions and observed patterns associated with particular processes. This view presents a fundamentally derived alternative to the largely unproductive debates about neutrality in ecology and evolution. PMID:21265914
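The unifying claim (distributions as maximum entropy subject to constraints, with the measurement scale setting the family) can be stated compactly. This is the standard maximum-entropy form, not necessarily the authors' exact notation:

```latex
% Maximize entropy subject to constraints on the averages \langle f_i \rangle:
p(y) \;\propto\; \exp\!\Big(-\textstyle\sum_i \lambda_i f_i(y)\Big).
% A constraint on \langle y \rangle (linear scale) gives the exponential form
% p(y) \propto e^{-\lambda y}; the same constraint read on a logarithmic
% scale, \langle \log y \rangle, gives the power law p(y) \propto y^{-\lambda}.
```

The change of measurement scale, not a change of constraint, is what moves the solution from one family to another, which is the sense in which the framework relates the common continuous distributions.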
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, in which an IV probability is defined by a pair of set-theoretic functions that satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller, more manageable pieces based on global statistical correlation information. By this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
Finding significantly connected voxels based on histograms of connection strengths
NASA Astrophysics Data System (ADS)
Kasenburg, Niklas; Pedersen, Morten Vester; Darkner, Sune
2016-03-01
We explore a new approach for structural connectivity based segmentations of subcortical brain regions. Connectivity based segmentations are usually based on fibre connections from a seed region to predefined target regions. We present a method for finding significantly connected voxels based on the distribution of connection strengths. Paths from seed voxels to all voxels in a target region are obtained from a shortest-path tractography. For each seed voxel we approximate the distribution with a histogram of path scores. We hypothesise that the majority of estimated connections are false-positives and that their connection strength is distributed differently from true-positive connections. Therefore, an empirical null-distribution is defined for each target region as the average normalized histogram over all voxels in the seed region. Single histograms are then tested against the corresponding null-distribution and significance is determined using the false discovery rate (FDR). Segmentations are based on significantly connected voxels and their FDR. In this work we focus on the thalamus and the target regions were chosen by dividing the cortex into a prefrontal/temporal zone, motor zone, somatosensory zone and a parieto-occipital zone. The obtained segmentations consistently show a sparse number of significantly connected voxels that are located near the surface of the anterior thalamus over a population of 38 subjects.
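The significance-testing step relies on FDR control. A minimal sketch of the Benjamini-Hochberg step-up procedure is shown below, assuming per-voxel p-values have already been computed by testing each score histogram against the target region's empirical null distribution:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns the indices of
    hypotheses rejected at FDR level alpha. Sketch of the FDR-control
    step only; in the paper the p-values would come from testing each
    seed voxel's histogram of path scores against the null."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        # find the largest rank whose p-value clears its threshold
        if pvals[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

# hypothetical per-voxel p-values
sig = benjamini_hochberg([0.001, 0.02, 0.04, 0.8])
```

Note the step-up logic: a p-value that misses its own threshold can still be rejected if a larger one clears its threshold further down the sorted list.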
Probability based earthquake load and resistance factor design criteria for offshore platforms
Bea, R.G.
1996-12-31
This paper describes a probability (reliability) based formulation for determining earthquake Load and Resistance Factor Design (LRFD) parameters for conventional steel, pile-supported, tubular-membered platforms, proposed as a basis for earthquake design criteria and guidelines for offshore platforms intended to have worldwide applicability. The formulation is illustrated by application to platforms located in five areas: offshore California, Venezuela (Rio Caribe), the East Coast of Canada, the Caspian Sea (Azeri), and the Norwegian sector of the North Sea.
NASA Astrophysics Data System (ADS)
Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei
2014-04-01
Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the logistic map, to generate pseudo-random numbers mapped to the design variables for global optimization. Many existing studies have indicated that a COA can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: the probability distribution property and search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of a chaotic sequence are represented by its probability density function (PDF) and Lyapunov exponent, respectively. The computational performance of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents is compared, where BFGS is a quasi-Newton method for local optimization. Several multimodal benchmark examples illustrate that the probability distribution property and search speed of the chaotic sequences from different chaotic maps significantly affect the global search capability and optimization efficiency of a COA. To achieve high efficiency, it is recommended to adopt an appropriate chaotic map that generates sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
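The two quantities the comparison rests on can be illustrated for the logistic map at r = 4, whose Lyapunov exponent is ln 2 and whose invariant PDF, 1/(π√(x(1-x))), is markedly non-uniform. A short sketch with illustrative parameter choices:

```python
import math

def logistic_lyapunov(x0=0.3, r=4.0, n=10000, burn=100):
    """Estimate the Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1-2x)| along the
    orbit. For r = 4 the exact value is ln 2; the orbit's invariant
    density (the sequence's PDF) is 1/(pi*sqrt(x*(1-x))), i.e. it
    piles up near 0 and 1 rather than being uniform."""
    x = x0
    for _ in range(burn):          # discard transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

lam = logistic_lyapunov()
```

A larger Lyapunov exponent means nearby sequence values separate faster, which is the "search speed" interpretation used in the abstract.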
A Comparison Of The Mycin Model For Reasoning Under Uncertainty To A Probability Based Model
NASA Astrophysics Data System (ADS)
Neapolitan, Richard E.
1986-03-01
Rule-based expert systems are those in which a certain number of IF-THEN rules are assumed to hold. Based on the verity of some assertions, the rules deduce new conclusions. In many cases, neither the rules nor the assertions are known with certainty. The system must then be able to obtain a measure of partial belief in the conclusion based upon measures of partial belief in the assertions and the rule. A problem arises when two or more rules (items of evidence) argue for the same conclusion. As has been proven, certain assumptions concerning the independence of the two items of evidence are necessary before the certainties can be combined. In the current paper, it is shown how the well-known MYCIN model combines the certainties from two items of evidence. The validity of the model is then proven based on the model's assumptions of independence of evidence. The assumptions are that the evidence must be independent in the whole space, in the space of the conclusion, and in the space of the complement of the conclusion. Next a probability-based model is described and compared to the MYCIN model. It is proven that the probabilistic assumptions for this model are weaker (independence is necessary only in the space of the conclusion and the space of the complement of the conclusion), and therefore more appealing. An example is given to show how the added assumption in the MYCIN model is, in fact, the most restrictive assumption. It is also proven that, when two rules argue for the same conclusion, the combinatoric method in a MYCIN version of the probability-based model yields a higher combined certainty than that in the MYCIN model. It is finally concluded that the probability-based model, in light of the comparison, is the better choice.
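The MYCIN combination discussed here, for two positive certainty factors supporting the same conclusion, is the standard formula CF = CF1 + CF2(1 - CF1):

```python
def mycin_combine(cf1, cf2):
    """MYCIN's rule for combining two positive certainty factors
    (0 <= CF <= 1) that argue for the same conclusion:
    CF = CF1 + CF2 * (1 - CF1)."""
    return cf1 + cf2 * (1 - cf1)

# two rules independently supporting the same conclusion
cf = mycin_combine(0.6, 0.5)
```

The combined certainty always exceeds either input but never reaches 1 unless one input is already 1; the probability-based alternative analysed in the paper combines the same evidence under the weaker independence assumptions described above.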
NASA Astrophysics Data System (ADS)
Smith, L. A.
2007-12-01
We question the relevance of climate-model-based Bayesian (or other) probability statements for decision support and impact assessment on spatial scales less than continental and temporal averages less than seasonal. Scientific assessment of higher-resolution space and time scale information is urgently needed, given the commercial availability of "products" at high spatiotemporal resolution, their provision by nationally funded agencies for use both in industry decision making and governmental policy support, and their presentation to the public as matters of fact. Specifically, we seek to establish necessary conditions for probability forecasts (projections conditioned on a model structure and a forcing scenario) to be taken seriously as reflecting the probability of future real-world events. We illustrate how risk management can profitably employ imperfect models of complicated chaotic systems, following NASA's study of near-Earth PHOs (Potentially Hazardous Objects). Our climate models will never be perfect; nevertheless, the space and time scales on which they provide decision-support-relevant information are expected to improve with the models themselves. Our aim is to establish a set of baselines of internal consistency; these are merely necessary conditions (not sufficient conditions) that physics-based state-of-the-art models are expected to pass if their output is to be judged decision-support relevant. Probabilistic similarity is proposed as one goal which can be obtained even when our models are not empirically adequate. In short, probabilistic similarity requires that, given inputs similar to today's empirical observations and observational uncertainties, we expect future models to produce similar forecast distributions. Expert opinion on the space and time scales on which we might reasonably expect probabilistic similarity may prove of much greater utility than expert elicitation of uncertainty in parameter values in a model that is not empirically adequate.
A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography
NASA Astrophysics Data System (ADS)
Sun, S.; Chen, C.; WANG, H.; Wang, Q.
2014-12-01
The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information generally consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography does not need any a priori information or large inversion matrix operations, and its result can describe the sources entirely and clearly, especially those whose distribution is complex and irregular. We therefore attempt to use the a priori information extracted from probability tomography results to constrain the inversion for physical properties, taking magnetic anomaly data as an example. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, a characteristic also present in their probability tomography results. So we use some rules to combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y and ∂ΔΤ/∂z into a new result from which a priori information is extracted, and then incorporate that information into the model objective function as spatial weighting functions to invert the final magnetic susceptibility. Synthetic magnetic examples with and without a priori information extracted from the probability tomography results were compared, showing that the constrained results are more concentrated and resolve the source body edges with higher resolution. The method is finally applied, with good performance, to field-measured ΔΤ data from an iron mine in China. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M
Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports
NASA Technical Reports Server (NTRS)
Bauman, William H., III; Volmer, Matthew; Sharp, David; Spratt, Scott; Lafosse, Richard A.
2007-01-01
Objective: provide forecasters with a "first guess" climatological lightning probability tool: (1) focus on Space Shuttle landings and NWS TAFs; (2) four circles around each site: 5, 10, 20 and 30 n mi; (3) three time intervals: hourly, every 3 hr and every 6 hr. It is based on: (1) NLDN gridded data; (2) flow regime; (3) the warm-season months of May-September for the years 1989-2004. Gridded data and the available code yield squares, not circles. Over 850 spreadsheets were converted into a manageable, user-friendly web-based GUI.
2012-02-24
GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
NASA Astrophysics Data System (ADS)
Cheng, Wei Ping; Jia, Yafei
2010-04-01
A backward location probability density function (BL-PDF) method capable of identifying location of point sources in surface waters is presented in this paper. The relation of forward location probability density function (FL-PDF) and backward location probability density, based on adjoint analysis, is validated using depth-averaged free-surface flow and mass transport models and several surface water test cases. The solutions of the backward location PDF transport equation agreed well to the forward location PDF computed using the pollutant concentration at the monitoring points. Using this relation and the distribution of the concentration detected at the monitoring points, an effective point source identification method is established. The numerical error of the backward location PDF simulation is found to be sensitive to the irregularity of the computational meshes, diffusivity, and velocity gradients. The performance of identification method is evaluated regarding the random error and number of observed values. In addition to hypothetical cases, a real case was studied to identify the source location where a dye tracer was instantaneously injected into a stream. The study indicated the proposed source identification method is effective, robust, and quite efficient in surface waters; the number of advection-diffusion equations needed to solve is equal to the number of observations.
A multivariate copula-based framework for dealing with hazard scenarios and failure probabilities
NASA Astrophysics Data System (ADS)
Salvadori, G.; Durante, F.; De Michele, C.; Bernardi, M.; Petrella, L.
2016-05-01
This paper is of methodological nature, and deals with the foundations of Risk Assessment. Several international guidelines have recently recommended to select appropriate/relevant Hazard Scenarios in order to tame the consequences of (extreme) natural phenomena. In particular, the scenarios should be multivariate, i.e., they should take into account the fact that several variables, generally not independent, may be of interest. In this work, it is shown how a Hazard Scenario can be identified in terms of (i) a specific geometry and (ii) a suitable probability level. Several scenarios, as well as a Structural approach, are presented, and due comparisons are carried out. In addition, it is shown how the Hazard Scenario approach illustrated here is well suited to cope with the notion of Failure Probability, a tool traditionally used for design and risk assessment in engineering practice. All the results outlined throughout the work are based on the Copula Theory, which turns out to be a fundamental theoretical apparatus for doing multivariate risk assessment: formulas for the calculation of the probability of Hazard Scenarios in the general multidimensional case (d≥2) are derived, and worthy analytical relationships among the probabilities of occurrence of Hazard Scenarios are presented. In addition, the Extreme Value and Archimedean special cases are dealt with, relationships between dependence ordering and scenario levels are studied, and a counter-example concerning Tail Dependence is shown. Suitable indications for the practical application of the techniques outlined in the work are given, and two case studies illustrate the procedures discussed in the paper.
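For the bivariate "OR" Hazard Scenario, the exceedance probability follows directly from the copula of the two variables. A minimal sketch, using the independence copula purely for illustration (real hazard variables are generally dependent, which is the point of the copula machinery):

```python
def or_scenario_prob(u, v, copula):
    """Probability of the bivariate 'OR' hazard scenario,
    P(X > x or Y > y) = 1 - C(F_X(x), F_Y(y)),
    where u = F_X(x) and v = F_Y(y) are the marginal probabilities
    and C is the joint copula."""
    return 1.0 - copula(u, v)

def independence_copula(u, v):
    """C(u, v) = u * v; used here only as an illustrative choice."""
    return u * v

# two 100-year marginals (u = v = 0.99) under assumed independence
p = or_scenario_prob(0.99, 0.99, independence_copula)
```

Under independence the "OR" scenario is exceeded with probability 0.0199 per year, nearly double the 0.01 of either margin alone; a copula with tail dependence would change this figure, which is why scenario probabilities cannot be read off the marginals.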
Forestry inventory based on multistage sampling with probability proportional to size
NASA Technical Reports Server (NTRS)
Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.
1983-01-01
A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. The LANDSAT data, Panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
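Selection with probability proportional to size (PPS) is commonly implemented with cumulative size totals. A small sketch with hypothetical unit sizes, using with-replacement sampling, which may differ from the exact survey design used in the inventory:

```python
import random

def pps_sample(sizes, n, seed=0):
    """Draw n unit indices with probability proportional to size,
    with replacement, by inverting the cumulative size totals.
    Sketch of the PPS selection step in multistage forest sampling;
    sizes are hypothetical."""
    rng = random.Random(seed)
    cum, total = [], 0
    for s in sizes:
        total += s
        cum.append(total)
    picks = []
    for _ in range(n):
        r = rng.uniform(0, total)
        # first cumulative total at or above r identifies the unit
        picks.append(next(i for i, c in enumerate(cum) if r <= c))
    return picks

# unit 0 is five times larger, so it should dominate the draws
picks = pps_sample([5, 1, 1, 1], 4)
```

Each draw lands on unit i with probability size_i / total, so larger stands (e.g. larger photo-interpreted areas) enter the sample more often, exactly as PPS intends.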
Study of fusion probabilities with halo nuclei using different proximity based potentials
NASA Astrophysics Data System (ADS)
Kumari, Raj
2013-11-01
We study fusion of halo nuclei with heavy targets using proximity based potentials due to Aage Winther (AW) 95, Bass 80 and Proximity 2010. In order to consider the extended matter distribution of halo nuclei, the nuclei radii borrowed from cross section measurements are included in these potentials. Our study reveals that the barrier heights are effectively reduced and fusion cross sections are appreciably enhanced by including extended radii of these nuclei. We also find that the extended sizes of halos contribute towards enhancement of fusion probabilities in case of proton halo nuclei, but, contribute to transfer or break-up process rather than fusion yield in case of neutron halo nuclei.
Probability-based damage detection using model updating with efficient uncertainty propagation
NASA Astrophysics Data System (ADS)
Xu, Yalan; Qian, Yu; Chen, Jianjun; Song, Gangbing
2015-08-01
Model updating method has received increasing attention in damage detection of structures based on measured modal parameters. In this article, a probability-based damage detection procedure is presented, in which the random factor method for non-homogeneous random field is developed and used as the forward propagation to analytically evaluate covariance matrices in each iteration step of stochastic model updating. An improved optimization algorithm is introduced to guarantee the convergence and reduce the computational effort, in which the design variables are restricted in search region by region truncation of each iteration step. The developed algorithm is illustrated by a simulated 25-bar planar truss structure and the results have been compared and verified with those obtained from Monte Carlo simulation. In order to assess the influences of uncertainty sources on the results of model updating and damage detection of structures, a comparative study is also given under different cases of uncertainties, that is, structural uncertainty only, measurement uncertainty only and combination of the two. The simulation results show the proposed method can perform well in stochastic model updating and probability-based damage detection of structures with less computational effort.
NASA Astrophysics Data System (ADS)
Wang, Hu; Li, Enying; Li, G. Y.
2011-03-01
This paper presents a crashworthiness design optimization method based on a metamodeling technique. Crashworthiness optimization is a highly nonlinear and large-scale problem, which involves various nonlinearities, such as geometry, material and contact, and needs a large number of expensive evaluations. In order to obtain a robust approximation efficiently, a probability-based least-squares support vector regression is suggested to construct metamodels by considering structural risk minimization. Further, to save computational cost, an intelligent sampling strategy is applied to generate sample points at the design of experiment (DOE) stage. In this paper, a cylinder and a full-vehicle frontal collision are involved. The results demonstrate that the proposed metamodel-based optimization is efficient and effective in solving crashworthiness design optimization problems.
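The least-squares SVR at the core of such a metamodel can be sketched in a few lines. The sketch below is a plain LS-SVR with an RBF kernel, solving the standard LS-SVM linear system; it is not the authors' probability-based variant, and the data and hyperparameters are purely illustrative.

```python
import numpy as np

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVR: solve the LS-SVM linear system for alphas and bias.

    System: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    """
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# Toy metamodel: approximate a nonlinear response from a few "simulations"
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 1))
y = np.sin(2 * X[:, 0]) + 0.01 * rng.standard_normal(40)
b, alpha = lssvr_fit(X, y)
pred = lssvr_predict(X, b, alpha, X)
```

In a crashworthiness setting the training pairs would come from expensive finite-element runs selected by the DOE stage; the metamodel then replaces the solver inside the optimization loop.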
3D model retrieval using probability density-based shape descriptors.
Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis
2009-06-01
We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories. PMID:19372614
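The density-based descriptor idea can be illustrated with a minimal sketch: estimate a KDE of local features and sample it on a fixed grid, so that every model yields a fixed-length vector that can be compared directly. The function name, the 1-D toy "shapes" and the grid are assumptions for illustration; the paper's pipeline uses multivariate surface features and the fast Gauss transform rather than plain `gaussian_kde`.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_descriptor(features, grid):
    """Evaluate a KDE of local features on a fixed grid.

    Sampling the density at the same grid points for every model yields
    fixed-length descriptor vectors that can be compared directly.
    """
    kde = gaussian_kde(features.T)       # gaussian_kde expects (dims, n)
    desc = kde(grid.T)
    return desc / desc.sum()             # normalize to unit mass on the grid

rng = np.random.default_rng(1)
# Two toy "shapes": 1-D feature samples from different distributions
shape_a = rng.normal(0.0, 1.0, size=(500, 1))
shape_b = rng.normal(2.0, 1.0, size=(500, 1))
grid = np.linspace(-4, 6, 64).reshape(-1, 1)
da = density_descriptor(shape_a, grid)
db = density_descriptor(shape_b, grid)
# L1 distance between descriptors as a simple dissimilarity measure
dist = np.abs(da - db).sum()
```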
Probability voting and SVM-based vehicle detection in complex background airborne traffic video
NASA Astrophysics Data System (ADS)
Lei, Bo; Li, Qingquan; Zhang, Zhijie; Wang, Chensheng
2012-11-01
This paper introduces a novel vehicle detection method combining probability-voting-based hypothesis generation (HG) and SVM-based hypothesis verification (HV), specialized for complex-background airborne traffic video. In the HG stage, a statistics-based road area extraction method is applied and the lane marks are eliminated. The remaining areas are clustered, and the Canny algorithm is then performed to detect edges in the clustered areas. A voting strategy is designed to detect rectangular objects in the scene. In the HV stage, every possible vehicle area is rotated to align the vehicle along the vertical direction, and its vertical and horizontal gradients are calculated. An SVM is adopted to classify vehicle and non-vehicle areas. The proposed method has been applied to several traffic scenes, and the experimental results show that it is effective and accurate for vehicle detection.
Amir, El-Ad David; Kalisman, Nir; Keasar, Chen
2008-07-01
Rotatable torsion angles are the major degrees of freedom in proteins. Adjacent angles are highly correlated, and energy terms that rely on these correlations are intensively used in molecular modeling. However, the utility of torsion-based terms is not yet fully exploited. Many of these terms do not capture the full scale of the correlations. Other terms, which rely on lookup tables, cannot be used in the context of force-driven algorithms because they are not fully differentiable. This study aims to extend the usability of torsion terms by presenting a set of high-dimensional and fully differentiable energy terms that are derived from high-resolution structures. The set includes terms that describe backbone conformational probabilities and propensities, side-chain rotamer probabilities, and an elaborate term that couples all the torsion angles within the same residue. The terms are constructed by cubic spline interpolation with periodic boundary conditions that enable full differentiability and high computational efficiency. We show that the spline implementation does not compromise the accuracy of the original database statistics. We further show that the side-chain relevant terms are compatible with established rotamer probabilities. Despite their very local characteristics, the new terms are often able to identify native and native-like structures within decoy sets. Finally, force-based minimization of NMR structures with the new terms improves their torsion angle statistics with minor structural distortion (0.5 Å RMSD on average). The new terms are freely available in the MESHI molecular modeling package. The spline coefficients are also available as a documented MATLAB file. PMID:18186478
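The spline construction described above can be sketched with SciPy's periodic cubic splines. The tabulated `energy` curve below is an invented stand-in for the database-derived statistics; the point is the periodic boundary condition across the −180°/+180° seam and the analytic derivative available for force-based minimization.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Tabulated pseudo-energy over a torsion angle (degrees); the endpoint value
# must repeat so the spline can close the circle.
angles = np.linspace(-180, 180, 37)                 # 10-degree bins
energy = np.cos(np.radians(angles)) + 0.3 * np.cos(3 * np.radians(angles))
energy[-1] = energy[0]                              # enforce periodicity

spline = CubicSpline(angles, energy, bc_type='periodic')

# The spline is smooth across the seam, so its analytic first derivative
# can drive gradient/force-based algorithms.
e_val = spline(179.5)
force = -spline(179.5, 1)                           # negative first derivative
```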
NASA Astrophysics Data System (ADS)
Li, Jing; Thyer, Mark; Lambert, Martin; Kuzera, George; Metcalfe, Andrew
2016-02-01
Flood extremes are driven by highly variable and complex climatic and hydrological processes. Observational evidence has identified that seasonality of climate variables has a major impact on flood peaks. However, event-based joint probability approaches for predicting the flood frequency distribution (FFD), which are commonly used in practice, do not commonly incorporate climate seasonality. This study presents an advance in event-based joint probability approaches by incorporating seasonality using the hybrid causative events (HCE) approach. The HCE was chosen because it uses the true causative events of the floods of interest and is able to combine the accuracy of continuous simulation with the computational efficiency of event-based approaches. The incorporation of seasonality is evaluated using a virtual catchment approach at eight sites over a wide range of Australian climate zones, including tropical, temperate, Mediterranean and desert climates (virtual catchment data for the eight sites are freely available via digital repository). The seasonal HCE provided accurate predictions of the FFD at all sites. In contrast, the non-seasonal HCE significantly over-predicted the FFD at some sites. The need to include seasonality was influenced by the magnitude of the seasonal variation in soil moisture and its coherence with the seasonal variation in extreme rainfall. For sites with a low seasonal variation in soil moisture the non-seasonal HCE provided reliable estimates of the FFD. For the remaining sites, it was difficult to predict a priori whether ignoring seasonality would provide a reliable estimate of the FFD; hence it is recommended that the seasonal HCE always be used. The practical implication of this study is that the HCE approach with seasonality is an accurate and efficient event-based joint probability approach for deriving the flood frequency distribution across a wide range of climatologies.
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-01-01
Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses these errors to estimate the probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob, where the software is also freely available, is at: http://calla.rnet.missouri.edu/qprob/. PMID:27041353
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri-Mathias; Molina-Garcia, Angel
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
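The order-selection step can be illustrated with a small sketch. Common libraries ship no Weibull mixture estimator, so a Gaussian mixture stands in below purely to show how AIC/BIC pick the number of components; the data are synthetic stand-ins for normalized aggregated wind power.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic bimodal "aggregated wind power" sample in [0, 1]
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.2, 0.05, 400),
                       rng.normal(0.8, 0.07, 600)]).reshape(-1, 1)

# Fit mixtures of increasing order and record both information criteria
scores = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    scores[k] = (gm.aic(data), gm.bic(data))

# Select the order minimizing BIC (AIC could be used the same way)
best_k_bic = min(scores, key=lambda k: scores[k][1])
```

A Weibull mixture would follow the same loop, with a custom EM step replacing `GaussianMixture.fit` and the log-likelihood feeding the same AIC/BIC formulas.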
A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs.
Liu, Anfeng; Liu, Xiao; Long, Jun
2016-01-01
Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform that enables a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In a TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network and can substantially reduce the number of marking tuples and improve network lifetime. More importantly, a high trust node is selected to store marking tuples, which can avoid the problem of marking information being lost. Experimental results show that the total number of marking tuples can be reduced in a TAPMS scheme, thus improving network lifetime. At the same time, since the marking tuples are stored in high trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%. PMID:27043566
Monte Carlo based protocol for cell survival and tumour control probability in BNCT
NASA Astrophysics Data System (ADS)
Ye, Sung-Joon
1999-02-01
A mathematical model to calculate the theoretical cell survival probability (nominally, the cell survival fraction) is developed to evaluate preclinical treatment conditions for boron neutron capture therapy (BNCT). A treatment condition is characterized by the neutron beam spectra, single or bilateral exposure, and the choice of boron carrier drug (boronophenylalanine (BPA) or boron sulfhydryl hydride (BSH)). The cell survival probability defined from Poisson statistics is expressed in terms of the cell-killing yield, the (n,α) reaction density, and the tolerable neutron fluence. The radiation transport calculation from the neutron source to tumours is carried out using Monte Carlo methods: (i) reactor-based BNCT facility modelling to yield the neutron beam library at an irradiation port; (ii) dosimetry to limit the neutron fluence below a tolerance dose (10.5 Gy-Eq); (iii) calculation of the (n,α) reaction density in tumours. A shallow surface tumour could be effectively treated by single exposure, producing an average cell survival probability of - for probable ranges of the cell-killing yield for the two drugs, while a deep tumour will require bilateral exposure to achieve comparable cell kills at depth. With very pure epithermal beams eliminating thermal, low epithermal and fast neutrons, the cell survival can be decreased by factors of 2-10 compared with
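The Poisson survival model reduces to the zero-count term of a Poisson distribution: a cell survives if it registers no lethal events. A minimal sketch with invented numbers, not the paper's calibrated yields or fluences:

```python
import math

def cell_survival(kill_yield, reaction_density, cell_volume):
    """Poisson survival: probability that a cell registers zero lethal events.

    mean_lethal = kill_yield * reaction_density * cell_volume, and the
    survival probability is the Poisson zero-count term exp(-mean_lethal).
    All quantities here are illustrative placeholders.
    """
    mean_lethal = kill_yield * reaction_density * cell_volume
    return math.exp(-mean_lethal)

# Hypothetical numbers: 0.04 lethal events per reaction, 1e9 reactions/cm^3,
# cell volume 1e-9 cm^3 -> on average 0.04 lethal events per cell
s = cell_survival(0.04, 1e9, 1e-9)
```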
Zhang, Haitao; Chen, Zewei; Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
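The n-step prediction step is ordinary Markov-chain arithmetic: normalize the single-step counts into a row-stochastic matrix and raise it to the n-th power. The transition counts below are hypothetical.

```python
import numpy as np

def normalize_rows(counts):
    """Turn observed transition counts into row-stochastic probabilities."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical single-step transition counts between 4 anonymized regions,
# e.g. mined from sequential rules over a k-anonymity dataset
counts = [[10,  5,  0,  1],
          [ 2, 12,  6,  0],
          [ 0,  4,  9,  7],
          [ 3,  0,  5, 12]]
P = normalize_rows(counts)

# n-step transition probabilities via matrix power (stationary Markov model)
n = 3
P_n = np.linalg.matrix_power(P, n)

# "Rough" prediction: distribution over regions n steps after leaving region 0
rough = P_n[0]
```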
NASA Astrophysics Data System (ADS)
Xue, Ming; Wang, Jiang; Jia, Chenhui; Yu, Haitao; Deng, Bin; Wei, Xile; Che, Yanqiu
2013-03-01
In this paper, we propose a new approach to estimate the unknown parameters and topology of a neuronal network based on an adaptive synchronization control scheme. A virtual neuronal network is constructed as an observer to track the membrane potential of the corresponding neurons in the original network. When the two achieve synchronization, the unknown parameters and topology of the original network are obtained. The method is applied to estimate the real-time status of the connections in a feedforward network, and the neurotransmitter release probability of unreliable synapses is obtained by statistical computation. Numerical simulations are also performed to demonstrate the effectiveness of the proposed adaptive controller. The obtained results may have important implications for system identification in neural science.
A unified optical damage criterion based on the probability density distribution of detector signals
NASA Astrophysics Data System (ADS)
Somoskoi, T.; Vass, Cs.; Mero, M.; Mingesz, R.; Bozoki, Z.; Osvay, K.
2013-11-01
Various methods and procedures have been developed to test laser-induced optical damage. The question naturally arises of what the respective sensitivities of these diverse methods are. To make a suitable comparison, the processing of the measured primary signal has to be at least similar across the various methods, and one needs to establish a proper damage criterion that is universally applicable to every method. We defined damage criteria based on the probability density distribution of the obtained detector signals, determined by the kernel density estimation procedure. We have tested the entire evaluation procedure on four well-known detection techniques: direct observation of the sample by optical microscopy; monitoring of the change in the light-scattering power of the target surface; and detection of the generated photoacoustic waves, both in the bulk of the sample and in the surrounding air.
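The criterion construction can be sketched with a kernel density estimate over toy detector readings. The specific threshold rule below, placing the cut at the density minimum between the two modes, is an assumption for illustration, not necessarily the paper's exact definition.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# Toy detector readings: shots without damage cluster low, damage events high
signals = np.concatenate([rng.normal(1.0, 0.1, 300),   # no damage
                          rng.normal(3.0, 0.4, 60)])   # damage events

kde = gaussian_kde(signals)                 # estimated PDF of detector signals
x = np.linspace(signals.min(), signals.max(), 512)
pdf = kde(x)

# One possible criterion: put the damage threshold at the density minimum
# between the two modes, then flag readings above it.
between = (x > 1.5) & (x < 2.5)
threshold = x[between][np.argmin(pdf[between])]
damage_fraction = float((signals > threshold).mean())
```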
Estimation of the failure probability during EGS stimulation based on borehole data
NASA Astrophysics Data System (ADS)
Meller, C.; Kohl, Th.; Gaucher, E.
2012-04-01
In recent times the search for alternative sources of energy has been fostered by the scarcity of fossil fuels. With its ability to permanently provide electricity or heat with little emission of CO2, geothermal energy will have an important share in the energy mix of the future. Within Europe, scientists have identified many locations with conditions suitable for Enhanced Geothermal System (EGS) projects. In order to provide sufficiently high reservoir permeability, EGS require borehole stimulations prior to installation of power plants (Gérard et al, 2006). Induced seismicity during water injection into EGS reservoirs is a factor that currently can neither be predicted nor controlled. Often, people living near EGS projects are frightened by smaller earthquakes occurring during stimulation or injection. As this fear can lead to widespread disapproval of geothermal power plants, it is desirable to find a way to estimate the probability of fractures shearing when water is injected at a given pressure into a geothermal reservoir. This provides knowledge that enables prediction of the mechanical behavior of a reservoir in response to a change in pore pressure conditions. In the present study, an approach for estimating the shearing probability based on statistical analyses of fracture distribution, orientation and clusters, together with their geological properties, is proposed. Based on geophysical logs of five wells in Soultz-sous-Forêts, France, and with the help of statistical tools, the Mohr criterion, and the geological and mineralogical properties of the host rock and the fracture fillings, correlations between the wells are analyzed. This is achieved with the in-house MATLAB code Fracdens, which enables us to statistically analyze the log files in different ways. With the application of a pore pressure change, the evolution of the critical pressure on the fractures can be determined. A special focus is on the clay fillings of the fractures and how they reduce
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-01
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ . PMID:23163785
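The binomial backbone of such scoring can be sketched as a tail probability: how likely is it that at least the observed number of fragment peaks match by chance? The intensity weighting that ProVerB adds on top is omitted here, and `p_random` and the peak counts are illustrative.

```python
import math
from scipy.stats import binom

def match_score(n_theoretical, n_matched, p_random):
    """Binomial-model score for a peptide-spectrum match.

    p_value = probability of matching at least n_matched of n_theoretical
    fragment peaks by chance (p_random per peak); reported as -log10(p).
    """
    p_value = binom.sf(n_matched - 1, n_theoretical, p_random)
    return -math.log10(p_value)

good = match_score(20, 12, 0.05)   # many matched peaks -> high score
weak = match_score(20, 2, 0.05)    # few matched peaks  -> low score
```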
Sandoval, Santiago; Bertrand-Krajewski, Jean-Luc
2016-06-01
Total suspended solid (TSS) measurements in urban drainage systems are required for several reasons. Aiming to assess uncertainties in the mean TSS concentration due to the influence of sampling intake vertical position and vertical concentration gradients in a sewer pipe, two methods are proposed: a simplified method based on a theoretical vertical concentration profile (SM) and a time series grouping method (TSM). SM is based on flow rate and water depth time series. TSM requires additional TSS time series as input data. All time series are from the Chassieu urban catchment in Lyon, France (time series from 2007 with 2-min time step, 89 rainfall events). The probability of measuring a TSS value lower than the mean TSS along the vertical cross section (TSS underestimation) is about 0.88 with SM and about 0.64 with TSM. TSM shows more realistic TSS underestimation values (about 39 %) than SM (about 269 %). Interquartile ranges (IQR) over the probability values indicate that SM is more uncertain (IQR = 0.08) than TSM (IQR = 0.02). Differences between the two methods are mainly due to simplifications in SM (absence of TSS measurements). SM assumes a significant asymmetry of the TSS concentration profile along the vertical axis in the cross section. This is compatible with the distribution of TSS measurements found in the TSM approach. The methods provide insights towards an indicator of the measurement performance and representativeness for a TSS sampling protocol. PMID:27178049
We examine the proposition that water quality indicator data collected from large scale, probability based assessments of coastal condition such as the US Environmental Protection Agency National Coastal Assessment (NCA) can be used to support water quality criteria development f...
A generic probability based algorithm to derive regional patterns of crops in time and space
NASA Astrophysics Data System (ADS)
Wattenbach, Martin; Oijen, Marcel v.; Leip, Adrian; Hutchings, Nick; Balkovic, Juraj; Smith, Pete
2013-04-01
Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influencing soil erosion, and substantially contributing to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to the site conditions, economic boundary settings, and preferences of individual farmers. However, at a given point in time the pattern of crops in a landscape is determined not only by environmental and socioeconomic conditions but also by compatibility with the crops grown in previous years on the current field and its surrounding cropping area. Crop compatibility is driven by factors like pests and diseases, crop-driven changes in soil structure, and the timing of cultivation steps. Given these effects of crops on the biogeochemical cycle and their interdependence with the mentioned boundary conditions, there is a demand in the regional and global modelling community to account for these regional patterns. Here we present a Bayesian crop distribution generator algorithm that is used to calculate the combined and conditional probability for a crop to appear in time and space using sparse and disparate information. The input information to define the most probable crop per year and grid cell is based on combined probabilities derived from a crop transition matrix representing good agricultural practice, crop-specific soil suitability derived from the European soil database, and statistical information about harvested area from the Eurostat database. The reported Eurostat crop area also provides the target proportion to be matched by the algorithm on the level of administrative units (Nomenclature des Unités Territoriales Statistiques - NUTS). The algorithm is applied for the EU27 to derive regional spatial and
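The combination of the three probability sources can be sketched for a single grid cell. The probability vectors and the independence assumption below are illustrative placeholders, not the paper's calibrated inputs; the full algorithm additionally matches the Eurostat area proportions at NUTS level.

```python
import numpy as np

# Hypothetical inputs for 3 candidate crops on one grid cell:
p_transition = np.array([0.1, 0.6, 0.3])   # compatibility with last year's crop
p_soil       = np.array([0.8, 0.5, 0.9])   # soil suitability for this cell
p_area       = np.array([0.5, 0.3, 0.2])   # regional share from statistics

# Combined (unnormalized) probability under an independence assumption,
# then renormalized to a proper distribution over crops
combined = p_transition * p_soil * p_area
combined = combined / combined.sum()

most_probable_crop = int(np.argmax(combined))
```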
Experience-Based Probabilities Modulate Expectations in a Gender-Coded Artificial Language
Öttl, Anton; Behne, Dawn M.
2016-01-01
The current study combines artificial language learning with visual world eyetracking to investigate acquisition of representations associating spoken words and visual referents using morphologically complex pseudowords. Pseudowords were constructed to consistently encode referential gender by means of suffixation for a set of imaginary figures that could be either male or female. During training, the frequency of exposure to pseudowords and their imaginary figure referents were manipulated such that a given word and its referent would be more likely to occur in either the masculine form or the feminine form, or both forms would be equally likely. Results show that these experience-based probabilities affect the formation of new representations to the extent that participants were faster at recognizing a referent whose gender was consistent with the induced expectation than a referent whose gender was inconsistent with this expectation. Disambiguating gender information available from the suffix did not mask the induced expectations. Eyetracking data provide additional evidence that such expectations surface during online lexical processing. Taken together, these findings indicate that experience-based information is accessible during the earliest stages of processing, and are consistent with the view that language comprehension depends on the activation of perceptual memory traces. PMID:27602009
Kukla, G.; Gavin, J.
1994-05-01
This report was prepared at the Lamont-Doherty Geological Observatory of Columbia University at Palisades, New York, under subcontract to Pacific Northwest Laboratory; it is a part of a larger project of global climate studies which supports site characterization work required for the selection of a potential high-level nuclear waste repository, and forms part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work under the PASS Program is currently focusing on the proposed site at Yucca Mountain, Nevada, and is under the overall direction of the Yucca Mountain Project Office, US Department of Energy, Las Vegas, Nevada. The final results of the PNL project will provide input to global atmospheric models designed to test specific climate scenarios, which will be used in the site-specific modeling work of others. The primary purpose of the data bases compiled and of the astronomic predictive models is to aid in the estimation of the probabilities of future climate states. The results will be used by two other teams working on the global climate study under contract to PNL, located at the University of Maine in Orono, Maine, and the Applied Research Corporation in College Station, Texas. This report presents the results of the third year's work on the global climate change models and the data bases describing past climates.
Beheshti, I; Demirel, H
2015-09-01
High-dimensional classification methods have been a major target of machine learning for the automatic classification of patients who suffer from Alzheimer's disease (AD). One major issue of automatic classification is the feature-selection method from high-dimensional data. In this paper, a novel approach for statistical feature reduction and selection in high-dimensional magnetic resonance imaging (MRI) data based on the probability distribution function (PDF) is introduced. To develop an automatic computer-aided diagnosis (CAD) technique, this research explores the statistical patterns extracted from structural MRI (sMRI) data on four systematic levels. First, global and local differences of gray matter in patients with AD compared to healthy controls (HCs) are investigated using the voxel-based morphometric (VBM) technique with 3-Tesla 3D T1-weighted MRI. Second, feature extraction based on the voxel clusters detected by VBM on sMRI and voxel values as volume of interest (VOI) is used. Third, a novel statistical feature-selection process is employed, utilizing the PDF of the VOI to represent statistical patterns of the respective high-dimensional sMRI sample. Finally, the proposed feature-selection method for early detection of AD with support vector machine (SVM) classifiers is assessed against other standard feature-selection methods, such as partial least squares (PLS) techniques. The performance of the proposed technique is evaluated using 130 AD and 130 HC MRI data from the ADNI dataset with 10-fold cross validation. The results show that the PDF-based feature-selection approach is a reliable technique that is highly competitive with respect to the state-of-the-art techniques in classifying AD from high-dimensional sMRI samples. PMID:26226415
SAR amplitude probability density function estimation based on a generalized Gaussian model.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B
2006-06-01
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268
Visualization and probability-based scoring of structural variants within repetitive sequences
Halper-Stromberg, Eitan; Steranka, Jared; Burns, Kathleen H.; Sabunciyan, Sarven; Irizarry, Rafael A.
2014-01-01
Motivation: Repetitive sequences account for approximately half of the human genome. Accurately ascertaining sequences in these regions with next generation sequencers is challenging, and requires a different set of analytical techniques than for reads originating from unique sequences. Complicating the matter are repetitive regions subject to programmed rearrangements, as is the case with the antigen-binding domains in the Immunoglobulin (Ig) and T-cell receptor (TCR) loci. Results: We developed a probability-based score and visualization method to aid in distinguishing true structural variants from alignment artifacts. We demonstrate the usefulness of this method in its ability to separate real structural variants from false positives generated with existing upstream analysis tools. We validated our approach using both target-capture and whole-genome experiments. Capture sequencing reads were generated from primary lymphoid tumors, cancer cell lines and an EBV-transformed lymphoblast cell line over the Ig and TCR loci. Whole-genome sequencing reads were from a lymphoblastoid cell-line. Availability: We implement our method as an R package available at https://github.com/Eitan177/targetSeqView. Code to reproduce the figures and results are also available. Contact: ehalper2@jhmi.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24501098
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km³ that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition-probability and Markov chain models is used to define a conceptual model based on stochastic fractured-rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as calibration targets. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models for simulating fluid flow in fractured geological media.
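The transition-probability idea behind such a facies model can be illustrated with a toy Markov chain over four fracture facies. The transition matrix below is purely hypothetical, not the Olkiluoto calibration; it merely shows how a one-dimensional facies sequence is generated from transition probabilities:

```python
import random

def simulate_facies(transition, start, n, rng):
    """Sketch of a transition-probability Markov chain over fractured-rock
    facies (0 = sparsely fractured ... 3 = highly fractured).
    `transition[i][j]` is the probability of moving from facies i to j."""
    seq = [start]
    for _ in range(n - 1):
        probs = transition[seq[-1]]
        seq.append(rng.choices(range(len(probs)), weights=probs)[0])
    return seq

# Hypothetical transition probabilities (rows sum to 1).
transition = [
    [0.80, 0.15, 0.04, 0.01],
    [0.20, 0.60, 0.15, 0.05],
    [0.05, 0.20, 0.60, 0.15],
    [0.02, 0.08, 0.30, 0.60],
]
rng = random.Random(42)
seq = simulate_facies(transition, 0, 20, rng)
print(len(seq), all(0 <= f <= 3 for f in seq))
```

The diagonal dominance encodes spatial persistence: each facies tends to continue before transitioning, which is the qualitative behavior a transition-probability geostatistical model captures.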
Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2
MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.
1999-11-01
This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model (exponential or Weibull) is fit.
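The tail-fitting step can be sketched simply for the shifted exponential case: the maximum-likelihood scale of a shifted exponential fit to values above a threshold is just the mean excess over that threshold. The function name and data below are illustrative, not part of the FITS routine itself:

```python
import random

def fit_shifted_exponential(data, x_low):
    """MLE for a shifted exponential fit to the values above an arbitrary
    lower-bound threshold x_low. Returns the estimated scale, i.e. the
    mean excess of the exceedances over the threshold."""
    excesses = [x - x_low for x in data if x > x_low]
    if not excesses:
        raise ValueError("no data above threshold")
    return sum(excesses) / len(excesses)

random.seed(0)
sample = [random.expovariate(1.0) for _ in range(50_000)]
# For an exponential parent the mean excess equals the parent scale
# (memoryless property), so the estimate should come out close to 1.0.
print(round(fit_shifted_exponential(sample, 1.0), 2))
```

A Weibull tail fit would follow the same pattern but requires a numerical solve for the shape parameter; the exponential case shows the threshold-and-excess logic with a closed-form estimator.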
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
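The BPT density has a closed form, and the hazard can be obtained from it numerically. The sketch below (helper names are mine, not from the paper) evaluates both and illustrates the hazard settling toward roughly 2/μ when α = 0.5, as stated in the abstract:

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage time density with mean mu and aperiodicity alpha."""
    return math.sqrt(mu / (2 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2 * alpha**2 * mu * t))

def bpt_hazard(t, mu, alpha, dt=1e-3):
    """Instantaneous failure rate f(t)/S(t), with the survivor function
    S(t) = 1 - F(t) computed by midpoint numerical integration."""
    grid = [dt * (i + 0.5) for i in range(int(t / dt))]
    cdf = sum(bpt_pdf(x, mu, alpha) for x in grid) * dt
    return bpt_pdf(t, mu, alpha) / max(1e-12, 1.0 - cdf)

# With alpha = 0.5 the hazard at large t is close to 2/mu (cf. the abstract).
mu, alpha = 1.0, 0.5
print(round(bpt_hazard(3.0, mu, alpha), 2))
```

The BPT distribution is the inverse Gaussian with shape parameter μ/α², so the asymptotic hazard 1/(2μα²) reduces to 2/μ at α = 0.5.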
Micro-object motion tracking based on the probability hypothesis density particle tracker.
Shi, Chunmei; Zhao, Lingling; Wang, Junjie; Zhang, Chiping; Su, Xiaohong; Ma, Peijun
2016-04-01
Tracking micro-objects in the noisy microscopy image sequences is important for the analysis of dynamic processes in biological objects. In this paper, an automated tracking framework is proposed to extract the trajectories of micro-objects. This framework uses a probability hypothesis density particle filtering (PF-PHD) tracker to implement a recursive state estimation and trajectories association. In order to increase the efficiency of this approach, an elliptical target model is presented to describe the micro-objects using shape parameters instead of point-like targets which may cause inaccurate tracking. A novel likelihood function, not only covering the spatiotemporal distance but also dealing with geometric shape function based on the Mahalanobis norm, is proposed to improve the accuracy of particle weight in the update process of the PF-PHD tracker. Using this framework, a larger number of tracks are obtained. The experiments are performed on simulated data of microtubule movements and real mouse stem cells. We compare the PF-PHD tracker with the nearest neighbor method and the multiple hypothesis tracking method. Our PF-PHD tracker can simultaneously track hundreds of micro-objects in the microscopy image sequence. PMID:26084407
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since large numbers of lithium-ion batteries are assembled in packs and the batteries are complex electrochemical devices, their monitoring and safety are key issues for the applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an n-order RC equivalent circuit model combined with an electrochemical model is employed to obtain more accurate voltage prediction results. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
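A first-order version of the RC equivalent circuit (a simplification of the n-order model in the paper) can be sketched as coulomb counting for SOC plus one RC polarization branch for the terminal voltage. All parameters below are hypothetical, chosen only for illustration:

```python
import math

def simulate_terminal_voltage(current, dt, capacity_as, r0, r1, c1, soc0, ocv):
    """First-order RC equivalent-circuit sketch: coulomb counting for SOC
    plus a single RC branch for polarization voltage.
    `ocv` maps SOC -> open-circuit voltage; `capacity_as` is in ampere-seconds."""
    soc, v_rc, out = soc0, 0.0, []
    decay = math.exp(-dt / (r1 * c1))
    for i in current:  # current in amperes, positive = discharge
        soc -= i * dt / capacity_as
        v_rc = v_rc * decay + i * r1 * (1.0 - decay)
        out.append(ocv(soc) - i * r0 - v_rc)
    return soc, out

# Hypothetical linear OCV curve and cell parameters (not from the paper).
ocv = lambda s: 3.0 + 1.2 * s
soc, v = simulate_terminal_voltage(
    [1.0] * 3600, 1.0, 2.0 * 3600, 0.05, 0.02, 2000.0, 1.0, ocv)
print(round(soc, 2), round(v[-1], 2))  # -> 0.5 3.53
```

A 1 A discharge for one hour from a 2 Ah cell drains half the capacity; the terminal voltage drops below the OCV by the ohmic term i·R0 plus the converged polarization i·R1.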
Grossling, Bernardo F.
1975-01-01
Exploratory drilling is still in incipient or youthful stages in those areas of the world where the bulk of the potential petroleum resources is yet to be discovered. Methods of assessing resources from projections based on historical production and reserve data are limited to mature areas. For most of the world's petroleum-prospective areas, a more speculative situation calls for a critical review of resource-assessment methodology. The language of mathematical statistics is required to define more rigorously the appraisal of petroleum resources. Basically, two approaches have been used to appraise the amounts of undiscovered mineral resources in a geologic province: (1) projection models, which use statistical data on the past outcome of exploration and development in the province; and (2) estimation models of the overall resources of the province, which use certain known parameters of the province together with the outcome of exploration and development in analogous provinces. These two approaches often lead to widely different estimates. Some of the controversy that arises results from a confusion of the probabilistic significance of the quantities yielded by each of the two approaches. Also, inherent limitations of analytic projection models, such as those using the logistic and Gompertz functions, have often been ignored. The resource-assessment problem should be recast in terms that provide for consideration of the probability of existence of the resource and of the probability of discovery of a deposit. The two above-mentioned models then occupy the two ends of the probability range. The new approach accounts for (1) what can be expected with reasonably high certainty by mere projection of what has been accomplished in the past; (2) the inherent biases of decision-makers and resource estimators; (3) upper bounds that can be set up as goals for exploration; and (4) the uncertainties in geologic conditions in a search for minerals. Actual outcomes can then
NASA Astrophysics Data System (ADS)
Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra
2014-06-01
In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements contributing to the stability of the tunnel structure are identified in order to account for various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces an approach by which decision-makers can find the overall reliability of a tunnel support system before selecting the final scheme of the lining system. Engineering reliability, a branch of statistics and probability, is applied to this field, and much effort has been made to use it in tunneling while investigating the reliability of the lining support system for the tunnel structure. Reliability analysis for evaluating tunnel support performance is therefore the main idea of this research. Decomposition approaches are used to produce the system block diagram and determine the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Assuming a linear correlation between safety factors and reliability parameters, isolated reliabilities are determined for the different structural components of the tunnel support system. To determine individual safety factors, finite element modeling is employed for the different structural subsystems, and the results of the numerical analyses are obtained in
Latham, D.J.; Schlieter, J.A.
1989-09-01
Ignition of wildland fine fuels by lightning was simulated with an electric arc discharge in the laboratory. The results showed that fuel parameters such as depth, moisture content, bulk density, and mineral content can be combined with the duration of the simulated continuing current to give ignition probabilities. The fuel state parameters of importance and the ignition probabilities were determined using logistic regression. Graphs, tables, formulas, and a FORTRAN computer program are given for field use.
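The logistic-regression form used to turn fuel-state parameters into an ignition probability can be sketched as below. The coefficients are hypothetical, chosen only to show the shape of the model, and are not those of the report or its FORTRAN program:

```python
import math

def ignition_probability(intercept, coeffs, fuel_state):
    """Logistic regression mapping fuel-state parameters (e.g. moisture
    content, continuing-current duration) to an ignition probability.
    Coefficients here are illustrative, not fitted values."""
    z = intercept + sum(b * x for b, x in zip(coeffs, fuel_state))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: moisture fraction suppresses ignition,
# continuing-current duration (s) promotes it.
coeffs = [-8.0, 0.5]
wet = ignition_probability(0.0, coeffs, [0.30, 0.2])
dry = ignition_probability(0.0, coeffs, [0.05, 0.2])
print(dry > wet)  # drier fuel ignites more readily -> True
```

The sign pattern mirrors the qualitative finding: for the same simulated continuing-current duration, lower moisture content yields a higher ignition probability.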
Evidence-Based Medicine as a Tool for Undergraduate Probability and Statistics Education.
Masel, J; Humphrey, P T; Blackburn, B; Levine, J A
2015-01-01
Most students have difficulty reasoning about chance events, and misconceptions regarding probability can persist or even strengthen following traditional instruction. Many biostatistics classes sidestep this problem by prioritizing exploratory data analysis over probability. However, probability itself, in addition to statistics, is essential both to the biology curriculum and to informed decision making in daily life. One area in which probability is particularly important is medicine. Given the preponderance of pre health students, in addition to more general interest in medicine, we capitalized on students' intrinsic motivation in this area to teach both probability and statistics. We use the randomized controlled trial as the centerpiece of the course, because it exemplifies the most salient features of the scientific method, and the application of critical thinking to medicine. The other two pillars of the course are biomedical applications of Bayes' theorem and science and society content. Backward design from these three overarching aims was used to select appropriate probability and statistics content, with a focus on eliciting and countering previously documented misconceptions in their medical context. Pretest/posttest assessments using the Quantitative Reasoning Quotient and Attitudes Toward Statistics instruments are positive, bucking several negative trends previously reported in statistics education. PMID:26582236
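One of the documented misconceptions this kind of course targets is the base-rate fallacy, which a biomedical application of Bayes' theorem makes concrete. The sketch below (numbers are a standard illustrative example, not from the article) computes the probability of disease given a positive test:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' theorem for a diagnostic test: P(disease | positive test)."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# A 99%-sensitive, 99%-specific test for a disease with 0.1% prevalence
# still produces mostly false positives among positive results.
ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.99, specificity=0.99)
print(round(ppv, 3))
```

Even with an apparently excellent test, the posterior probability of disease is under 10%, which is exactly the counterintuitive result such courses ask students to derive rather than guess.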
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2015-12-01
Domains of generalized probability have been introduced in order to provide a general construction of random events, observables and states. It is based on the notion of a cogenerator and the properties of product. We continue our previous study and show how some other quantum structures fit our categorical approach. We discuss how various epireflections implicitly used in the classical probability theory are related to the transition to fuzzy probability theory and describe the latter probability theory as a genuine categorical extension of the former. We show that the IF-probability can be studied via the fuzzy probability theory. We outline a "tensor modification" of the fuzzy probability theory.
Experimental Probability in Elementary School
ERIC Educational Resources Information Center
Andrew, Lane
2009-01-01
Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.
Competency-based curricular design to encourage significant learning.
Hurtubise, Larry; Roman, Brenda
2014-07-01
Most significant learning (SL) experiences produce long-lasting learning experiences that meaningfully change the learner's thinking, feeling, and/or behavior. Most significant teaching experiences involve strong connections with the learner and recognition that the learner felt changed by the teaching effort. L. Dee Fink in Creating Significant Learning Experiences: An Integrated Approach to Designing College Course defines six kinds of learning goals: Foundational Knowledge, Application, Integration, Human Dimension, Caring, and Learning to Learn. SL occurs when learning experiences promote interaction between the different kinds of goals, for example, acquiring knowledge alone is not enough, but when paired with a learning experience, such as an effective patient experience as in Caring, then significant (and lasting) learning occurs. To promote SL, backward design principles that start with clearly defined learning goals and the context of the situation of the learner are particularly effective. Emphasis on defining assessment methods prior to developing teaching/learning activities is the key: this ensures that assessment (where the learner should be at the end of the educational activity/process) drives instruction and that assessment and learning/instruction are tightly linked so that assessment measures a defined outcome (competency) of the learner. Employing backward design and the AAMC's MedBiquitous standard vocabulary for medical education can help to ensure that curricular design and redesign efforts effectively enhance educational program quality and efficacy, leading to improved patient care. Such methods can promote successful careers in health care for learners through development of self-directed learning skills and active learning, in ways that help learners become fully committed to lifelong learning and continuous professional development. PMID:24981665
The Significance of Trust in School-Based Collaborative Leadership
ERIC Educational Resources Information Center
Coleman, Andrew
2012-01-01
The expectation that schools should work in partnership to promote the achievement of children has arguably been the defining feature of school policy over the last decade. This rise in school-to-school partnerships and increased emphasis on multi-agency-based interventions for vulnerable children have seen the emergence of a new form of school…
Value of genetic testing for hereditary colorectal cancer in a probability-based US online sample
Knight, Sara J.; Mohamed, Ateesha F.; Marshall, Deborah A.; Ladabaum, Uri; Phillips, Kathryn A.; Walsh, Judith M. E.
2015-01-01
Background While choices about genetic testing are increasingly common for patients and families, and public opinion surveys suggest public interest in genomics, it is not known how adults from the general population value genetic testing for heritable conditions. We sought to understand in a US sample the relative value of the characteristics of genetic tests to identify risk of hereditary colorectal cancer, among the first genomic applications with evidence to support its translation to clinical settings. Methods A Web-enabled choice-format conjoint survey was conducted with adults age 50 and older from a probability-based US panel. Participants were asked to make a series of choices between two hypothetical blood tests that differed in risk of false negative test, privacy, and cost. Random parameters logit models were used to estimate preferences, the dollar value of genetic information, and intent to have genetic testing. Results A total of 355 individuals completed choice-format questions. Cost and privacy were more highly valued than reducing the chance of a false negative result. Most (97%, 95% Confidence Interval (CI): 95% to 99%) would have genetic testing to reduce the risk of dying from colorectal cancer in the best scenario (no false negatives, results disclosed to primary care physician). Only 41% (95% CI: 25% to 57%) would have genetic testing in the worst case (20% false negatives, results disclosed to insurance company). Conclusions Given the characteristics and levels included in the choice, if false negative test results are unlikely and results are shared with a primary care physician, the majority would have genetic testing. As genomic services become widely available, primary care professionals will need to be increasingly knowledgeable about genetic testing decisions. PMID:25589525
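The core of the random-parameters logit used in such choice-format conjoint analyses is the multinomial logit choice probability. The sketch below omits the random coefficients and uses hypothetical utilities for two genetic-test profiles, purely for illustration:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: probability of choosing each alternative is
    exp(utility) normalized over all alternatives."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities: profile A has lower cost and stronger privacy
# protections, so its utility (and choice probability) is higher.
p_a, p_b = choice_probabilities([1.2, 0.3])
print(round(p_a, 2))
```

In the full model, each utility would be a weighted sum of the attribute levels (false-negative risk, privacy, cost), with weights estimated from the observed choices; the willingness-to-pay figures come from ratios of those weights.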
Model assisted probability of detection for a guided waves based SHM technique
NASA Astrophysics Data System (ADS)
Memmolo, V.; Ricci, F.; Maio, L.; Boffa, N. D.; Monaco, E.
2016-04-01
Guided wave (GW) Structural Health Monitoring (SHM) makes it possible to assess the health of aerostructures thanks to its great sensitivity to the appearance of delaminations and/or debondings. Due to the several complexities affecting wave propagation in composites, an efficient GW SHM system requires effective quantification associated with a rigorous statistical evaluation procedure. The Probability of Detection (POD) approach is a commonly accepted measurement method to quantify NDI results, and it can be effectively extended to an SHM context. However, it requires a very complex setup arrangement and many coupons. When a rigorous correlation with measurements is adopted, Model Assisted POD (MAPOD) is an efficient alternative to classic methods. This paper is concerned with the identification of small emerging delaminations in composite structural components. An ultrasonic GW tomography focused on impact damage detection in composite plate-like structures, recently developed by the authors, is investigated, laying the basis for a more complex MAPOD analysis. Experimental tests carried out on a typical wing composite structure demonstrated the effectiveness of the modeling approach for detecting damage with the tomographic algorithm. Environmental disturbances, which affect signal waveforms and consequently damage detection, are considered by simulating mathematical noise in the modeling stage. A statistical method is used for an effective decision-making procedure. A Damage Index approach is implemented as the metric to interpret the signals collected from a distributed sensor network, and a subsequent graphic interpolation is carried out to reconstruct the damage appearance. A model validation and first reliability assessment results are provided, in view of quantifying and optimizing system performance.
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-01-01
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization
A Scrabble Heuristic Based on Probability That Performs at Championship Level
NASA Astrophysics Data System (ADS)
Ramírez, Arturo; Acuña, Francisco González; Romero, Alejandro González; Alquézar, René; Hernández, Enric; Aguilar, Amador Roldán; Olmedo, Ian García
The game of Scrabble, in its competitive form (one vs. one), has been tackled mostly by using Monte Carlo simulation. Recently [1], Probability Theory (Bayes' theorem) was used to gain knowledge about the opponents' tiles; this proved to be a good approach to further improve computer Scrabble. We used probability to evaluate Scrabble leaves (rack residues); using this evaluation, a heuristic function that dictates a move can be constructed. To calculate these probabilities it is necessary to have a lexicon, in our case a Spanish lexicon. To make proper investigations in the domain of Scrabble it is important to have the same lexicon as the one used by humans in official tournaments, and we did a large amount of work to build this free lexicon. In this paper a heuristic function that involves leave probabilities is given. We now have an engine, Heuri, that uses this heuristic, and we have been able to perform some experiments to test it. The tests include matches against highly expert players, and the games played so far give us promising results. For instance, a match was recently played between the current World Scrabble Champion (in Spanish) and Heuri: Heuri defeated the World Champion 6-0! Heuri includes a move generator which, using a lot of memory, is faster than using DAWG [2] or GADDAG [3]. A plan to build a stronger Heuri that combines heuristics using probabilities, opponent modeling, and Monte Carlo simulation is also proposed.
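Leave probabilities of this kind reduce to hypergeometric draws from the pool of unseen tiles. The sketch below shows the single-letter case (probability of drawing at least k copies of one tile); the pool counts are the standard 100-tile distribution collapsed to one letter of interest, an illustrative simplification of the rack-residue evaluation described above:

```python
from math import comb

def draw_probability(pool_counts, letter, k, draws):
    """Probability that a draw of `draws` tiles from the unseen pool
    contains at least k copies of `letter` (hypergeometric, by
    complementing over j = 0..k-1 successes)."""
    good = pool_counts[letter]
    total = sum(pool_counts.values())
    p_less = sum(
        comb(good, j) * comb(total - good, draws - j) / comb(total, draws)
        for j in range(k)
    )
    return 1.0 - p_less

# Chance of drawing at least one of the 12 E tiles in a 7-tile draw
# from a full 100-tile bag (all other tiles lumped together).
p = draw_probability({"E": 12, "other": 88}, "E", 1, 7)
print(round(p, 2))
```

A full leave evaluation would extend this to multisets of tiles and weight each completed rack by the value of the plays it enables, but the hypergeometric draw is the probabilistic core.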
Significance of hair-dye base-induced sensory irritation.
Fujita, F; Azuma, T; Tajiri, M; Okamoto, H; Sano, M; Tominaga, M
2010-06-01
Oxidation hair-dyes, which are the principal hair-dyes, sometimes induce painful sensory irritation of the scalp caused by the combination of highly reactive substances, such as hydrogen peroxide and alkali agents. Although many cases of severe facial and scalp dermatitis have been reported following the use of hair-dyes, sensory irritation caused by contact of the hair-dye with the skin has not been reported clearly. In this study, we used a self-assessment questionnaire to measure the sensory irritation in various regions of the body caused by two model hair-dye bases that contained different amounts of alkali agents without dyes. Moreover, the occipital region was found as an alternative region of the scalp to test for sensory irritation of the hair-dye bases. We used this region to evaluate the relationship of sensitivity with skin properties, such as trans-epidermal water loss (TEWL), stratum corneum water content, sebum amount, surface temperature, current perception threshold (CPT), catalase activities in tape-stripped skin and sensory irritation score with the model hair-dye bases. The hair-dye sensitive group showed higher TEWL, a lower sebum amount, a lower surface temperature and higher catalase activity than the insensitive group, and was similar to that of damaged skin. These results suggest that sensory irritation caused by hair-dye could occur easily on the damaged dry scalp, as that caused by skin cosmetics reported previously. PMID:20557579
Nichols, Alice I.; Preskorn, Sheldon H.
2015-01-01
Objective: The avoidance of adverse drug-drug interactions (DDIs) is a high priority for both the US Food and Drug Administration (FDA) and the individual prescriber. With this perspective in mind, this article illustrates the process for assessing the risk of a drug (here, desvenlafaxine) causing or being the victim of DDIs, in accordance with FDA guidance. Data Sources/Study Selection: DDI studies for the serotonin-norepinephrine reuptake inhibitor desvenlafaxine conducted by the sponsor and published since 2009 are used as examples of the systematic way in which the FDA requires drug developers to assess whether their new drug can either cause clinically meaningful DDIs or be the victim of such DDIs. In total, 8 open-label studies tested the effects of steady-state treatment with desvenlafaxine (50–400 mg/d) on the pharmacokinetics of cytochrome (CYP) 2D6 and/or CYP 3A4 substrate drugs, or the effect of CYP 3A4 inhibition on desvenlafaxine pharmacokinetics. The potential for DDIs mediated by the P-glycoprotein (P-gp) transporter was assessed in in vitro studies using Caco-2 monolayers. Data Extraction: Changes in area under the plasma concentration-time curve (AUC; CYP studies) and efflux (P-gp studies) were reviewed for potential DDIs in accordance with FDA criteria. Results: Desvenlafaxine coadministration had minimal effect on CYP 2D6 and/or 3A4 substrates per FDA criteria. Changes in AUC indicated either no interaction (90% confidence intervals for the ratio of AUC geometric least-squares means [GM] within 80%–125%) or weak inhibition (AUC GM ratio 125% to < 200%). Coadministration with ketoconazole resulted in a weak interaction with desvenlafaxine (AUC GM ratio of 143%). Desvenlafaxine was not a substrate (efflux ratio < 2) or inhibitor (50% inhibitory drug concentration values > 250 μM) of P-gp. Conclusions: A 2-step process based on FDA guidance can be used first to determine whether a pharmacokinetically mediated
Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation
Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.
2000-01-01
We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the record of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years, and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
RDX-based nanocomposite microparticles for significantly reduced shock sensitivity.
Qiu, Hongwei; Stepanov, Victor; Di Stasio, Anthony R; Chou, Tsengming; Lee, Woo Y
2011-01-15
Cyclotrimethylenetrinitramine (RDX)-based nanocomposite microparticles were produced by a simple, yet novel spray drying method. The microparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and high performance liquid chromatography (HPLC), which showed that they consist of small RDX crystals (∼0.1-1 μm) uniformly and discretely dispersed in a binder. The microparticles were subsequently pressed into dense energetic materials that exhibited markedly lower shock sensitivity. The low sensitivity was attributed to the small crystal size as well as the small void size (∼250 nm). The method developed in this work may be suitable for the preparation of a wide range of insensitive explosive compositions. PMID:20940087
Detection of significant pathways in osteoporosis based on graph clustering.
Xiao, Haijun; Shan, Liancheng; Zhu, Haiming; Xue, Feng
2012-12-01
Osteoporosis is the most common and serious skeletal disorder among the elderly, characterized by a low bone mineral density (BMD). Low bone mass in the elderly is highly dependent on their peak bone mass (PBM) as young adults. Circulating monocytes serve as early progenitors of osteoclasts and produce molecules important for bone metabolism. An improved understanding of the biology and genetics of osteoclast differentiation at the pathway level is likely to be beneficial for the development of novel targeted approaches for osteoporosis. The objective of this study was to explore gene expression profiles comprehensively by grouping individual differentially expressed genes (DEGs) into gene sets and pathways using the graph clustering approach and Gene Ontology (GO) term enrichment analysis. The results indicated that the DEGs between high and low PBM samples were grouped into nine gene sets. The genes in clusters 1 and 8 (including GBP1, STAT1, CXCL10 and EIF2AK2) may be associated with osteoclast differentiation through the immune system response. The genes in clusters 2, 7 and 9 (including SOCS3, SOD2, ATF3, ADM, EGR2 and BCL2A1) may be associated with osteoclast differentiation through responses to various stimuli. This study provides a number of candidate genes that warrant further investigation, including DDX60, HERC5, RSAD2, SIGLEC1, CMPK2, MX1, SERPING1, EPSTI1, C9orf72, PHLDA2, PFKFB3, PLEKHG2, ANKRD28, IL1RN and RNF19B. PMID:22992777
Implicit Segmentation of a Stream of Syllables Based on Transitional Probabilities: An MEG Study
ERIC Educational Resources Information Center
Teinonen, Tuomas; Huotilainen, Minna
2012-01-01
Statistical segmentation of continuous speech, i.e., the ability to utilise transitional probabilities between syllables in order to detect word boundaries, is reflected in the brain's auditory event-related potentials (ERPs). The N1 and N400 ERP components are typically enhanced for word onsets compared to random syllables during active…
Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.
2013-01-01
A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97
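The core quantile computation behind such annual exceedance-probability statistics can be sketched as follows, fitting a Pearson Type III distribution to log-transformed annual peaks and evaluating quantiles with the Wilson-Hilferty frequency-factor approximation (as in Bulletin 17B). The sample data and directly supplied skew are illustrative stand-ins for the study's expected-moments fit and Bayesian regional skews:

```python
import math
from statistics import NormalDist, mean, stdev

def lp3_quantile(peaks, aep, skew):
    """Flood quantile for annual exceedance probability `aep` from a
    log-Pearson Type III fit to an annual peak-discharge series.
    Uses the Wilson-Hilferty frequency-factor approximation; `skew` is
    the log-space skew coefficient, supplied directly here."""
    logs = [math.log10(q) for q in peaks]
    m, s = mean(logs), stdev(logs)
    z = NormalDist().inv_cdf(1.0 - aep)  # standard normal deviate for this AEP
    if abs(skew) < 1e-6:
        k = z  # zero skew: Pearson III reduces to the normal case
    else:
        k = (2.0 / skew) * ((1.0 + skew * z / 6.0 - skew**2 / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)
```

With zero skew the 50-percent AEP quantile is simply the geometric mean of the peaks, and quantiles grow monotonically as the exceedance probability shrinks from 50 to 1 percent, matching the 2- through 100-year recurrence intervals described above.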
NASA Astrophysics Data System (ADS)
Aronica, Giuseppe Tito; Candela, Angela; Fabio, Pamela; Santoro, Mario
In this paper a new procedure to derive flood hazard maps incorporating uncertainty concepts is presented. The layout of the procedure can be summarized as follows: (1) stochastic input of flood hydrographs modelled through direct Monte Carlo simulation based on recorded flood data. Generation of flood peaks and flow volumes is obtained via copulas, which describe and model the correlation between these two variables independently of the marginal laws involved; the hydrograph shape is generated on the basis of historically significant flood events via cluster analysis; (2) modelling of flood propagation using a hyperbolic finite element model based on the De Saint Venant (DSV) equations; (3) definition of global hazard indexes based on hydrodynamic variables (i.e., water depth and flow velocities). The GLUE methodology has been applied in order to account for parameter uncertainty. The procedure has been tested on a flood-prone area located in the southern part of Sicily, Italy. Three hazard maps have been obtained and then compared.
Social Science and the Bayesian Probability Explanation Model
NASA Astrophysics Data System (ADS)
Yin, Jie; Zhao, Lei
2014-03-01
C. G. Hempel, one of the logical empiricists, built his probability explanation model on the empiricist view of probability. That model encountered many difficulties in scientific explanation, against which Hempel found it hard to mount a reasonable defense. Grounded in Bayesian probability theory and the subjectivist view of probability, the Bayesian probability model instead offers an account of explanation based on subjective probability. On the one hand, this model establishes the epistemological status of the subject in social science; on the other hand, it provides a feasible explanation model for social-scientific explanation, which has important methodological significance.
ERIC Educational Resources Information Center
Ghitza, Udi E.; Epstein, David H.; Schmittner, John; Vahabzadeh, Massoud; Lin, Jia-Ling; Preston, Kenzie L.
2008-01-01
Although treatment outcome in prize-based contingency management has been shown to depend on reinforcement schedule, the optimal schedule is still unknown. Therefore, we conducted a retrospective analysis of data from a randomized clinical trial (Ghitza et al., 2007) to determine the effects of the probability of winning a prize (low vs. high) and…
We conducted a probability-based sampling of Lake Superior in 2006 and compared the zooplankton biomass estimate with laser optical plankton counter (LOPC) predictions. The net survey consisted of 52 sites stratified across three depth zones (0-30, 30-150, >150 m). The LOPC tow...
The Minnesota Children's Pesticide Exposure Study is a probability-based sample of 102 children 3-13 years old who were monitored for commonly used pesticides. During the summer of 1997, first-morning-void urine samples (1-3 per child) were obtained for 88% of study children a...
ERIC Educational Resources Information Center
Kaplan, Danielle E.; Wu, Erin Chia-ling
2006-01-01
Our research suggests static and animated graphics can lead to more animated thinking and more correct problem solving in computer-based probability learning. Pilot software modules were developed for graduate online statistics courses and representation research. A study with novice graduate student statisticians compared problem solving in five…
ERIC Educational Resources Information Center
Huynh, Huynh
2006-01-01
By analyzing the Fisher information allotted to the correct response of a Rasch binary item, Huynh (1994) established the response probability criterion 0.67 (RP67) for standard settings based on bookmarks and item mapping. The purpose of this note is to help clarify the conceptual and psychometric framework of the RP criterion.
NASA Astrophysics Data System (ADS)
Gao, Dongyue; Wu, Zhanjun; Yang, Lei; Zheng, Yuebin
2016-04-01
Multi-damage identification is an important and challenging task in guided-wave-based structural health monitoring. In this paper, a multi-damage identification method is presented using a guided-wave local probability-based diagnostic imaging (PDI) method. The method comprises a path damage judgment stage, a multi-damage judgment stage and a multi-damage imaging stage. First, damage imaging is performed by partition: the damage imaging regions are divided according to the beside-damage signal paths. The difference in guided-wave propagation characteristics between cross-damage and beside-damage paths is derived by theoretical analysis of the guided-wave signal features, and the time-of-flight difference of each path is used as a factor to distinguish cross-damage from beside-damage paths. Then, a global PDI method (damage identification using all paths in the sensor network) is performed using the beside-damage path network. If the global PDI damage zone crosses a beside-damage path, a discrete multi-damage configuration (such as a group of holes or cracks) has been misjudged as a continuous single damage (such as a single hole or crack) by the global PDI method. In that case, the damage imaging regions are separated by the beside-damage paths, and local PDI (damage identification using only the paths within each region) is performed in each damage imaging region. Finally, multi-damage identification results are obtained by superimposing the local damage imaging results and the marked cross-damage paths. The method is employed to inspect multi-damage in an aluminum plate with a surface-mounted piezoelectric ceramic sensor network. The results show that the guided-wave multi-damage identification method is capable of visualizing the presence, quantity and location of structural damage.
Yu, Hancheng; Gao, Jianlin; Li, Aiting
2016-03-01
In this Letter, a probability-based non-local means filter is proposed for speckle reduction in optical coherence tomography (OCT). Originally developed for additive white Gaussian noise, the non-local means filter is not suitable for multiplicative speckle noise suppression. This Letter presents a two-stage non-local means algorithm using the uncorrupted probability of each pixel to effectively reduce speckle noise in OCT. Experiments on real OCT images demonstrate that the proposed filter is competitive with other state-of-the-art speckle removal techniques and able to accurately preserve edges and structural details with small computational cost. PMID:26974099
NASA Astrophysics Data System (ADS)
Rosa, A. N. F.; Wiatr, P.; Cavdar, C.; Carvalho, S. V.; Costa, J. C. W. A.; Wosinska, L.
2015-11-01
In Elastic Optical Networks (EONs), spectrum fragmentation refers to the existence of non-aligned, small blocks of free subcarrier slots in the optical spectrum. Several metrics have been proposed to quantify the level of spectrum fragmentation. Approximation methods can be used to estimate average blocking probability and some fragmentation measures, but they are so far unable to accurately capture the influence of different connection-request sizes and do not allow in-depth investigation of blocking events and their relation to fragmentation. The analytical study of the effect of fragmentation on request blocking probability is still under-explored. In this work, we introduce new definitions of blocking that differentiate between the causes of blocking events. We develop a framework based on Markov modeling to calculate steady-state probabilities for the different blocking events and to analyze fragmentation-related problems in elastic optical links under dynamic traffic conditions. This framework can also be used to evaluate different definitions of fragmentation in terms of their relation to blocking probability. We investigate how different request sizes contribute to fragmentation and blocking probability. Moreover, we show to what extent blocking events due to an insufficient amount of available resources become inevitable and, by comparing them with blocking events due to fragmented spectrum, we draw conclusions on the gains achievable through defragmentation. We also show how effective spectrum allocation policies are in reducing the part of fragmentation that actually leads to blocking events. Simulation experiments show a good match with our analytical results for blocking probability in a small-scale scenario. Simulated blocking probabilities for the different blocking events are provided for a larger-scale elastic optical link.
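As a minimal illustration of the Markov-modeling machinery involved, consider the degenerate case in which every request needs a single slot: the link then reduces to the classical Erlang loss system, whose steady-state blocking probability has a closed form. This is a simplification for intuition only, not the authors' multi-size fragmentation model:

```python
def erlang_b(offered_load, servers):
    """Steady-state blocking probability of an M/M/c/c loss system
    (offered load a in Erlangs, c slots), via the numerically stable
    recurrence B(0) = 1; B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b
```

With one slot and one Erlang of load, half of all requests are blocked; adding slots drives blocking down, which is the baseline against which fragmentation-induced blocking in the multi-size case is measured.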
NASA Astrophysics Data System (ADS)
Lazri, Mourad; Ameur, Soltane
2016-09-01
In this paper, an algorithm based on the classification of rainfall-intensity probabilities has been developed for rainfall estimation from Meteosat Second Generation/Spinning Enhanced Visible and Infrared Imager (MSG-SEVIRI) data. The classification scheme uses various SEVIRI spectral parameters that provide information about cloud-top temperature and optical and microphysical cloud properties. The method is developed and trained for the north of Algeria. Calibration is carried out using rain classification fields derived from radar as a reference, for the rainy season from November 2006 to March 2007. Rainfall rates are assigned to rain areas previously identified and classified according to the precipitation formation processes. Comparisons between satellite-derived precipitation estimates and validation data show that the developed scheme performs reasonably well: the correlation coefficient reaches a significant level (r = 0.87), and the POD, POFD and FAR are 80%, 13% and 25%, respectively. For a rainfall total of about 614 mm, the RMSD, bias, MAD and PD are 102.06 mm, 2.18 mm, 68.07 mm and 12.58, respectively.
NASA Astrophysics Data System (ADS)
Yoshikawa, Tomohiro; Nagashima, Hidetaka; Hasegawa, Hiroshi; Sato, Ken-ichi
2007-11-01
We propose a routing and wavelength assignment algorithm for Optical Burst Switching (OBS) networks that utilizes centralized control. First, a method for estimating the expected total blocking time in the network is presented. The proposed algorithm then minimizes the estimated blocking time by simple iterative local optimization over the traffic demand between each pair of nodes. We demonstrate that the proposed algorithm attains much lower blocking probability than conventional distributed control algorithms. It is also shown that, with the introduction of optical buffers and burst retransmission, the proposed method achieves burst loss rates (<10^-6) low enough for most applications.
ERIC Educational Resources Information Center
Weatherly, Myra S.
1984-01-01
Instruction in mathematical probability to enhance higher levels of critical and creative thinking with gifted students is described. Among thinking skills developed by such an approach are analysis, synthesis, evaluation, fluency, and complexity. (CL)
A Probability-Based Alerting Logic for Aircraft on Parallel Approach
NASA Technical Reports Server (NTRS)
Carpenter, Brenda D.; Kuchar, James K.
1997-01-01
This document discusses the development and evaluation of an airborne collision alerting logic for aircraft on closely-spaced approaches to parallel runways. A novel methodology is used that links alerts to collision probabilities: alerting thresholds are set such that an alert is issued when the probability of a collision exceeds an acceptable hazard level. The logic was designed to limit the hazard level to that estimated for the Precision Runway Monitoring system: one accident in every one thousand blunders that trigger alerts. When the aircraft were constrained to be co-altitude, evaluations of a two-dimensional version of the alerting logic showed that the achieved hazard level is approximately one accident in every 250 blunders. Problematic scenarios have been identified and corrections to the logic can be made. The evaluations also showed that over eighty percent of all unnecessary alerts were issued in scenarios in which the miss distance would have been less than 1000 ft, indicating that those alerts may have been justified. No unnecessary alerts were generated during normal approaches.
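The alerting rule itself can be sketched in a few lines: estimate the collision probability under an assumed uncertainty model, then alert when it exceeds the acceptable hazard level. The Gaussian separation model and all parameter names below are illustrative assumptions, not the paper's actual logic:

```python
import random

def collision_probability(sep_mean, sep_sigma, min_sep, n=100_000, seed=1):
    """Monte Carlo estimate of P(|lateral separation| < min_sep) under a
    simple Gaussian model of projected separation (illustrative only)."""
    rng = random.Random(seed)
    hits = sum(abs(rng.gauss(sep_mean, sep_sigma)) < min_sep for _ in range(n))
    return hits / n

def should_alert(p_collision, hazard_level=1e-3):
    """Issue an alert when the estimated collision probability exceeds
    the acceptable hazard level (threshold value is an assumption)."""
    return p_collision > hazard_level
```

The threshold formulation makes the trade-off explicit: lowering `hazard_level` issues alerts earlier at the cost of more unnecessary alerts, which is exactly the balance the evaluations above quantify.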
NASA Astrophysics Data System (ADS)
Zhao, Tongtiegang; Wang, Q. J.; Bennett, James C.; Robertson, David E.; Shao, Quanxi; Zhao, Jianshi
2015-09-01
Uncertainty is inherent in streamflow forecasts and is an important determinant of the utility of forecasts for water resources management. However, predictions by deterministic models provide only single values with no uncertainty attached. This study presents a method for using a Bayesian joint probability (BJP) model to post-process deterministic streamflow forecasts by quantifying predictive uncertainty. The BJP model comprises a log-sinh transformation that normalises hydrological data and a bivariate Gaussian distribution that characterises the dependence relationship. The parameters of the transformation and the distribution are estimated through Bayesian inference with a Markov chain Monte Carlo (MCMC) algorithm. The BJP model produces, from a raw deterministic forecast, an ensemble of values that represents forecast uncertainty. The model is applied to raw deterministic forecasts of inflows to the Three Gorges Reservoir in China as a case study. The heteroscedasticity and non-Gaussianity of forecast uncertainty are effectively addressed. The ensemble spread accounts for the forecast uncertainty and leads to considerable improvement in the continuous ranked probability score. The forecasts become less accurate as lead time increases, and the ensemble spread provides reliable information on the forecast uncertainty. We conclude that the BJP model is a useful tool for quantifying predictive uncertainty when post-processing deterministic streamflow forecasts.
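The log-sinh transformation at the heart of the BJP model is simple to state; the sketch below implements the forward and inverse transforms. The parameter values in the usage note are arbitrary illustrations; in the BJP model the parameters a and b are estimated by MCMC:

```python
import math

def log_sinh(y, a, b):
    """Log-sinh transformation z = (1/b)·ln(sinh(a + b·y)), used to
    normalise positively skewed hydrological data (a, b > 0, a + b·y > 0)."""
    return math.log(math.sinh(a + b * y)) / b

def inv_log_sinh(z, a, b):
    """Inverse transformation back to the original scale:
    y = (asinh(exp(b·z)) - a) / b."""
    return (math.asinh(math.exp(b * z)) - a) / b
```

The transform behaves like a log for small values and approaches linearity for large ones, which is what lets a single bivariate Gaussian in transformed space capture heteroscedastic forecast errors; the inverse maps ensemble members back to flow units.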
Mesh-Based Entry Vehicle and Explosive Debris Re-Contact Probability Modeling
NASA Technical Reports Server (NTRS)
McPherson, Mark A.; Mendeck, Gavin F.
2011-01-01
The risk to a crewed vehicle from potential re-contact with fragments from an explosive breakup of a jettisoned spacecraft segment during entry has long resisted quantification, largely because of the difficulty of efficiently capturing the potential locations of each fragment and their collective threat to the vehicle. The method presented in this paper addresses this problem with a stochastic approach that discretizes simulated debris pieces into volumetric cells and then assesses strike probabilities accordingly. Combining spatial debris density and the relative velocity between the debris and the entry vehicle, the strike probability is calculated from the integral of the debris flux inside each cell over time. Using this technique it is possible to assess the risk to an entry vehicle along an entire trajectory as it separates from the jettisoned segment. By decoupling the fragment trajectories from that of the entry vehicle, multiple potential separation maneuvers can be evaluated rapidly to identify the best strategy for mitigating the re-contact risk.
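The cell-wise strike-probability idea can be sketched as follows, assuming Poisson-distributed strikes so that P = 1 − exp(−expected strikes). The flat cell list, single exposed area and fixed time step are simplifications of the volumetric discretization described above:

```python
import math

def strike_probability(cells, area, dt):
    """Strike probability from debris flux integrated over volumetric cells.
    Each cell is (debris_density, relative_speed); expected strikes are
    sum(density * rel_speed) * exposed_area * dt, and strikes are assumed
    Poisson, so P = 1 - exp(-expected). A sketch, not the paper's model."""
    expected = sum(density * rel_speed for density, rel_speed in cells) * area * dt
    return 1.0 - math.exp(-expected)
```

With no occupied cells the probability is zero, and for the small fluxes typical of debris fields the result is well approximated by the expected-strike count itself, which is why integrating flux over the trajectory suffices for ranking separation maneuvers.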
Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions
Li, Jun; Yim, Man-Sung; McNelis, David N.
2007-07-01
explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. A country's nuclear proliferation decisions are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important because nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important because developing these technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not determined solely by mastery of technical details or by overcoming financial constraints: technology and finance are necessary but not sufficient conditions. At the most fundamental level, a state's proliferation decision is controlled by its political motivation. To predict proliferation events effectively, all three factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistically modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open source literature. (authors)
Qian, Weijun; Liu, Tao; Monroe, Matthew E.; Strittmatter, Eric F.; Jacobs, Jon M.; Kangas, Lars J.; Petritis, Konstantinos; Camp, David G.; Smith, Richard D.
2005-01-01
Large scale protein identifications from highly complex protein mixtures have recently been achieved using multidimensional liquid chromatography coupled with tandem mass spectrometry (LC/LC-MS/MS) and subsequent database searching with algorithms such as SEQUEST. Here, we describe a probability-based evaluation of false positive rates associated with peptide identifications from three different human proteome samples. Peptides from human plasma, human mammary epithelial cell (HMEC) lysate, and human hepatocyte (Huh)-7.5 cell lysate were separated by strong cation exchange (SCX) chromatography coupled offline with reversed-phase capillary LC-MS/MS analyses. The MS/MS spectra were first analyzed by SEQUEST, searching independently against both normal and sequence-reversed human protein databases, and the false positive rates of peptide identifications for the three proteome samples were then analyzed and compared. The observed false positive rates of peptide identifications for human plasma were significantly higher than those for the human cell lines when identical filtering criteria were used, which suggests that the false positive rates are highly dependent on sample characteristics, particularly the number of proteins found within the detectable dynamic range. Two new sets of filtering criteria are proposed for human plasma and human cell lines, respectively, to provide an overall confidence of >95% for peptide identifications. The new criteria were compared, using a normalized elution time (NET) criterion (Petritis et al. Anal. Chem. 2003, 75, 1039-48), with previously published criteria (Washburn et al. Nat. Biotechnol. 2001, 19, 242-7). The results demonstrate that the present criteria provide significantly higher levels of confidence for peptide identifications.
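The reversed-database idea admits a very small sketch: hits against the sequence-reversed database estimate how many random matches are expected among the forward hits passing the same filter. The ratio below is one common decoy-based estimate for separate forward/reversed searches; the paper's exact filtering criteria (charge-state- and score-dependent, plus the NET criterion) are more elaborate:

```python
def decoy_fdr(forward_hits, reversed_hits):
    """Decoy-based false positive rate estimate for separate searches
    against normal and sequence-reversed databases: reversed hits
    approximate the random matches expected among the forward hits,
    so FPR ~ R / F. One common estimate, not the paper's exact formula."""
    return reversed_hits / forward_hits
```

For example, 20 reversed-database hits against 1000 forward hits under identical filters suggests roughly 2% of the forward identifications are false, and criteria are then tightened until this estimate drops below the target (here, 5%).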
Miao, Zhichao; Westhof, Eric
2015-01-01
We describe a general binding score for predicting the nucleic acid binding probability in proteins. The score is directly derived from physicochemical and evolutionary features and integrates a residue neighboring network approach. Our process achieves stable and high accuracies on both DNA- and RNA-binding proteins and illustrates how the main driving forces for nucleic acid binding are common. Because of the effective integration of the synergetic effects of the network of neighboring residues and the fact that the prediction yields a hierarchical scoring on the protein surface, energy funnels for nucleic acid binding appear on protein surfaces, pointing to the dynamic process occurring in the binding of nucleic acids to proteins. PMID:25940624
Zhang, Feihu; Buckl, Christian; Knoll, Alois
2014-01-01
This paper studies the problem of multiple vehicle cooperative localization with spatial registration in the formulation of the probability hypothesis density (PHD) filter. Assuming vehicles are equipped with proprioceptive and exteroceptive sensors (with biases) to cooperatively localize positions, a simultaneous solution for joint spatial registration and state estimation is proposed. For this, we rely on the sequential Monte Carlo implementation of the PHD filtering. Compared to other methods, the concept of multiple vehicle cooperative localization with spatial registration is first proposed under Random Finite Set Theory. In addition, the proposed solution also addresses the challenges for multiple vehicle cooperative localization, e.g., the communication bandwidth issue and data association uncertainty. The simulation result demonstrates its reliability and feasibility in large-scale environments. PMID:24406860
Analysis of extreme top event frequency percentiles based on fast probability integration
Staple, B.; Haskin, F.E.
1993-10-01
In risk assessments, a primary objective is to determine the frequency with which a collection of initiating and basic events, E_e, leads to some undesired top event, T. Uncertainties in the occurrence rates, x_t, assigned to the initiating and basic events cause uncertainty in the top event frequency, z_T. The quantification of the uncertainty in z_T is an essential part of risk assessment called uncertainty analysis. In the past, it has been difficult to evaluate the extreme percentiles of output variables like z_T. Analytic methods such as the method of moments do not provide estimates of output percentiles, and the Monte Carlo (MC) method can estimate extreme output percentiles only by resorting to large sample sizes. A promising alternative to these methods is the family of fast probability integration (FPI) methods, which approximate the integrals of multivariate functions representing percentiles of interest without recourse to multi-dimensional numerical integration. FPI methods give precise results and have been demonstrated to be more efficient than MC methods for estimating extreme output percentiles. FPI allows the analyst to choose extreme percentiles of interest and perform sensitivity analyses in those regions; such analyses can provide valuable insights into the events driving the top event frequency response in extreme probability regions. In this paper, FPI methods are adapted (a) to precisely estimate extreme top event frequency percentiles and (b) to allow the quantification of sensitivity measures at these extreme percentiles. In addition, the relative precision and efficiency of alternative methods for treating lognormally distributed inputs is investigated. The methodology is applied to the top event frequency expression for the dominant accident sequence from a risk assessment of the Grand Gulf nuclear power plant.
A Regression-based Approach to Assessing Stream Nitrogen Impairment Probabilities
NASA Astrophysics Data System (ADS)
McMahon, G.; Qian, S.; Roessler, C.
2002-05-01
A recently completed National Research Council study of the Total Maximum Daily Load (TMDL) program of the U.S. Environmental Protection Agency recommends an increased use of models to assess the conditions of waters for which nutrient load limits may need to be developed. Models can synthesize data to fill gaps associated with limited monitoring networks and estimate impairment probabilities for contaminants of interest. The U.S. Geological Survey, as part of the National Water-Quality Assessment Program and in cooperation with the North Carolina Division of Water Quality, has developed a nonlinear regression model to estimate impairment probabilities for all river segments, or reaches, in North Carolina's Neuse River basin. In this study, a reach is considered impaired if the annual mean concentration of total nitrogen is greater than 1.5 milligrams per liter (mg/L), a concentration associated with stream eutrophication. A SPARROW (Spatially Referenced Regressions on Watershed attributes) total nitrogen model was calibrated using data from three large basins in eastern North Carolina, including the Neuse River basin. The model specifies in-stream nitrogen flux as a nonlinear function of nitrogen sources, including point sources, atmospheric deposition, and inputs from agricultural and developed land, and of terrestrial and aquatic nutrient processing. Because data are managed in a geographic information system, the SPARROW model uses information that can be derived from the stream reach network about the spatial relations among nitrogen fluxes, sources, landscape characteristics, and stream characteristics. This presentation describes a process for estimating the proportion (and 90-percent confidence interval) of Neuse River reaches with a total nitrogen concentration less than 1.5 mg/L and discusses the incorporation of prediction errors into the analysis.
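A hedged sketch of how prediction error can be folded into an impairment probability (the ln-scale residual-error model and the numbers below are illustrative assumptions, not the calibrated SPARROW values): if the model's prediction error on the log scale is approximately normal, the probability of exceeding the 1.5 mg/L threshold is a normal tail probability.

```python
import math

def impairment_probability(pred_mgL, ln_residual_se, threshold=1.5):
    # P(C > threshold) = 1 - Phi((ln threshold - ln prediction) / se),
    # with the standard normal CDF Phi computed via math.erf.
    z = (math.log(threshold) - math.log(pred_mgL)) / ln_residual_se
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - phi

# A reach predicted exactly at the threshold is impaired with probability 0.5;
# higher predictions push the probability toward 1.
print(impairment_probability(1.5, 0.4), impairment_probability(2.5, 0.4))
```

The reach-level probabilities can then be summed to estimate the expected proportion of impaired reaches, which is how prediction error enters the basin-wide assessment.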
Adgate, J L; Barr, D B; Clayton, C A; Eberly, L E; Freeman, N C; Lioy, P J; Needham, L L; Pellizzari, E D; Quackenboss, J J; Roy, A; Sexton, K
2001-01-01
The Minnesota Children's Pesticide Exposure Study is a probability-based sample of 102 children 3-13 years old who were monitored for commonly used pesticides. During the summer of 1997, first-morning-void urine samples (1-3 per child) were obtained for 88% of study children and analyzed for metabolites of insecticides and herbicides: carbamates and related compounds (1-NAP), atrazine (AM), malathion (MDA), and chlorpyrifos and related compounds (TCPy). TCPy was present in 93% of the samples, whereas 1-NAP, MDA, and AM were detected in 45%, 37%, and 2% of samples, respectively. Measured intrachild means ranged from 1.4 microg/L for MDA to 9.2 microg/L for TCPy, and there was considerable intrachild variability. For children providing three urine samples, geometric mean TCPy levels were greater than the detection limit in 98% of the samples, and nearly half the children had geometric mean 1-NAP and MDA levels greater than the detection limit. Interchild variability was significantly greater than intrachild variability for 1-NAP (p = 0.0037) and TCPy (p < 0.0001). The four metabolites measured were not correlated within urine samples, and children's metabolite levels did not vary systematically by sex, age, race, household income, or putative household pesticide use. On a log scale, mean TCPy levels were significantly higher in urban than in nonurban children (7.2 vs. 4.7 microg/L; p = 0.036). Weighted population mean concentrations were 3.9 [standard error (SE) = 0.7; 95% confidence interval (CI), 2.5, 5.3] microg/L for 1-NAP, 1.7 (SE = 0.3; 95% CI, 1.1, 2.3) microg/L for MDA, and 9.6 (SE = 0.9; 95% CI, 7.8, 11) microg/L for TCPy. The weighted population results estimate the overall mean and variability of metabolite levels for more than 84,000 children in the census tracts sampled. Levels of 1-NAP were lower than reported adult reference range concentrations, whereas TCPy concentrations were substantially higher. Concentrations of MDA were detected more frequently
Smith, Carlas S.; Stallinga, Sjoerd; Lidke, Keith A.; Rieger, Bernd; Grunwald, David
2015-01-01
Single-molecule detection in fluorescence nanoscopy has become a powerful tool in cell biology but can present vexing issues in image analysis, such as limited signal, unspecific background, empirically set thresholds, image filtering, and false-positive detection limiting overall detection efficiency. Here we present a framework in which expert knowledge and parameter tweaking are replaced with a probability-based hypothesis test. Our method delivers robust and threshold-free signal detection with a defined error estimate and improved detection of weaker signals. The probability value has consequences for downstream data analysis, such as weighing a series of detections and corresponding probabilities, Bayesian propagation of probability, or defining metrics in tracking applications. We show that the method outperforms all current approaches, yielding a detection efficiency of >70% and a false-positive detection rate of <5% under conditions down to 17 photons/pixel background and 180 photons/molecule signal, which is beneficial for any kind of photon-limited application. Examples include limited brightness and photostability, phototoxicity in live-cell single-molecule imaging, and use of new labels for nanoscopy. We present simulations, experimental data, and tracking of low-signal mRNAs in yeast cells. PMID:26424801
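The idea of replacing an empirical threshold with a probability-based decision can be sketched for a single photon-limited pixel (a simplified stand-in, not the authors' detection framework; the background rate and count below are illustrative):

```python
import math

def poisson_sf(n, lam):
    # Survival function P(N >= n) for N ~ Poisson(lam), computed by
    # accumulating the terms of the CDF up to n-1.
    term = math.exp(-lam)
    cdf = 0.0
    for k in range(n):
        cdf += term
        term *= lam / (k + 1)
    return 1.0 - cdf

background = 17.0          # photons/pixel, matching the abstract's test case
observed = 35              # photon count in a candidate spot pixel
p_value = poisson_sf(observed, background)
detected = p_value < 0.05  # the false-positive rate is controlled explicitly
print(p_value, detected)
```

Because every candidate carries a p-value rather than a pass/fail flag, downstream steps (weighting detections, Bayesian propagation, tracking metrics) can use the probability directly, as the abstract describes.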
A generic probability based model to derive regional patterns of crops in time and space
NASA Astrophysics Data System (ADS)
Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan
2015-04-01
Croplands are not only key to the human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influence soil erosion, and contribute substantially to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to site conditions, economic boundary settings, and the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and a given year depends on a) the suitability of the land for the cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g. a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated against the statistics reported by the joint EU/CAPRI database. The assessment is given at NUTS2 level using per cent bias as a measure, with a threshold of 15% as minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error prone as crops that occupy only minor parts of the unit. However, still roughly 40% show an absolute per cent bias above the 15% threshold. This
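The core disaggregation step can be sketched as probability-weighted sampling (the suitability scores, crop names, and regional shares below are invented for illustration; the actual model also conditions on crop sequences and expert rules):

```python
import random

def allocate_crops(cells_suitability, regional_share, seed=0):
    # For each grid cell, draw one crop with probability proportional to
    # (cell suitability for the crop) x (regional statistical share).
    rng = random.Random(seed)
    out = []
    for suit in cells_suitability:   # suit: {crop: suitability score}
        weights = {c: suit[c] * regional_share.get(c, 0.0) for c in suit}
        total = sum(weights.values())
        r, acc = rng.uniform(0.0, total), 0.0
        for crop, w in weights.items():
            acc += w
            if r <= acc:
                out.append(crop)
                break
    return out

share = {"wheat": 0.5, "maize": 0.3, "rapeseed": 0.2}   # NUTS2-style shares
cells = [{"wheat": 0.8, "maize": 0.6, "rapeseed": 0.4}] * 1000
crops = allocate_crops(cells, share)
print(crops.count("wheat") / len(crops))
```

Aggregating the sampled grid back to the region and comparing against the reported shares is exactly the per cent bias check the abstract describes.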
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically of the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained from these statistical parameters. In the literature, there are publications that try to fit the histogram of TEC estimates to well-known pdfs such as the Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape increases estimation error, and all information extracted from such a pdf will carry this error; such techniques are likely to introduce artificial characteristics into the estimated pdf that are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent
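A minimal Gaussian KDE sketch (in practice a library routine such as scipy.stats.gaussian_kde would be used; the synthetic "TEC" values below are illustrative): the pdf is estimated without assuming a shape, and the moments follow directly from the data.

```python
import math
import random
import statistics

def gaussian_kde(samples, bandwidth=None):
    # Sum of Gaussian kernels centered on the samples; bandwidth defaults
    # to Silverman's rule of thumb.
    n = len(samples)
    if bandwidth is None:
        bandwidth = 1.06 * statistics.stdev(samples) * n ** (-1 / 5)
    def pdf(x):
        c = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
        return c * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                       for s in samples)
    return pdf

rng = random.Random(0)
tec = [rng.gauss(25.0, 5.0) for _ in range(500)]   # synthetic TEC values (TECU)
pdf = gaussian_kde(tec)
mean = statistics.fmean(tec)
var = statistics.variance(tec)
print(round(mean, 1), round(var, 1), round(pdf(mean), 3))
```

Unlike a fitted Gaussian or exponential, the KDE pdf tracks whatever shape the observed values actually have, including heavy tails and outliers.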
Fujimoto, Shinichiro; Kondo, Takeshi; Yamamoto, Hideya; Yokoyama, Naoyuki; Tarutani, Yasuhiro; Takamura, Kazuhisa; Urabe, Yoji; Konno, Kumiko; Nishizaki, Yuji; Shinozaki, Tomohiro; Kihara, Yasuki; Daida, Hiroyuki; Isshiki, Takaaki; Takase, Shinichi
2015-09-01
Existing methods to calculate the pre-test probability of obstructive coronary artery disease (CAD) were established using selected high-risk patients who were referred to conventional coronary angiography. The purpose of this study is to develop and validate a new method for estimating the pre-test probability of obstructive CAD using patients who underwent coronary CT angiography (CTA), which could be applicable to a wider range of patients. Using 4137 consecutive patients with suspected CAD who underwent coronary CTA at our institution, a multivariate logistic regression model with clinical factors as covariates was used to calculate the pre-test probability (K-score) of obstructive CAD as determined by coronary CTA. The K-score was compared with the Duke clinical score using the area under the curve (AUC) of the receiver-operating characteristic curve. External validation was performed on an independent sample of 319 patients. The final model included eight significant predictors: age, gender, coronary risk factors (hypertension, diabetes mellitus, dyslipidemia, smoking), history of cerebral infarction, and chest symptoms. The AUC of the K-score was significantly greater than that of the Duke clinical score for both the derivation (0.736 vs. 0.699) and validation (0.714 vs. 0.688) data sets. Among patients who underwent coronary CTA, the newly developed K-score had better pre-test prediction ability for obstructive CAD than the Duke clinical score in a Japanese population. PMID:24770610
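How a logistic pre-test score of this kind is applied can be sketched as follows; the coefficients below are invented for illustration and are NOT the published K-score weights.

```python
import math

# Hypothetical coefficients, one per predictor in the abstract's final model.
COEF = {"intercept": -4.0, "age_per_year": 0.05, "male": 0.8,
        "hypertension": 0.4, "diabetes": 0.5, "dyslipidemia": 0.3,
        "smoking": 0.4, "cerebral_infarction": 0.6, "chest_symptom": 0.9}

def pretest_probability(age, male, risk_factors):
    # Logistic model: probability = 1 / (1 + exp(-linear predictor)).
    logit = COEF["intercept"] + COEF["age_per_year"] * age + COEF["male"] * male
    logit += sum(COEF[f] for f in risk_factors)
    return 1.0 / (1.0 + math.exp(-logit))

p = pretest_probability(65, 1, ["hypertension", "chest_symptom"])
print(round(p, 2))
```

Fitting replaces the invented numbers with maximum-likelihood estimates from the derivation cohort; the AUC comparison in the abstract then measures how well the resulting probabilities rank diseased versus non-diseased patients.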
Species-Level Deconvolution of Metagenome Assemblies with Hi-C–Based Contact Probability Maps
Burton, Joshua N.; Liachko, Ivan; Dunham, Maitreya J.; Shendure, Jay
2014-01-01
Microbial communities consist of mixed populations of organisms, including unknown species in unknown abundances. These communities are often studied through metagenomic shotgun sequencing, but standard library construction methods remove long-range contiguity information; thus, shotgun sequencing and de novo assembly of a metagenome typically yield a collection of contigs that cannot readily be grouped by species. Methods for generating chromatin-level contact probability maps, e.g., as generated by the Hi-C method, provide a signal of contiguity that is completely intracellular and contains both intrachromosomal and interchromosomal information. Here, we demonstrate how this signal can be exploited to reconstruct the individual genomes of microbial species present within a mixed sample. We apply this approach to two synthetic metagenome samples, successfully clustering the genome content of fungal, bacterial, and archaeal species with more than 99% agreement with published reference genomes. We also show that the Hi-C signal can secondarily be used to create scaffolded genome assemblies of individual eukaryotic species present within the microbial community, with higher levels of contiguity than some of the species’ published reference genomes. PMID:24855317
Filled Pause Refinement Based on the Pronunciation Probability for Lecture Speech
Long, Yan-Hua; Ye, Hong
2015-01-01
Nowadays, although automatic speech recognition has become quite proficient at recognizing or transcribing well-prepared fluent speech, the transcription of speech that contains many disfluencies, such as spontaneous conversational and lecture speech, remains problematic. Filled pauses (FPs) are the most frequently occurring disfluencies in this type of speech. Recent studies have shown that FPs increase the error rates of state-of-the-art speech transcription, primarily because most FPs are not well annotated or provided in training data transcriptions and because of the similarities in acoustic characteristics between FPs and some common non-content words. To enhance the speech transcription system, we propose a new automatic refinement approach to detect FPs in British English lecture speech transcription. This approach combines the pronunciation probabilities for each word in the dictionary and acoustic and language model scores for FP refinement through a modified speech recognition forced-alignment framework. We evaluate the proposed approach on the Reith Lectures speech transcription task, in which only imperfect training transcriptions are available. Successful results are achieved for both the development and evaluation datasets. Acoustic models trained on different styles of speech genres have been investigated with respect to FP refinement. To further validate the effectiveness of the proposed approach, speech transcription performance has also been examined using systems built on training data transcriptions with and without FP refinement. PMID:25860959
Towards smart prosthetic hand: Adaptive probability-based skeletal muscle fatigue model.
Kumar, Parmod; Sebastian, Anish; Potluri, Chandrasekhar; Urfer, Alex; Naidu, D; Schoen, Marco P
2010-01-01
Skeletal muscle force can be estimated using surface electromyographic (sEMG) signals. Usually, the surface location for the sensors is near the respective muscle motor unit points. Skeletal muscles generate a spatial EMG signal, which causes cross talk between different sEMG sensors. In this study, an array of three sEMG sensors is used to capture information about muscle dynamics in terms of sEMG signals. The recorded sEMG signals are filtered using nonlinear half-Gaussian Bayesian filters with optimized parameters, and the muscle force signal is filtered using a Chebyshev type-II filter. The filter optimization is accomplished using Genetic Algorithms. Three discrete-time state-space muscle fatigue models are obtained using system identification and modal transformation for the three sets of sensors for a single motor unit. The outputs of these three muscle fatigue models are fused using a probabilistic Kullback Information Criterion (KIC) for model selection. The final fused output is estimated with an adaptive probability of KIC, which provides improved force estimates. PMID:21095927
Bažant, Zdeněk P.; Le, Jia-Liang; Bazant, Martin Z.
2009-01-01
The failure probability of engineering structures such as aircraft, bridges, dams, nuclear structures, and ships, as well as microelectronic components and medical implants, must be kept extremely low, typically <10−6. The safety factors needed to ensure it have so far been assessed empirically. For perfectly ductile and perfectly brittle structures, the empirical approach is sufficient because the cumulative distribution function (cdf) of random material strength is known and fixed. However, such an approach is insufficient for structures consisting of quasibrittle materials, which are brittle materials with inhomogeneities that are not negligible compared with the structure size. The reason is that the strength cdf of quasibrittle structure varies from Gaussian to Weibullian as the structure size increases. In this article, a recently proposed theory for the strength cdf of quasibrittle structure is refined by deriving it from fracture mechanics of nanocracks propagating by small, activation-energy-controlled, random jumps through the atomic lattice. This refinement also provides a plausible physical justification of the power law for subcritical creep crack growth, hitherto considered empirical. The theory is further extended to predict the cdf of structural lifetime at constant load, which is shown to be size- and geometry-dependent. The size effects on structure strength and lifetime are shown to be related and the latter to be much stronger. The theory fits previously unexplained deviations of experimental strength and lifetime histograms from the Weibull distribution. Finally, a boundary layer method for numerical calculation of the cdf of structural strength and lifetime is outlined. PMID:19561294
Rincon, Diego F; Hoy, Casey W; Cañas, Luis A
2015-04-01
Most predator-prey models extrapolate functional responses from small-scale experiments, assuming spatially uniform within-plant predator-prey interactions. However, some predators focus their search on certain plant regions, and herbivores tend to select leaves to balance their nutrient uptake and exposure to plant defenses. Individual-based models that account for heterogeneous within-plant predator-prey interactions can be used to scale up functional responses, but they require the generation of explicit prey spatial distributions within plant architecture models. The silverleaf whitefly, Bemisia tabaci biotype B (Gennadius) (Hemiptera: Aleyrodidae), is a significant pest of tomato crops worldwide that exhibits highly aggregated populations at several spatial scales, including within the plant. As part of an analytical framework for understanding predator-silverleaf whitefly interactions, the objective of this research was to develop an algorithm to generate explicit spatial counts of silverleaf whitefly nymphs within tomato plants. The algorithm requires the plant size and the number of silverleaf whitefly individuals to distribute as inputs, and includes models that describe infestation probabilities per leaf nodal position and the aggregation pattern of the silverleaf whitefly within tomato plants and leaves. The output is a simulated number of silverleaf whitefly individuals for each leaf and leaflet on one or more plants. Parameter estimation was performed using nymph counts per leaflet censused from 30 artificially infested tomato plants. Validation revealed substantial agreement between algorithm outputs and independent data that included the distribution of counts of both eggs and nymphs. This algorithm can be used in simulation models that explore the effect of local heterogeneity on whitefly-predator dynamics. PMID:26313173
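A sketch of generating aggregated within-plant counts under stated assumptions: insect aggregation is commonly modeled with a negative binomial (a gamma-Poisson mixture), and here nymphs are spread over leaves with leaf-specific infestation weights. Both the weights and the dispersion parameter k are invented for illustration; they are not the paper's fitted values.

```python
import random

def distribute_nymphs(total, leaf_weights, k=0.5, seed=0):
    # Draw a gamma "intensity" per leaf (mean = leaf weight, shape = k,
    # so small k gives strong aggregation), then rescale so the simulated
    # counts sum to the requested total.
    rng = random.Random(seed)
    raw = [rng.gammavariate(k, w / k) for w in leaf_weights]
    s = sum(raw)
    counts = [int(total * r / s) for r in raw]
    counts[0] += total - sum(counts)   # assign the rounding remainder to leaf 1
    return counts

weights = [0.05, 0.15, 0.30, 0.30, 0.15, 0.05]  # infestation prob. by leaf node
counts = distribute_nymphs(200, weights)
print(counts, sum(counts))
```

The same two ingredients appear in the abstract's algorithm: a per-node infestation probability model and an aggregation model that makes counts clump far more than a uniform allocation would.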
NASA Astrophysics Data System (ADS)
Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md
2015-12-01
Students' understanding of probability concepts has been investigated from various perspectives. Competency, on the other hand, is often measured separately in the form of a structured test. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle, where students provided their responses through an online quiz. The study shows that the majority of the students perceived little understanding of conditional and independent events prior to learning about them but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is an indication of some increase in learning and knowledge about some probability concepts at the end of the two weeks of lessons on probability concepts.
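The probabilistic model underlying this calibration is the dichotomous Rasch model: the probability that a student of ability theta succeeds on an item of difficulty delta, both on the same logit scale (the values below are illustrative, not estimates from the study).

```python
import math

def rasch_probability(theta, delta):
    # Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(theta - delta))).
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

# A student whose ability equals the item difficulty succeeds half the time;
# a harder item (larger delta) lowers the success probability.
print(rasch_probability(0.0, 0.0), rasch_probability(0.0, 1.5))
```

Placing persons and items on this common logit scale is what lets a Rasch map compare perceived understanding and demonstrated competency directly.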
Results from probability-based, simplified, off-shore Louisiana CSEM hydrocarbon reservoir modeling
NASA Astrophysics Data System (ADS)
Stalnaker, J. L.; Tinley, M.; Gueho, B.
2009-12-01
Perhaps the biggest impediment to the commercial application of controlled-source electromagnetic (CSEM) geophysics to marine hydrocarbon exploration is the inefficiency of modeling and data inversion. If an understanding of the typical (in a statistical sense) geometrical and electrical nature of a reservoir can be attained, then it is possible to derive therefrom a simplified yet accurate model of the electromagnetic interactions that produce a measured marine CSEM signal, leading ultimately to efficient modeling and inversion. We have compiled geometric and resistivity measurements from roughly 100 known, producing off-shore Louisiana Gulf of Mexico reservoirs. Recognizing that most reservoirs could be recreated roughly from a sectioned hemi-ellipsoid, we devised a unified, compact reservoir geometry description. Each reservoir was initially fit to the ellipsoid by eye, though we plan in the future to perform a more rigorous least-squares fit. We created, using kernel density estimation, initial probabilistic descriptions of reservoir parameter distributions, with the understanding that additional information would not fundamentally alter our results, but rather increase accuracy. From the probabilistic description, we designed an approximate model consisting of orthogonally oriented current segments distributed across the ellipsoid--enough to define the shape, yet few enough to be resolved during inversion. The moment and length of the currents are mapped to the geometry and resistivity of the ellipsoid. The probability density functions (pdfs) derived from the reservoir statistics serve as a workbench. We first use the pdfs in a Monte Carlo simulation designed to assess the detectability of off-shore Louisiana reservoirs using magnitude versus offset (MVO) anomalies. From the pdfs, many reservoir instances are generated (using rejection sampling) and each normalized MVO response is calculated. The response strength is summarized by numerically computing MVO power, and that
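The rejection-sampling step described above can be sketched as follows; the toy triangular pdf and its bounds stand in for the compiled Gulf of Mexico parameter pdfs, which are not reproduced here.

```python
import random

def rejection_sample(pdf, lo, hi, pdf_max, rng, n):
    # Draw candidates uniformly on [lo, hi] and accept each with
    # probability pdf(x) / pdf_max; accepted draws follow the target pdf.
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, pdf_max) <= pdf(x):
            out.append(x)
    return out

def tri_pdf(x):
    # Toy triangular pdf on [0, 2] with its peak at 1.
    return x if x <= 1.0 else 2.0 - x

rng = random.Random(42)
thicknesses = rejection_sample(tri_pdf, 0.0, 2.0, 1.0, rng, 5000)
mean = sum(thicknesses) / len(thicknesses)
print(round(mean, 2))
```

Each accepted draw plays the role of one simulated reservoir instance, for which the normalized MVO response would then be computed in the Monte Carlo detectability study.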
An EEG-Based Fuzzy Probability Model for Early Diagnosis of Alzheimer's Disease.
Chiang, Hsiu-Sen; Pao, Shun-Chi
2016-05-01
Alzheimer's disease is a degenerative brain disease that results in cardinal memory deterioration and significant cognitive impairments. The early treatment of Alzheimer's disease can significantly reduce deterioration. Early diagnosis is difficult, and early symptoms are frequently overlooked. While much of the literature focuses on disease detection, the use of electroencephalography (EEG) in Alzheimer's diagnosis has received relatively little attention. This study combines the fuzzy and associative Petri net methodologies to develop a model for the effective and objective detection of Alzheimer's disease. Differences in EEG patterns between normal subjects and Alzheimer patients are used to establish prediction criteria for Alzheimer's disease, potentially providing physicians with a reference for early diagnosis, allowing for early action to delay the disease progression. PMID:27059738
The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach (visit our website for more information on the NCS: www.nationalchildrensstudy.gov). The full sample is designed to be representative of ...
Translating CFC-based piston ages into probability density functions of ground-water age in karst
Long, A.J.; Putnam, L.D.
2006-01-01
Temporal age distributions are equivalent to probability density functions (PDFs) of transit time. The type and shape of a PDF provides important information related to ground-water mixing at the well or spring and the complex nature of flow networks in karst aquifers. Chlorofluorocarbon (CFC) concentrations measured for samples from 12 locations in the karstic Madison aquifer were used to evaluate the suitability of various PDF types for this aquifer. Parameters of PDFs could not be estimated within acceptable confidence intervals for any of the individual sites. Therefore, metrics derived from CFC-based apparent ages were used to evaluate results of PDF modeling in a more general approach. The ranges of these metrics were established as criteria against which families of PDFs could be evaluated for their applicability to different parts of the aquifer. Seven PDF types, including five unimodal and two bimodal models, were evaluated. Model results indicate that unimodal models may be applicable to areas close to conduits that have younger piston (i.e., apparent) ages and that bimodal models probably are applicable to areas farther from conduits that have older piston ages. The two components of a bimodal PDF are interpreted as representing conduit and diffuse flow, and transit times of as much as two decades may separate these PDF components. Areas near conduits may be dominated by conduit flow, whereas areas farther from conduits having bimodal distributions probably have good hydraulic connection to both diffuse and conduit flow. © 2006 Elsevier B.V. All rights reserved.
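An illustrative bimodal transit-time PDF (all parameters invented, not fitted Madison aquifer values): a mixture of a quick conduit-flow component and a lagged, slower diffuse-flow component, with the lag standing in for the roughly two-decade separation described above.

```python
import math

def bimodal_pdf(t, f_conduit=0.4, tau_c=5.0, tau_d=25.0, lag_d=20.0):
    # Mixture of two exponential components: conduit flow with mean
    # transit time tau_c, and diffuse flow delayed by lag_d years with
    # mean tau_d beyond the lag. Weights sum to 1, so the mixture is a pdf.
    conduit = math.exp(-t / tau_c) / tau_c
    diffuse = 0.0 if t < lag_d else math.exp(-(t - lag_d) / tau_d) / tau_d
    return f_conduit * conduit + (1.0 - f_conduit) * diffuse

# Crude numerical check that the mixture integrates to ~1 over 0..400 years.
dt = 0.05
total = sum(bimodal_pdf(k * dt) * dt for k in range(int(400 / dt)))
print(round(total, 2))
```

Convolving a PDF like this with the historical CFC input function yields the modeled concentration at the well, which is how candidate PDF families are compared against the CFC-based apparent ages.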
Improvement of HMM-based action classification by using state transition probability
NASA Astrophysics Data System (ADS)
Kitamura, Yuka; Aruga, Haruki; Hashimoto, Manabu
2015-04-01
We propose a method to classify multiple similar actions contained in human behaviors by considering a weakly constrained order of "actions". The proposed method regards a human behavior as a combination of "action" patterns whose order is weakly constrained. In this method, actions are classified using not only image features but also the consistency of transitions between one action and the next. By considering such action transitions, our method can recognize human behavior even if the image features of different actions are similar to each other. Based on this idea, we have improved the previous HMM-based algorithm effectively. Through experiments using test image sequences of human behavior in a bathroom, we have confirmed that the average classification success rate is 97%, which is about 53% higher than the previous method.
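A minimal sketch of the idea (all probabilities invented for illustration, and the per-frame scores stand in for HMM observation likelihoods): classification combines per-frame evidence with a weakly constrained action transition matrix, decoded with Viterbi.

```python
ACTIONS = ["enter", "wash", "dry", "leave"]
TRANS = {  # P(next action | current action): a weak ordering constraint
    "enter": {"enter": 0.2, "wash": 0.7, "dry": 0.05, "leave": 0.05},
    "wash":  {"enter": 0.05, "wash": 0.5, "dry": 0.4, "leave": 0.05},
    "dry":   {"enter": 0.05, "wash": 0.1, "dry": 0.5, "leave": 0.35},
    "leave": {"enter": 0.1, "wash": 0.05, "dry": 0.05, "leave": 0.8},
}

def viterbi(obs_scores):
    # obs_scores: one {action: P(frame | action)} dict per frame.
    best = {a: obs_scores[0][a] for a in ACTIONS}
    path = {a: [a] for a in ACTIONS}
    for scores in obs_scores[1:]:
        new_best, new_path = {}, {}
        for a in ACTIONS:
            prev = max(ACTIONS, key=lambda p: best[p] * TRANS[p][a])
            new_best[a] = best[prev] * TRANS[prev][a] * scores[a]
            new_path[a] = path[prev] + [a]
        best, path = new_best, new_path
    return path[max(ACTIONS, key=lambda a: best[a])]

# "wash" and "dry" frames look alike; the transition model disambiguates.
frames = [{"enter": 0.9, "wash": 0.05, "dry": 0.03, "leave": 0.02},
          {"enter": 0.1, "wash": 0.45, "dry": 0.40, "leave": 0.05},
          {"enter": 0.05, "wash": 0.40, "dry": 0.45, "leave": 0.10},
          {"enter": 0.02, "wash": 0.03, "dry": 0.05, "leave": 0.90}]
print(viterbi(frames))
```

Even though frames 2 and 3 are nearly ambiguous between "wash" and "dry", the transition structure favors the wash-then-dry ordering, which is the effect the abstract attributes to using state transition probabilities.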
Probability-Based Design Criteria of the ASCE 7 Tsunami Loads and Effects Provisions (Invited)
NASA Astrophysics Data System (ADS)
Chock, G.
2013-12-01
Mitigation of tsunami risk requires a combination of emergency preparedness for evacuation and structural resilience of the critical facilities, infrastructure, and key resources necessary for immediate response and economic and social recovery. Critical facilities include emergency response, medical, tsunami refuges and shelters, ports and harbors, lifelines, transportation, telecommunications, power, financial institutions, and major industrial/commercial facilities. The Tsunami Loads and Effects Subcommittee of the ASCE/SEI 7 Standards Committee is developing a proposed new Chapter 6 - Tsunami Loads and Effects for the 2016 edition of the ASCE 7 Standard. ASCE 7 provides the minimum design loads and requirements for structures subject to building codes such as the International Building Code utilized in the USA. In this paper we provide a review emphasizing the intent of these new code provisions and explain the design methodology. The ASCE 7 provisions for Tsunami Loads and Effects enable a set of analysis and design methodologies that are consistent with performance-based engineering based on probabilistic criteria. The ASCE 7 Tsunami Loads and Effects chapter will initially be applicable only to the states of Alaska, Washington, Oregon, California, and Hawaii. Ground shaking effects and subsidence from a preceding local offshore Maximum Considered Earthquake will also be considered prior to tsunami arrival for Alaska and the Pacific Northwest states governed by nearby offshore subduction earthquakes. For national tsunami design provisions to achieve a consistent reliability standard of structural performance for community resilience, a new generation of tsunami inundation hazard maps for design is required. The lesson of recent tsunamis is that historical records alone do not provide a sufficient measure of the potential heights of future tsunamis. Engineering design must consider the occurrence of events greater than
Computing posterior probabilities for score-based alignments using ppALIGN.
Wolfsheimer, Stefan; Hartmann, Alexander; Rabus, Ralf; Nuel, Gregory
2012-01-01
Score-based pairwise alignments are widely used in bioinformatics, in particular with molecular database search tools such as the BLAST family. Due to sophisticated heuristics, such algorithms are usually fast, but the underlying scoring model unfortunately lacks a statistical description of the reliability of the reported alignments. In particular, close to gaps and in low-score or low-complexity regions, a huge number of alternative alignments arise, which decreases the certainty of the alignment. ppALIGN is a software package that uses hidden Markov model techniques to compute the position-wise reliability of score-based pairwise alignments of DNA or protein sequences. The design of the model allows a direct connection between the scoring function and the parameters of the probabilistic model. For this reason it is suitable for analyzing the outcomes of popular score-based aligners and search tools without having to choose a complicated set of parameters. By contrast, our program only requires the classical score parameters (the scoring function and gap costs). The package comes with a library written in C++, a standalone program for user-defined alignments (ppALIGN), and another program (ppBLAST) which can process a complete result set of BLAST. The main algorithms essentially exhibit linear time complexity (in the alignment lengths), and they are hence suitable for on-line computations. We have also included alternative decoding algorithms to provide alternative alignments. ppALIGN is a fast program/library that helps detect and quantify questionable regions in pairwise alignments. Due to its structure and input/output interface, it can be connected to other post-processing tools. Empirically, we illustrate its usefulness in terms of correctly predicted reliable regions for sequences generated using the ROSE model of sequence evolution, and identify sensor-specific regions in the denitrifying betaproteobacterium Aromatoleum aromaticum. PMID
NASA Astrophysics Data System (ADS)
Eleftheriadou, Anastasia K.; Baltzopoulou, Aikaterini D.; Karabinis, Athanasios I.
2016-06-01
The current seismic risk assessment is based on two discrete approaches, actual and probable, with the produced results validated against each other. In the first part of this research, the seismic risk is evaluated from the available data on the mean statistical repair/strengthening or replacement cost for the total number of damaged structures (180,427 buildings) after the 7/9/1999 Parnitha (Athens) earthquake. The actual evaluated seismic risk is then compared to the estimated probable structural losses, presented in the second part of the paper, based on a damage scenario for the same earthquake. The applied damage scenario is based on recently developed damage probability matrices (DPMs) from the Athens (Greece) damage database. The seismic risk estimation refers to 750,085 buildings situated in the extended urban region of Athens. The building exposure is categorized into five typical structural types and represents 18.80% of the entire building stock in Greece. This information is provided by the National Statistics Service of Greece (NSSG) according to the 2000-2001 census. The seismic input is characterized by the ratio a_g/a_o, where a_g is the regional peak ground acceleration (PGA), evaluated from the earlier estimated research macroseismic intensities, and a_o is the PGA according to the hazard map of the 2003 Greek Seismic Code. Finally, the collected financial data, derived from the different National Services responsible for post-earthquake crisis management and covering the repair/strengthening or replacement costs and other categories of costs for the rehabilitation of earthquake victims (construction and operation of settlements for the earthquake homeless, rent supports, demolitions, shorings), are used to determine the final total seismic risk factor.
NASA Astrophysics Data System (ADS)
Slater, Paul B.; Dunkl, Charles F.
2012-03-01
Employing the Hilbert-Schmidt measure, we explicitly compute and analyze a number of determinantal product (bivariate) moments ⟨|ρ|^k |ρ^PT|^n⟩, k, n = 0, 1, 2, 3, …, with PT denoting the partial transpose, for both generic (9-dimensional) two-rebit (α = 1/2) and generic (15-dimensional) two-qubit (α = 1) density matrices ρ. The results are then incorporated into a general formula, parameterized by k, n and α, with the case α = 2 presumptively corresponding to generic (27-dimensional) quaternionic systems. Holding the Dyson-index-like parameter α fixed, the induced univariate moments ⟨(|ρ||ρ^PT|)^n⟩ and ⟨|ρ^PT|^n⟩ are inputted into a Legendre-polynomial-based (least-squares) probability-distribution reconstruction algorithm of Provost (2005 Mathematica J. 9 727), yielding α-specific separability-probability estimates. Since, as the number of inputted moments grows, estimates based on the variable |ρ||ρ^PT| strongly decrease, while ones employing |ρ^PT| strongly increase (and converge faster), the gaps between upper and lower estimates diminish, yielding sharper and sharper bounds. Remarkably, for α = 2, with the use of 2325 moments, a separability-probability lower bound is found that is 0.999999987 times as large as 26/323 ≈ 0.0804954. For α = 1, based on 2415 moments, a lower bound results that is 0.999997066 times as large as 8/33 ≈ 0.242424, a (simpler still) fractional value that had previously been conjectured (Slater 2007 J. Phys. A: Math. Theor. 40 14279). Furthermore, for α = 1/2, employing 3310 moments, the lower bound is 0.999955 times as large as 29/64 = 0.453125, a rational value previously considered (Slater 2010 J. Phys. A: Math. Theor. 43 195302).
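The moment-to-distribution step can be sketched on a toy density: reconstructing f(x) = 2x on [0, 1] from its power moments with shifted Legendre polynomials, in the spirit of the Provost least-squares algorithm cited above (a minimal illustration, not the paper's separability computation):

```python
import math
from numpy.polynomial import legendre as L

# Target: X ~ Beta(2, 1), so f(x) = 2x on [0, 1] and E[X^n] = 2 / (n + 2).
def power_moments(n_max):
    return [2.0 / (n + 2) for n in range(n_max + 1)]

def legendre_coeffs(moments):
    """lambda_k = (2k+1) * E[P_k(2X - 1)], computed from power moments;
    the shifted Legendre P_k(2x-1) satisfy int_0^1 P_k^2 = 1/(2k+1)."""
    lams = []
    for k in range(len(moments)):
        a = L.leg2poly([0.0] * k + [1.0])   # P_k as powers of y = 2x - 1
        e_pk = 0.0
        for j, aj in enumerate(a):
            # E[(2X - 1)^j] via the binomial theorem
            e_yj = sum(math.comb(j, i) * (2.0 ** i) * ((-1.0) ** (j - i))
                       * moments[i] for i in range(j + 1))
            e_pk += aj * e_yj
        lams.append((2 * k + 1) * e_pk)
    return lams

def density(x, lams):
    """Reconstructed density: sum_k lambda_k * P_k(2x - 1)."""
    return sum(lam * L.legval(2 * x - 1, [0.0] * k + [1.0])
               for k, lam in enumerate(lams))

lams = legendre_coeffs(power_moments(3))
# f(x) = 2x is a degree-1 polynomial, so the expansion is already exact.
```

Because the target density is linear, two moments suffice for exact recovery; for the heavy-tailed determinantal moments above, many more terms are needed, which is why the paper feeds in thousands of moments.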
NASA Astrophysics Data System (ADS)
Hufnagel, Heike; Ehrhardt, Jan; Pennec, Xavier; Schmidt-Richberg, Alexander; Handels, Heinz
2010-03-01
In this article, we propose a unified statistical framework for image segmentation with shape prior information. The approach combines an explicitly parameterized point-based probabilistic statistical shape model (SSM) with a segmentation contour which is implicitly represented by the zero level set of a higher-dimensional surface. These two aspects are unified in a Maximum a Posteriori (MAP) estimation in which the level set is evolved to converge towards the boundary of the organ to be segmented, based on the image information, while taking into account the prior given by the SSM. The optimization of the energy functional obtained by the MAP formulation leads to an alternating update of the level set and of the fitting of the SSM. We then adapt the probabilistic SSM for multi-shape modeling and extend the approach to multiple-structure segmentation by introducing a level set function for each structure. During segmentation, the evolution of the different level set functions is coupled by the multi-shape SSM. First experimental evaluations indicate that our method is well suited for the segmentation of topologically complex, non-spherical and multiple-structure shapes. We demonstrate the effectiveness of the method by experiments on kidney segmentation as well as on hip joint segmentation in CT images.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Chen, Lingen; Sun, Fengrui
2016-03-01
The finite-time thermodynamic method based on probability analysis can describe the various performance parameters of thermodynamic systems more accurately. Starting from the relation between optimal efficiency and power output of a generalized Carnot heat engine with a finite high-temperature heat reservoir (heat source), an infinite low-temperature heat reservoir (heat sink), and heat transfer as the only irreversibility, this paper studies the power optimization of a chemically driven heat engine based on first- and second-order reaction kinetics. It puts forward a model of the coupled heat engine that can be run periodically and obtains the effects of the finite-time thermodynamic characteristics of the coupling between chemical reaction and heat engine on the power optimization. The results show that the first-order reaction kinetics model can use fuel more effectively and can provide the heat engine with a higher-temperature heat source, increasing the power output of the heat engine. Moreover, the power fluctuation bounds of the chemically driven heat engine are obtained using the probability analysis method. The results may provide some guidelines for the performance analysis and power optimization of chemically driven heat engines.
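A minimal numerical sketch of the finite-time (endoreversible) Carnot setup referred to above: with Newtonian heat transfer as the only irreversibility and unit conductances on both sides, maximizing power over efficiency recovers the Curzon-Ahlborn value η* = 1 − √(Tc/Th). The reaction-kinetics coupling of the paper is not modeled here.

```python
import math

# Endoreversible engine with unit conductances: internal reversibility
# plus Newtonian heat exchange gives P(eta) = (eta/2) * (Th - Tc/(1-eta))
# for 0 < eta < 1 - Tc/Th (the Carnot limit).
def max_power_efficiency(t_hot, t_cold, steps=200000):
    best_eta, best_p = 0.0, -1.0
    carnot = 1.0 - t_cold / t_hot
    for i in range(1, steps):
        eta = (i / steps) * carnot
        p = 0.5 * eta * (t_hot - t_cold / (1.0 - eta))
        if p > best_p:
            best_eta, best_p = eta, p
    return best_eta

eta_num = max_power_efficiency(600.0, 300.0)   # grid search
eta_ca = 1.0 - math.sqrt(300.0 / 600.0)        # Curzon-Ahlborn, ~0.2929
```

The grid maximum lands on the analytic Curzon-Ahlborn efficiency, which sits well below the Carnot limit of 0.5: maximizing power, not efficiency, is the finite-time design point.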
CP-TDMA: Coloring- and Probability-Based TDMA Scheduling for Wireless Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Zhang, Xuedan; Hong, Jun; Zhang, Lin; Shan, Xiuming; Li, Victor O. K.
This paper addresses the issue of transmission scheduling in wireless ad hoc networks. We propose a Time Division Multiple Access (TDMA) scheduling scheme based on edge coloring and probabilistic assignment, called CP-TDMA. We categorize the conflicts suffered by wireless links into two types, explicit conflicts and implicit conflicts, and utilize two different strategies to deal with them. Explicit conflicts are avoided completely by a simple distributed edge-coloring algorithm, μ-M, and implicit conflicts are minimized by applying probabilistic time slot assignments to links. We evaluate CP-TDMA analytically and numerically, and find that CP-TDMA, which requires only local information, exhibits better performance than previous work.
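The explicit-conflict step can be sketched with a simple greedy edge coloring, in which links sharing a node never receive the same slot; this is a centralized toy pass, not the distributed μ-M algorithm of the paper:

```python
# Greedy edge coloring: each link (edge) gets the smallest slot (color)
# not used by any adjacent link, i.e. any link sharing an endpoint.
def edge_color(edges):
    color = {}
    for e in edges:
        u, v = e
        used = {color[f] for f in color if u in f or v in f}
        c = 0
        while c in used:
            c += 1
        color[e] = c
    return color

# A small 4-node topology with a chord:
links = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
slots = edge_color(links)
```

Greedy coloring uses at most 2Δ − 1 slots for maximum node degree Δ; the residual implicit conflicts (links that interfere without sharing a node) are what the probabilistic slot assignment of CP-TDMA then addresses.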
EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code and a basic understanding of the EUPDF code structure, as well as of the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.
Borguesan, Bruno; Barbachan e Silva, Mariel; Grisci, Bruno; Inostroza-Ponta, Mario; Dorn, Márcio
2015-12-01
Tertiary protein structure prediction is one of the most challenging problems in structural bioinformatics. Despite the advances in algorithm development and computational strategies, predicting the folded structure of a protein from its amino acid sequence alone remains an unsolved problem. We present a new computational approach to predict the native-like three-dimensional structure of proteins. Conformational preferences of amino acid residues and secondary structure information were obtained from protein templates stored in the Protein Data Bank and represented as an Angle Probability List. Two knowledge-based prediction methods, based on Genetic Algorithms and Particle Swarm Optimization, were developed using this information. The proposed method has been tested on twenty-six case studies selected to validate our approach on different classes of proteins and folding patterns. Stereochemical and structural analyses were performed for each predicted three-dimensional structure. The results suggest that the Angle Probability List can improve the effectiveness of metaheuristics used to predict the three-dimensional structure of protein molecules by reducing the conformational search space. PMID:26495908
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic-rebound-based probabilities in an unsegmented fault or fault system, which involves computing along-fault averages of renewal-model parameters. The approach is less biased and more self-consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude-dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long-term system behavior, which is generally found to be consistent with that of physics-based earthquake simulators. Results cast doubt on whether recurrence-interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long-term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first-order elastic-rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
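The core renewal-model probability computation can be sketched as a conditional probability from an interval distribution; a lognormal is assumed here for concreteness (BPT is also commonly used), and the mean recurrence interval and aperiodicity are illustrative values:

```python
import math

# P(rupture in next dt | quiet for t years) = (F(t+dt) - F(t)) / (1 - F(t))
# for interval CDF F; here F is lognormal, parameterized so that its mean
# equals mean_ri and its coefficient of variation equals the aperiodicity.
def lognormal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def conditional_prob(t_elapsed, dt, mean_ri, aperiodicity):
    sigma = math.sqrt(math.log(1.0 + aperiodicity ** 2))
    mu = math.log(mean_ri) - 0.5 * sigma ** 2   # matches the mean
    f_t = lognormal_cdf(t_elapsed, mu, sigma)
    f_td = lognormal_cdf(t_elapsed + dt, mu, sigma)
    return (f_td - f_t) / (1.0 - f_t)

# Elastic rebound: the longer the fault has been quiet, the higher the
# conditional 30-yr probability (illustrative 150-yr mean, 0.5 aperiodicity).
p_early = conditional_prob(50.0, 30.0, 150.0, 0.5)
p_late = conditional_prob(200.0, 30.0, 150.0, 0.5)
```

The paper's contribution is in averaging such renewal parameters along an unsegmented fault; the single-point calculation above is only the building block.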
NASA Astrophysics Data System (ADS)
Lu, Xi; Lu, Mingyu; Zhou, Li-Min; Su, Zhongqing; Cheng, Li; Ye, Lin; Meng, Guang
2011-01-01
Welded tubular steel structures (WTSSs) are widely used in various engineering sectors, serving as major frameworks for many mechanical systems. There has been increasing awareness of introducing effective damage identification and up-to-the-minute health surveillance to WTSSs, so as to enhance structural reliability and integrity. In this study, propagation of guided waves (GWs) in a WTSS of rectangular cross-section, a true-scale model of a train bogie frame segment, was investigated using the finite element method (FEM) and experimental analysis with the purpose of evaluating welding damage in the WTSS. An active piezoelectric sensor network was designed and surface-bonded on the WTSS, to activate and collect GWs. Characteristics of GWs at different excitation frequencies were explored. A signal feature, termed 'time of maximal difference' (ToMD) in this study, was extracted from captured GW signals, based on which a concept, damage presence probability (DPP), was established. With ToMD and DPP, a probability-based damage imaging approach was developed. To enhance robustness of the approach to measurement noise and uncertainties, a two-level image fusion scheme was further proposed. As validation, the approach was employed to predict presence and location of slot-like damage in the welding zone of a WTSS. Identification results have demonstrated the effectiveness of the developed approach for identifying damage in WTSSs and its large potential for real-time health monitoring of WTSSs.
Flood hazard probability mapping method
NASA Astrophysics Data System (ADS)
Kalantari, Zahra; Lyon, Steve; Folkeson, Lennart
2015-04-01
In Sweden, spatially explicit approaches have been applied in various disciplines, such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling on a small scale, whereas only a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. The aim of the present study was to develop a methodology for predicting the spatial probability of flooding on a general large scale. Factors such as topography, land use, soil data and other PCDs were analysed in terms of their relative importance for flood generation. The specific objective was to test the methodology using statistical methods to identify factors having a significant role in controlling flooding. A second objective was to generate an index quantifying a flood probability value for each cell, based on different weighted factors, in order to provide a more accurate analysis of potential high flood hazards than can be obtained using just a single variable. The ability of indicator covariance to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to extract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features of the landscape. By using PCD data, realistic representations of high-probability flood regions were made, regardless of the magnitude of rain events. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.
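The weighted-factor index described above can be sketched as a per-cell weighted overlay; the factor names, weights, and the inversion of slope below are illustrative assumptions, not the study's fitted values:

```python
# Per-cell flood probability index from normalized physical catchment
# descriptors (PCDs).  Low slope raises flood likelihood, so slope enters
# inverted; weights are invented and sum to 1 so the index stays in [0, 1].
WEIGHTS = {"slope": 0.4, "clay_soil": 0.3, "urban_fraction": 0.3}

def flood_index(cell):
    """cell maps factor name -> value already normalized to [0, 1]."""
    return (WEIGHTS["slope"] * (1.0 - cell["slope"])
            + WEIGHTS["clay_soil"] * cell["clay_soil"]
            + WEIGHTS["urban_fraction"] * cell["urban_fraction"])

flat_urban = {"slope": 0.1, "clay_soil": 0.8, "urban_fraction": 0.9}
steep_forest = {"slope": 0.9, "clay_soil": 0.2, "urban_fraction": 0.0}
# flood_index(flat_urban) = 0.4*0.9 + 0.3*0.8 + 0.3*0.9 = 0.87
```

In the study, the weights would come from the statistical significance analysis of the PCDs rather than being set by hand.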
Anusavice, Kenneth J.; Jadaan, Osama M.; Esquivel–Upshaw, Josephine
2013-01-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. Objective: The aims of this study were to test the hypotheses that: (1) an increase in the core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Materials and methods: Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2) and ceramic veneer (Empress 2 Veneer Ceramic). Results: Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). Conclusion: The CARES/Life results support the proposed crown design and load orientation hypotheses. Significance: The combined application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an effective approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses, so as to minimize the risk of clinical failures. PMID:24060349
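The time-dependent fracture probability trend can be sketched with a two-parameter Weibull model plus a slow-crack-growth time scaling, a drastically simplified stand-in for the CARES/Life calculation; the Weibull modulus, scale stress, and SCG exponent are invented values:

```python
import math

# Two-parameter Weibull fracture probability with a slow-crack-growth
# (SCG) time scaling: the equivalent static stress grows with load
# duration as t**(1/n), so Pf rises with service time.  All parameter
# values here are assumptions for illustration, not fitted ceramic data.
def fracture_probability(stress_mpa, years, m=10.0, sigma0=400.0, n=20.0):
    t_ref_years = 1.0
    sigma_eq = stress_mpa * (years / t_ref_years) ** (1.0 / n)
    return 1.0 - math.exp(-((sigma_eq / sigma0) ** m))

p1 = fracture_probability(200.0, 1.0)
p10 = fracture_probability(200.0, 10.0)
# Pf increases with service time, matching the 1/5/10-year trend above.
```

The real analysis additionally integrates the Weibull risk over the finite-element stress field of the crown, which is what makes the thickness-ratio and load-orientation comparisons possible.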
Park, Dong-Uk; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Armenti, Karla R.; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A
2014-01-01
We describe here an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates using both cases and controls and controls only were developed. The prevalence of machining varied substantially across job groups (10-90%), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally, 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately, 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and the US production levels by decade found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. PMID:25256317
NASA Technical Reports Server (NTRS)
Johnson, J. R. (Principal Investigator)
1974-01-01
The author has identified the following significant results. The broad scale vegetation classification was developed for a 3,200 sq mile area in southeastern Arizona. The 31 vegetation types were derived from association tables which contained information taken at about 500 ground sites. The classification provided an information base that was suitable for use with small scale photography. A procedure was developed and tested for objectively comparing photo images. The procedure consisted of two parts, image groupability testing and image complexity testing. The Apollo and ERTS photos were compared for relative suitability as first stage stratification bases in two stage proportional probability sampling. High altitude photography was used in common at the second stage.
Hoffman, E.
2012-08-23
A series of cyclic potentiodynamic polarization tests was performed on samples of A537 carbon steel in support of a probability-based approach to evaluate the effect of chloride and sulfate on corrosion susceptibility. Testing solutions were chosen to build on previous experimental results from FY07, FY08, FY09 and FY10 to systematically evaluate the influence of the secondary aggressive species, chloride and sulfate. The FY11 results suggest that evaluating the combined effect of all aggressive species (nitrate, chloride, and sulfate) provides a consistent response for determining corrosion susceptibility. The results of this work emphasize the importance of not only nitrate concentration limits but also chloride and sulfate concentration limits.
Chen, Teng; Gong, Xingchu; Chen, Huali; Zhang, Ying; Qu, Haibin
2016-01-01
A Monte Carlo method was used to develop the design space of a chromatographic elution process for the purification of saponins in Panax notoginseng extract. For this process, saponin recovery ratios, saponin purity, and elution productivity were defined as critical quality attributes, and ethanol concentration, elution rate, and elution volume were identified as critical process parameters. Quadratic equations relating the critical quality attributes to the critical process parameters were established using response surface methodology. A probability-based design space was then computed by calculating the prediction errors using Monte Carlo simulations. The influences of the calculation parameters on the computation results were investigated. The optimized calculation condition was as follows: a calculation step length of 0.02, 10 000 simulations, and a significance level of 0.15 for adding or removing terms in stepwise regression. The recommended normal operation region is located at an ethanol concentration of 65.0-70.0%, an elution rate of 1.7-2.0 bed volumes (BV)/h and an elution volume of 3.0-3.6 BV. Verification experiments were carried out, and the experimental values were in good agreement with the predicted values. The present method is promising for developing probability-based design spaces for other botanical drug manufacturing processes. PMID:26549198
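The probability-based design-space computation can be sketched as a response surface plus Monte Carlo prediction error; the quadratic surface, error level, and purity specification below are illustrative assumptions, not the paper's fitted model:

```python
import random

# Toy quadratic response surface for one quality attribute (purity) as a
# function of two process parameters, with Gaussian prediction error; the
# design space is the region where P(purity >= spec) is high enough.
random.seed(7)

def purity_surface(ethanol_pct, elution_bv):
    # invented model with its optimum near 67.5% ethanol and 3.3 BV
    return (95.0 - 0.05 * (ethanol_pct - 67.5) ** 2
            - 2.0 * (elution_bv - 3.3) ** 2)

def prob_meeting_spec(ethanol_pct, elution_bv, spec=90.0,
                      sigma=1.5, n_sim=20000):
    hits = sum(1 for _ in range(n_sim)
               if purity_surface(ethanol_pct, elution_bv)
               + random.gauss(0.0, sigma) >= spec)
    return hits / n_sim

p_center = prob_meeting_spec(67.5, 3.3)   # near the optimum: high
p_edge = prob_meeting_spec(55.0, 2.0)     # far off: low
```

Sweeping such probabilities over a parameter grid and thresholding them is what delineates the probability-based design space.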
NASA Astrophysics Data System (ADS)
Monaco, E.; Memmolo, V.; Ricci, F.; Boffa, N. D.; Maio, L.
2015-03-01
Maintenance approaches based on sensorised structures and Structural Health Monitoring (SHM) systems have represented one of the most promising innovations in aerostructures for many years, especially where composite materials (fibre-reinforced resins) are concerned. Layered materials still suffer today from drastic reductions of the maximum allowable stress values during the design phase, as well as from costly recurrent inspections during the life-cycle phase, which prevent their structural and economic potential from being fully exploited in today's aircraft. These penalizing measures are necessary mainly to account for the possible presence of undetected hidden flaws within the layered sequence (delaminations) or in bonded areas (partial disbonds). To relax design and maintenance constraints, a system based on sensors permanently installed on the structure to detect and locate such flaws (an SHM system) can be considered, once its effectiveness and reliability have been statistically demonstrated via a rigorous Probability Of Detection (POD) function definition and evaluation. This paper presents an experimental approach with a statistical procedure for evaluating the detection threshold of a guided-wave-based SHM system oriented to delamination detection on a typical composite layered wing panel. The experimental tests are mainly oriented to characterizing the statistical distribution of measurements and damage metrics, as well as the system's detection capability, using this approach. Numerically, it is not possible to substitute the part of the experimental tests aimed at POD, where the noise in the system response is crucial. Results of the experiments are presented in the paper and analyzed.
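The POD-style detection-threshold evaluation can be sketched in the signal-response style: the damage metric grows with flaw size, baseline scatter sets the threshold, and simulation yields POD(size). The linear metric model and noise level are assumptions for illustration:

```python
import random
import statistics

# Signal-response POD sketch: metric = a * size + noise; the detection
# threshold is set from the scatter of the no-damage (baseline) response
# to control false calls, then POD(size) is estimated by simulation.
random.seed(42)
NOISE = 0.05

def damage_metric(size_mm):
    return 0.02 * size_mm + random.gauss(0.0, NOISE)

# Threshold = mean + 3 sigma of the no-damage response
baseline = [damage_metric(0.0) for _ in range(5000)]
threshold = statistics.mean(baseline) + 3 * statistics.stdev(baseline)

def pod(size_mm, trials=5000):
    return sum(damage_metric(size_mm) > threshold
               for _ in range(trials)) / trials

pod_small = pod(2.0)    # metric mean well below threshold: rarely detected
pod_large = pod(15.0)   # metric mean well above threshold: almost always
```

Fitting a smooth curve through such estimates, with confidence bounds, gives the POD function (and quantities like a90/95) that certification of an SHM system requires.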
NASA Astrophysics Data System (ADS)
Bachmann, C. E.; Wiemer, S.; Woessner, J.; Hainzl, S.
2011-08-01
Geothermal energy is becoming an important clean energy source; however, the stimulation of a reservoir for an Enhanced Geothermal System (EGS) is associated with seismic risk due to induced seismicity. Seismicity occurring due to the water injection at depth has to be well recorded and monitored. To mitigate the seismic risk of a damaging event, an appropriate alarm system needs to be in place for each individual experiment. In recent experiments, the so-called traffic-light alarm system, based on public response, local magnitude and peak ground velocity, was used. We aim to improve on this pre-defined alarm system by introducing a probability-based approach: we retrospectively model the ongoing seismicity in real time with multiple statistical forecast models and then translate the forecasts into seismic hazard in terms of the probabilities of exceeding a ground motion intensity level. One class of models accounts for the water injection rate, the main parameter that can be controlled by the operators during an experiment. By translating the models into time-varying probabilities of exceeding various intensity levels, we provide tools which are well understood by decision makers and can be used to determine thresholds of non-exceedance during a reservoir stimulation; this, however, remains an entrepreneurial or political decision of the responsible project coordinators. We introduce forecast models based on the data set of an EGS experiment in the city of Basel. Between 2006 December 2 and 8, approximately 11 500 m3 of water was injected into a 5-km-deep well at high pressures. A six-sensor borehole array was installed by the company Geothermal Explorers Limited (GEL) at depths between 300 and 2700 m around the well to monitor the induced seismicity. The network recorded approximately 11 200 events during the injection phase, more than 3500 of which were located. With the traffic-light system, actions were implemented after an ML 2.7 event, the water injection was
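The forecast-to-hazard translation can be sketched with a Gutenberg-Richter magnitude distribution and a Poisson occurrence assumption; the b-value, rate, and time window below are illustrative, not the Basel values:

```python
import math

# Translate a forecast rate of events above a completeness magnitude m0
# into P(at least one M >= m_target event in the next window), using a
# Gutenberg-Richter scaling 10**(-b * dM) and Poisson occurrence.
def prob_exceedance(rate_above_m0, m0, m_target, b_value, window_days):
    rate_target = rate_above_m0 * 10 ** (-b_value * (m_target - m0))
    return 1.0 - math.exp(-rate_target * window_days)

# e.g. a forecast of 200 events/day above M0.9, b = 1, next 6 hours:
p = prob_exceedance(200.0, 0.9, 2.7, 1.0, 0.25)
```

In a probability-based alarm system, this number (or the corresponding ground-motion exceedance probability) is compared against a pre-agreed tolerance instead of reacting only after a large event has occurred.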
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
NASA Astrophysics Data System (ADS)
Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua
1997-04-01
Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
Cencerrado, Andrés; Cortés, Ana; Margalef, Tomàs
2013-01-01
This work presents a framework for assessing how the constraints existing at the time of attending an ongoing forest fire affect simulation results, both in terms of the quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the principles of probability theory. Thus, a thorough statistical study is presented, oriented towards the characterization of the adjustment technique in order to help operations managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region of Spain that is among the most prone to forest fires: El Cap de Creus. PMID:24453898
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2009-01-01
The threat of lightning is a daily concern during the warm season in Florida. Research has revealed distinct spatial and temporal distributions of lightning occurrence that are strongly influenced by large-scale atmospheric flow regimes. Previously, the Applied Meteorology Unit (AMU) calculated gridded lightning climatologies based on seven flow regimes over Florida for 1-, 3-, and 6-hr intervals in 5-, 10-, 20-, and 30-NM diameter range rings around the Shuttle Landing Facility (SLF) and eight other airfields in the National Weather Service in Melbourne (NWS MLB) county warning area (CWA). In this update to the work, the AMU recalculated the lightning climatologies using individual lightning strike data to improve their accuracy. The AMU included all data regardless of flow regime as one of the stratifications, added monthly stratifications, added three years of data to the period of record, and used modified flow regimes based on work from the AMU's Objective Lightning Probability Forecast Tool, Phase II. The AMU made changes so the 5- and 10-NM radius range rings are consistent with the aviation forecast requirements at NWS MLB, while the 20- and 30-NM radius range rings at the SLF assist the Spaceflight Meteorology Group in making forecasts for weather Flight Rule violations during Shuttle landings. The AMU also updated the graphical user interface with the new data.
Li, Pingyang; Xue, Rui; Wang, Yinghui; Zhang, Ruijie; Zhang, Gan
2015-01-15
Fifteen polycyclic aromatic hydrocarbons (PAHs) in 41 surface sediment samples and a sediment core (50 cm) from the Beibu Gulf, a significant low-latitude developing gulf, were analyzed. PAH concentrations were 3.01-388 ng g⁻¹ (mean 95.5 ng g⁻¹) in the surface sediments and 10.5-87.1 ng g⁻¹ (mean 41.1 ng g⁻¹) in the sediment core. Source apportionment indicated that the PAHs were generated from coke production and vehicular emissions (39.4%), coal and biomass combustion (35.8%), and petrogenic sources (24.8%). PAHs were mainly concentrated in the industrialized and urbanized regions and the harbor, and were transported by atmospheric deposition to the marine matrix. The mass inventory (1.57-2.62 t) and probabilistic risk assessment showed that the sediments serve as an important PAH reservoir but pose a low PAH risk. Unlike the oil and natural gas used in developed regions, coal combustion has remained a significant energy consumption pattern in this developing region for the past 30 years (56 ± 5%). PMID:25467868
Significance testing of rules in rule-based models of human problem solving
NASA Technical Reports Server (NTRS)
Lewis, C. M.; Hammer, J. M.
1986-01-01
Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.
Precursor Analysis for Flight- and Ground-Based Anomaly Risk Significance Determination
NASA Technical Reports Server (NTRS)
Groen, Frank
2010-01-01
This slide presentation reviews precursor analysis for flight- and ground-based anomaly risk significance determination. It includes information on accident precursor analysis, real models vs. models, and probabilistic analysis.
Chen, Y Z; Prohofsky, E W
1994-01-01
We calculate the room-temperature thermal fluctuational base-pair opening probability of a daunomycin-poly d(GCAT).poly d(ATGC) complex. This system is constructed at an atomic level of detail based on x-ray analysis of a crystal structure. The base-pair opening probabilities are calculated from a modified self-consistent phonon approach of anharmonic lattice dynamics theory. We find that daunomycin binding substantially enhances the thermal stability of one of the base pairs adjacent to the drug because of strong hydrogen bonding between the drug and the base. The possible effect of this enhanced stability on the drug's inhibition of DNA transcription and replication is discussed. We also calculate the probability of drug dissociation from the helix based on the self-consistent calculation of the probability of disruption of the drug-base H-bonds and the unstacking probability of the drug. The calculations can be used to determine the equilibrium drug binding constant, which is found to be in good agreement with observations on similar daunomycin-DNA systems. PMID:8011914
NASA Astrophysics Data System (ADS)
Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan
2016-05-01
Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models, and experimental data, including the associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating method, a novel approach to obtain coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which provides lower and upper bounds for quantifying and propagating the uncertainty of structural parameters. The structural health monitoring data of the Guanhe Bridge, a long-span composite cable-stayed bridge, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy: the overlap ratio index of each modal frequency is over 89%, the average absolute value of the relative errors is small, and the CDF of the normal distribution coincides well with the measured frequencies of the Guanhe Bridge. The validated multi-scale FE model may be further used in structural damage prognosis and safety prognosis.
NASA Astrophysics Data System (ADS)
Weiser, Deborah Anne
Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic earthquakes at a few geothermal fields in California. I use three techniques to assess whether there are induced earthquakes in California geothermal fields; three sites show clear induced seismicity: Brawley, The Geysers, and the Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville. Little to no evidence is found for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and to address uncertainties through simulations. I test whether an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during the pumping period is consistent with the past earthquake record, or whether injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike the hazard from tectonic earthquakes, the hazard from induced earthquakes can potentially be modified. I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.
Ghane, Alireza; Mazaheri, Mehdi; Mohammad Vali Samani, Jamal
2016-09-15
The pollution of rivers due to accidental spills is a major threat to the environment and human health. To protect river systems from accidental spills, it is essential to introduce a reliable tool for the source identification process. The Backward Probability Method (BPM) is one of the most recommended tools, as it can recover information on the prior location and release time of the pollution. This method was originally developed and employed for groundwater pollution source identification problems. One objective of this study is to apply this method to identifying the pollution source location and release time in surface waters, mainly rivers. To accomplish this task, a numerical model is developed based on adjoint analysis. The developed model is then verified using an analytical solution and real data. The second objective of this study is to extend the method to pollution source identification in river networks. In this regard, a hypothetical test case is considered. In these simulations, all of the suspected points are identified using only one backward simulation. The results demonstrated that any suspected point determined by the BPM could be a possible pollution source. The proposed approach is accurate and computationally efficient and does not require any simplification of river geometry or flow. Owing to this simplicity, it is highly recommended for practical purposes. PMID:27219462
NASA Astrophysics Data System (ADS)
Lin, Liangkui; Xu, Hui; An, Wei; Sheng, Weidong; Xu, Dan
2011-11-01
This paper presents a novel approach to tracking a large number of closely spaced objects (CSO) in image sequences that is based on the particle probability hypothesis density (PHD) filter and multiassignment data association. First, the particle PHD filter is adopted to eliminate most of the clutter and to estimate multitarget states. In the particle PHD filter, a noniterative multitarget estimation technique is introduced to reliably estimate multitarget states, and an improved birth-particle sampling scheme is presented to effectively acquire targets among clutter. Then, an integrated track management method is proposed to realize multitarget track continuity. The core of the track management is the track-to-estimation multiassignment association, which relaxes the traditional one-to-one data association restriction caused by the unresolved focal-plane CSO measurements. Meanwhile, a unified technique of multiple consecutive misses for track deletion is used jointly to cope with the sensitivity of the PHD filter to missed detections, to further eliminate false alarms, and to initiate tracks for large numbers of CSO. Finally, results of two simulations and one experiment show that the proposed approach is feasible and efficient.
NASA Astrophysics Data System (ADS)
Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo
2016-07-01
Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2% and 92.5-98.5% for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated the seasonal mean concentration, however, and did not simulate all of the peak concentrations. This issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollen.
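The Weibull-based seasonal envelope described above can be sketched in a few lines. The shape, scale, and peak values below are illustrative placeholders, not the parameters fitted in the study:

```python
import math

def weibull_pdf(t, shape, scale):
    """Weibull probability density; t is days since season onset."""
    if t <= 0:
        return 0.0
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def potential_pollen(day, peak_conc, shape=2.5, scale=20.0):
    """Maximum potential airborne pollen for a given day of the season,
    rescaled so the seasonal mode reaches peak_conc.

    shape, scale, and peak_conc are illustrative values, not the study's fits.
    """
    mode = scale * ((shape - 1) / shape) ** (1 / shape)  # requires shape > 1
    return peak_conc * weibull_pdf(day, shape, scale) / weibull_pdf(mode, shape, scale)
```

In the integrated scheme, a daily envelope of this kind would then be modulated by the weather-driven multiple regression models to produce the day's estimated concentration.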
NASA Astrophysics Data System (ADS)
Nakano, Shinya
2013-04-01
In ensemble-based sequential data assimilation, the probability density function (PDF) at each time step is represented by ensemble members. These ensemble members are usually assumed to be Monte Carlo samples drawn from the PDF, and the probability density is associated with the concentration of the ensemble members. On the basis of the Monte Carlo approximation, the forecast ensemble, which is obtained by applying the dynamical model to each ensemble member, provides an approximation of the forecast PDF via the Chapman-Kolmogorov integral. In practical cases, however, the ensemble size is limited by the available computational resources and is typically much less than the system dimension. In such situations, the Monte Carlo approximation does not work well. When the ensemble size is less than the system dimension, the ensemble forms a simplex in a subspace. The simplex cannot represent the third- or higher-order moments of the PDF; it can represent only the Gaussian features of the PDF. As noted by Wang et al. (2004), the forecast ensemble obtained by applying the dynamical model to each member of the simplex ensemble approximates the mean and covariance of the forecast PDF when the Taylor expansion of the dynamical model up to second order is considered, except that the uncertainties which cannot be represented by the ensemble members are ignored. Since the third- and higher-order nonlinearity is discarded, the forecast ensemble introduces some bias into the forecast. Using a small nonlinear model, the Lorenz 63 model, we also performed a state-estimation experiment with both the simplex representation and the Monte Carlo representation, which correspond to the limited-size ensemble case and the large-size ensemble case, respectively. With the simplex representation, the estimates tend to have some bias which is likely to be caused by the nonlinearity of the system rather
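The contrast between a simplex-sized and a Monte Carlo-sized ensemble can be illustrated with a minimal Lorenz 63 forecast. The forward-Euler integrator, step size, and ensemble sizes here are assumptions for the sketch, not the study's experimental setup:

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 63 model (illustrative integrator)."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def forecast_ensemble(members, steps=100):
    """Propagate each member through the dynamics; the forecast PDF is then
    approximated by the sample mean and covariance of the forecast ensemble."""
    out = []
    for m in members:
        x = np.asarray(m, dtype=float)
        for _ in range(steps):
            x = lorenz63_step(x)
        out.append(x)
    out = np.array(out)
    return out.mean(axis=0), np.cov(out.T)

rng = np.random.default_rng(0)
mean0 = np.array([1.0, 1.0, 1.0])
# 4 members form a simplex in the 3-D state space; 400 approach Monte Carlo.
small = mean0 + 0.1 * rng.standard_normal((4, 3))
large = mean0 + 0.1 * rng.standard_normal((400, 3))
mean_small, _ = forecast_ensemble(small)
mean_large, _ = forecast_ensemble(large)
# The gap between the two forecast means reflects the bias a small simplex
# ensemble can accumulate through the model's nonlinearity.
bias = np.linalg.norm(mean_small - mean_large)
```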
Dauer, Daniel M; Llansó, Roberto J
2003-01-01
The extent of degradation of benthic communities of the Chesapeake Bay was determined by applying a previously developed benthic index of biotic integrity at three spatial scales. Allocation of sampling was probability-based, allowing areal estimates of degradation with known confidence intervals. The three spatial scales were: (1) the tidal Chesapeake Bay; (2) the Elizabeth River watershed; and (3) two small tidal creeks within the Southern Branch of the Elizabeth River that are part of a sediment contaminant remediation effort. The areas covered varied from 10⁻¹ to 10⁴ km² and all were sampled in 1999. The Chesapeake Bay was divided into ten strata, the Elizabeth River into five strata, and each of the two tidal creeks was a single stratum. The determination of the number and size of strata was based upon consideration of both managerially useful units for restoration and limitations of funding. Within each stratum, 25 random locations were sampled for benthic community condition. In 1999 the percentage of the benthos with poor benthic community condition for the entire Chesapeake Bay was 47% and varied from 20% at the mouth of the Bay to 72% in the Potomac River. The estimated area of benthos with poor benthic community condition for the Elizabeth River was 64% and varied from 52% to 92%. Both small tidal creeks had estimates of 76% poor benthic community condition. These kinds of estimates allow environmental managers to better direct restoration efforts and evaluate progress towards restoration. Patterns of benthic community condition at smaller spatial scales may not be correctly inferred from larger spatial scales. Comparisons of patterns in benthic community condition across spatial scales, and between combinations of strata, must be cautiously interpreted. PMID:12620014
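The probability-based allocation described above can be sketched as a stratified, area-weighted estimator with a normal-approximation confidence interval. The stratum areas and site condition flags below are invented for illustration, and the study's actual estimator may differ:

```python
import math

def stratum_degraded_fraction(flags):
    """Fraction of sampled sites with poor benthic condition in one stratum."""
    return sum(flags) / len(flags)

def areal_estimate(strata):
    """Area-weighted estimate of the degraded fraction with a simple 95% CI.

    strata: list of (area_km2, [0/1 condition flags for the random sites]).
    Uses a normal approximation with within-stratum binomial variances;
    the published analysis may use a different variance estimator.
    """
    total_area = sum(a for a, _ in strata)
    p_hat = sum(a * stratum_degraded_fraction(f) for a, f in strata) / total_area
    var = sum((a / total_area) ** 2
              * stratum_degraded_fraction(f) * (1 - stratum_degraded_fraction(f))
              / len(f)
              for a, f in strata)
    half = 1.96 * math.sqrt(var)
    return p_hat, (p_hat - half, p_hat + half)
```

Because each stratum is sampled at random locations, the stratum fractions are unbiased, and the area weighting turns them into an areal estimate with a known confidence interval.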
Walsh, Michael G.; Haseeb, M. A.
2014-01-01
Toxocariasis is increasingly recognized as an important neglected infection of poverty (NIP) in developed countries, and may constitute the most important NIP in the United States (US) given its association with chronic sequelae such as asthma and poor cognitive development. Its potential public health burden notwithstanding, toxocariasis surveillance is minimal throughout the US and so the true burden of disease remains uncertain in many areas. The Third National Health and Nutrition Examination Survey conducted a representative serologic survey of toxocariasis to estimate the prevalence of infection in diverse US subpopulations across different regions of the country. Using the NHANES III surveillance data, the current study applied the predicted probabilities of toxocariasis to the sociodemographic composition of New York census tracts to estimate the local probability of infection across the city. The predicted probability of toxocariasis ranged from 6% among US-born Latino women with a university education to 57% among immigrant men with less than a high school education. The predicted probability of toxocariasis exhibited marked spatial variation across the city, with particularly high infection probabilities in large sections of Queens, and smaller, more concentrated areas of Brooklyn and northern Manhattan. This investigation is the first attempt at small-area estimation of the probability surface of toxocariasis in a major US city. While this study does not define toxocariasis risk directly, it does provide a much needed tool to aid the development of toxocariasis surveillance in New York City. PMID:24918785
NASA Astrophysics Data System (ADS)
De Gregorio, Sofia; Camarda, Marco
2016-04-01
The evaluation of the amount of magma that might potentially be erupted, i.e., the eruptive potential (EP), and the probability of eruptive event occurrence, i.e., the eruptive probability (EPR), of an active volcano is one of the most compelling and challenging topics addressed by the volcanology community in recent years. The evaluation of the EP of an open-conduit volcano is generally based on a constant magma supply rate deduced from long-term series of eruptive rates. This EP computation gives good results for long-term (centuries) evaluations, but proves less effective when short-term (years or months) estimates are needed. In fact, the rate of magma supply can change on both long and short timescales. At steady state it can be supposed that the regular supply of magma maintains an almost constant level of magma in the feeding system (FS), whereas episodic surpluses of magma input, relative to the regular supply, can cause large variations in the magma level. It follows that the surplus of magma occasionally entering the FS represents a supply of material that sooner or later will be disposed of, i.e., emitted. Accordingly, the amount of surplus magma entering the FS roughly corresponds to the amount of magma that must be erupted in order to restore equilibrium. Further, the larger the amount of surplus magma stored in the system, the higher the energetic level of the system and its propensity to erupt, in other words, its EPR. In light of the above considerations, we present here an innovative methodology to evaluate the EP based on quantifying the surplus of magma, relative to the regular supply, progressively intruded into the FS. To estimate the surplus of magma supply, we used soil CO2 emission data measured monthly at 130 sites in two peripheral areas of Mt. Etna. Indeed, as reported by many authors, soil CO2 emissions in these areas are linked to magma supply dynamics; moreover, anomalous discharges of CO2 are ascribable to surplus of
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
NASA Astrophysics Data System (ADS)
Barnard, J. M.; Augarde, C. E.
2012-12-01
The simulation of reactions in flow through unsaturated porous media is a more complicated process in particle-tracking models than in continuum models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle-tracking methods, especially when reactions are simulated, are computationally intensive, as the reaction simulations require tens of thousands of nearest-neighbour searches per time step. Despite this, particle-tracking methods merit further study due to their ability to eliminate numerical dispersion and to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete-time random-walk particle-tracking methods to simulate reactive mass transport in porous media, including a variation of the colocation-probability-function methods of reaction simulation presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle-tracking methods makes them well suited to parallelization on GPUs. The architecture of GPUs is single instruction, multiple data (SIMD): only one operation can be performed at any one time, but it can be performed on multiple data simultaneously. This allows significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest-neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.
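A minimal serial (CPU) sketch of this class of method, assuming a 1-D discrete-time random walk and a Gaussian colocation probability for a bimolecular annihilation reaction in the spirit of Benson & Meerschaert (2008). All parameter values, the functional form of the colocation probability, and the nearest-neighbour pairing rule are illustrative, not the model described above:

```python
import math
import random

def step(x, velocity, diff, dt, rng):
    """Advect and disperse one particle (1-D discrete-time random walk)."""
    return x + velocity * dt + math.sqrt(2.0 * diff * dt) * rng.gauss(0.0, 1.0)

def colocation_probability(xa, xb, diff, dt, k):
    """Probability that particles A and B react this step: a Gaussian
    colocation density times a rate constant (illustrative form)."""
    var = 4.0 * diff * dt  # variance of the relative displacement
    dens = math.exp(-(xa - xb) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return min(1.0, k * dens * dt)

def simulate(na=200, nb=200, steps=50, velocity=0.0, diff=0.1, dt=0.1, k=1.0, seed=1):
    """A + B -> 0 annihilation among random-walking particles."""
    rng = random.Random(seed)
    A = [rng.uniform(0.0, 1.0) for _ in range(na)]
    B = [rng.uniform(0.0, 1.0) for _ in range(nb)]
    for _ in range(steps):
        A = [step(x, velocity, diff, dt, rng) for x in A]
        B = [step(x, velocity, diff, dt, rng) for x in B]
        # Each A is tested against its nearest unused B; these pairwise
        # distance searches are the cost that motivates GPU acceleration.
        survivors_a, used_b = [], set()
        for xa in A:
            j = min((j for j in range(len(B)) if j not in used_b),
                    key=lambda j: abs(xa - B[j]), default=None)
            if j is not None and rng.random() < colocation_probability(xa, B[j], diff, dt, k):
                used_b.add(j)  # A and B annihilate together
            else:
                survivors_a.append(xa)
        A = survivors_a
        B = [xb for j, xb in enumerate(B) if j not in used_b]
    return len(A), len(B)
```

The inner nearest-neighbour loop is embarrassingly parallel across A particles, which is why the SIMD architecture of GPUs maps onto it so well.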
NASA Astrophysics Data System (ADS)
Akinci, Aybige; Aochi, Hideo; Herrero, Andre; Pischiutta, Marta; Karanikas, Dimitris
2016-04-01
The city of Istanbul is characterized by one of the highest levels of seismic risk in Europe and the Mediterranean region. An important source of the increased risk in Istanbul is the remarkable probability of the occurrence of a large earthquake, about 65% in the coming years, due to the existing seismic gap and the post-1999 earthquake stress transfer on the western portion of the North Anatolian Fault Zone (NAFZ). In this study, we have simulated hybrid broadband time histories for two selected scenario earthquakes of magnitude M > 7.0 in the Marmara Sea, within 10-20 km of Istanbul, in the area believed to have generated the devastating 1509 event. The physics-based rupture scenarios, which may be indicative of potential future events, are adopted to estimate the ground motion characteristics and their variability in the region. Two simulation techniques (a full 3D wave propagation method to generate low-frequency seismograms, <~1 Hz, and a stochastic technique to simulate high-frequency seismograms, >1 Hz) are used to compute realistic time series for scenario earthquakes of magnitude Mw > 7.0 in the Marmara Sea region. A dynamic rupture is generated and computed with a boundary integral equation method, and the propagation in the medium is realized through a finite-difference approach (Aochi and Ulrich, 2015). The high-frequency radiation is computed using a stochastic finite-fault model approach based on a dynamic corner frequency (Motazedian and Atkinson, 2005; Boore, 2009). The results from the two simulation techniques are then merged by performing a weighted summation at intermediate frequencies to calculate broadband synthetic time series. The hybrid broadband ground motions computed with the proposed approach are validated by comparing peak ground acceleration (PGA), peak ground velocity (PGV), and spectral acceleration (SA) with recently proposed ground motion prediction equations (GMPEs) for the region. Our
Techasrivichien, Teeranee; Darawuttimaprakorn, Niphon; Punpuing, Sureeporn; Musumari, Patou Masika; Lukhele, Bhekumusa Wellington; El-Saaidi, Christina; Suguimoto, S Pilar; Feldman, Mitchell D; Ono-Kihara, Masako; Kihara, Masahiro
2016-02-01
Thailand has undergone rapid modernization with implications for changes in sexual norms. We investigated sexual behavior and attitudes across generations and gender among a probability sample of the general population of Nonthaburi province located near Bangkok in 2012. A tablet-based survey was performed among 2,138 men and women aged 15-59 years identified through a three-stage, stratified, probability proportional to size, clustered sampling. Descriptive statistical analysis was carried out accounting for the effects of multistage sampling. Relationship of age and gender to sexual behavior and attitudes was analyzed by bivariate analysis followed by multivariate logistic regression analysis to adjust for possible confounding. Patterns of sexual behavior and attitudes varied substantially across generations and gender. We found strong evidence for a decline in the age of sexual initiation, a shift in the type of the first sexual partner, and a greater rate of acceptance of adolescent premarital sex among younger generations. The study highlighted profound changes among young women as evidenced by a higher number of lifetime sexual partners as compared to older women. In contrast to the significant gender gap in older generations, sexual profiles of Thai young women have evolved to resemble those of young men with attitudes gradually converging to similar sexual standards. Our data suggest that higher education, being never-married, and an urban lifestyle may have been associated with these changes. Our study found that Thai sexual norms are changing dramatically. It is vital to continue monitoring such changes, considering the potential impact on the HIV/STIs epidemic and unintended pregnancies. PMID:25403321
NASA Astrophysics Data System (ADS)
Papadopoulos, Vissarion; Kalogeris, Ioannis
2016-05-01
The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline-Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular of SUPG, over finite difference schemes such as the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.
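For reference, the finite-difference baseline mentioned above targets the one-dimensional advection form to which the GDEE reduces. The following is a textbook (unmodified) classical Lax-Wendroff sketch for p_t + a p_x = 0 on a periodic grid, with an illustrative Gaussian initial density; it is not the paper's Galerkin formulation:

```python
import numpy as np

def lax_wendroff(p, a, dx, dt, steps):
    """Classical Lax-Wendroff for p_t + a p_x = 0 with periodic boundaries.

    Second-order in space and time; stable for Courant number |a*dt/dx| <= 1.
    """
    c = a * dt / dx
    for _ in range(steps):
        pm = np.roll(p, 1)    # p_{i-1}
        pp = np.roll(p, -1)   # p_{i+1}
        p = p - 0.5 * c * (pp - pm) + 0.5 * c ** 2 * (pp - 2.0 * p + pm)
    return p

n, dx, dt, a = 200, 0.05, 0.04, 1.0      # Courant number c = 0.8
x = np.arange(n) * dx
p0 = np.exp(-((x - 5.0) / 0.5) ** 2)
p0 /= p0.sum() * dx                      # normalize to a probability density
p1 = lax_wendroff(p0.copy(), a, dx, dt, 100)
```

With periodic boundaries the scheme conserves the discrete integral of p exactly, which mirrors the preservation-of-probability principle that the GDEE encodes.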
Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R; Liu, Yong; Yang, Yong; Eickhoff, Simon B; Jiang, Tianzi
2015-08-15
Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a 'core' co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies. PMID:26037052
Hong, Mei-Zhu; Zhang, Ru-Mian; Chen, Guo-Liang; Huang, Wen-Qi; Min, Feng; Chen, Tian; Xu, Jin-Chao; Pan, Jin-Shui
2014-01-01
Objectives Liver biopsy is indispensable because liver stiffness measurement alone cannot provide information on intrahepatic inflammation. However, the presence of fibrosis highly correlates with inflammation. We constructed a noninvasive model to determine significant inflammation in chronic hepatitis B patients by using liver stiffness measurement and serum markers. Methods The training set included chronic hepatitis B patients (n = 327), and the validation set included 106 patients; liver biopsies were performed, liver histology was scored, and serum markers were investigated. All patients underwent liver stiffness measurement. Results An inflammation activity scoring system for significant inflammation was constructed. In the training set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.964, 91.9%, and 90.8% in the HBeAg(+) patients and 0.978, 85.0%, and 94.0% in the HBeAg(−) patients, respectively. In the validation set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.971, 90.5%, and 92.5% in the HBeAg(+) patients and 0.977, 95.2%, and 95.8% in the HBeAg(−) patients. The liver stiffness measurement-based activity score was comparable to that of the fibrosis-based activity score in both HBeAg(+) and HBeAg(−) patients for recognizing significant inflammation (G ≥3). Conclusions Significant inflammation can be accurately predicted by this novel method. The liver stiffness measurement-based scoring system can be used without the aid of computers and provides a noninvasive alternative for the prediction of chronic hepatitis B-related significant inflammation. PMID:25360742
NASA Astrophysics Data System (ADS)
Mahdyiar, M.; Galgana, G.; Shen-Tu, B.; Klein, E.; Pontbriand, C. W.
2014-12-01
Most time dependent rupture probability (TDRP) models are basically designed for a single-mode rupture, i.e. a single characteristic earthquake on a fault. However, most subduction zones rupture in complex patterns that create overlapping earthquakes of different magnitudes. Additionally, the limited historic earthquake data does not provide sufficient information to estimate reliable mean recurrence intervals for earthquakes. This makes it difficult to identify a single characteristic earthquake for TDRP analysis. Physical models based on geodetic data have been successfully used to obtain information on the state of coupling and slip deficit rates for subduction zones. Coupling information provides valuable insight into the complexity of subduction zone rupture processes. In this study we present a TDRP model that is formulated based on subduction zone slip deficit rate distribution. A subduction zone is represented by an integrated network of cells. Each cell ruptures multiple times from numerous earthquakes that have overlapping rupture areas. The rate of rupture for each cell is calculated using a moment balance concept that is calibrated based on historic earthquake data. The information in conjunction with estimates of coseismic slip from past earthquakes is used to formulate time dependent rupture probability models for cells. Earthquakes on the subduction zone and their rupture probabilities are calculated by integrating different combinations of cells. The resulting rupture probability estimates are fully consistent with the state of coupling of the subduction zone and the regional and local earthquake history as the model takes into account the impact of all large (M>7.5) earthquakes on the subduction zone. The granular rupture model as developed in this study allows estimating rupture probabilities for large earthquakes other than just a single characteristic magnitude earthquake. This provides a general framework for formulating physically-based
Titus, J.G.; Narayanan, V.K.
1995-10-01
The report develops probability-based projections that can be added to local tide-gage trends to estimate future sea level at particular locations. It uses the same models employed by previous assessments of sea level rise. The key coefficients in those models are based on subjective probability distributions supplied by a cross-section of climatologists, oceanographers, and glaciologists.
ERIC Educational Resources Information Center
Goldschmidt, Pete; Martinez-Fernandez, Jose-Felipe
2004-01-01
We examine whether school quality affects passing the California High School Exit Exam (CAHSEE), which is a standards-based high-stakes performance assessment. We use 3-level hierarchical logistic and linear models to examine student probabilities of passing the CAHSEE to take advantage of the availability of student, teacher, and school level…
The purpose of this manuscript is to describe the practical strategies developed for the implementation of the Minnesota Children's Pesticide Exposure Study (MNCPES), which is one of the first probability-based samples of multi-pathway and multi-pesticide exposures in children....
Wright, Anita C.; Garrido, Victor; Debuex, Georgia; Farrell-Evans, Melissa; Mudbidri, Archana A.; Otwell, W. Steven
2007-01-01
Postharvest processing (PHP) is used to reduce levels of Vibrio vulnificus in oysters, but process validation is labor-intensive and expensive. Therefore, quantitative PCR was evaluated as a rapid confirmation method for most-probable-number enumeration (QPCR-MPN) of V. vulnificus bacteria in PHP oysters. QPCR-MPN showed excellent correlation (R2 = 0.97) with standard MPN and increased assay sensitivity and efficiency. PMID:17905883
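Most-probable-number enumeration rests on a Poisson dilution model: a tube is negative only if it received zero organisms. For a single dilution level the MPN estimate has a closed form, sketched below; real MPN tables combine several dilution levels by maximum likelihood, so this one-level function is only an illustrative simplification:

```python
from math import log

def single_dilution_mpn(tubes_total, tubes_negative, volume_ml):
    """MPN per mL from one dilution level under a Poisson model:
    P(tube negative) = exp(-c * v), hence c = -ln(negatives/total) / v."""
    if tubes_negative == 0:
        raise ValueError("all tubes positive: MPN is unbounded at this dilution")
    return -log(tubes_negative / tubes_total) / volume_ml
```

With 5 of 10 one-mL tubes negative, the estimate is -ln(0.5) ≈ 0.69 organisms per mL; fewer negative tubes imply a higher density.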
Lindhiem, Oliver; Yu, Lan; Grasso, Damion J; Kolko, David J; Youngstrom, Eric A
2015-04-01
This study adapts the Posterior Probability of Diagnosis (PPOD) Index for use with screening data. The original PPOD Index, designed for use in the context of comprehensive diagnostic assessments, is overconfident when applied to screening data. To correct for this overconfidence, we describe a simple method for adjusting the PPOD Index to improve its calibration when used for screening. Specifically, we compare the adjusted PPOD Index to the original index and naïve Bayes probability estimates on two dimensions of accuracy, discrimination and calibration, using a clinical sample of children and adolescents (N = 321) whose caregivers completed the Vanderbilt Assessment Scale to screen for attention-deficit/hyperactivity disorder and who subsequently completed a comprehensive diagnostic assessment. Results indicated that the adjusted PPOD Index, original PPOD Index, and naïve Bayes probability estimates are comparable using traditional measures of accuracy (sensitivity, specificity, and area under the curve), but the adjusted PPOD Index showed superior calibration. We discuss the importance of calibration for screening and diagnostic support tools when applied to individual patients. PMID:25000935
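The naïve Bayes probability estimates used as a comparison can be computed from a base rate together with the screen's sensitivity and specificity. A minimal sketch (the PPOD adjustment itself is not reproduced here; function and parameter names are illustrative):

```python
def posterior_probability(prior, sensitivity, specificity, screen_positive=True):
    """Posterior P(disorder | screen result) via Bayes' rule, from the
    base rate (prior) and the screen's operating characteristics."""
    if screen_positive:
        num = sensitivity * prior
        den = num + (1.0 - specificity) * (1.0 - prior)
    else:
        num = (1.0 - sensitivity) * prior
        den = num + specificity * (1.0 - prior)
    return num / den
```

For example, with a 10% base rate, 90% sensitivity, and 80% specificity, a positive screen raises the probability of diagnosis to 1/3, while a negative screen lowers it below the base rate.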
Understanding text-based persuasion and support tactics of concerned significant others
van Stolk-Cooke, Katherine; Hayes, Marie; Baumel, Amit
2015-01-01
The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs’ perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs’ perceptions of their TSOs affect these characteristics. PMID:26312172
NASA Astrophysics Data System (ADS)
Courtade, Ginevra Rose
Federal mandates (A Nation at Risk, 1983 and Project 2061: Science for all Americans, 1985) as well as the National Science Education Standards (NRC, 1996) call for science education for all students. Recent educational laws (IDEA, 1997; NCLB, 2002) require access to and assessment of the general curriculum, including science, for all students with disabilities. Although some research exists on teaching academics to students with significant disabilities, the research on teaching science is especially limited (Browder, Spooner, Ahlgrim-Delzell, Harris, & Wakeman, 2006; Browder, Wakeman, et al., 2006; Courtade, et al., 2006). The purpose of this investigation was to determine if training teachers of students with significant disabilities to teach science concepts using a guided inquiry-based method would change the way science was instructed in the classroom. Further objectives of this study were to determine if training the teachers would increase students' participation and achievement in science. The findings of this study demonstrated a functional relationship between the inquiry-based science instruction training and teachers' ability to instruct students with significant disabilities in science using inquiry-based science instruction. The findings of this study also indicated a functional relationship between the inquiry-based science instruction training and acquisition of student inquiry skills. Also, findings indicated an increase in the number of science content standards being addressed after the teachers received the training. Some students were also able to acquire new science terms after their teachers taught using inquiry-based instruction. Finally, social validity measures indicated a high degree of satisfaction with the intervention and its intended outcomes.
Probability workshop to be better in probability topic
NASA Astrophysics Data System (ADS)
Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed
2015-02-01
The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students in higher education affect their performance. Sixty-two fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic was not related to their anxiety level; that is, higher statistics anxiety did not lead to lower scores in the probability topic. The study also revealed that motivated students who attended the probability workshop showed a positive improvement in their performance in the probability topic compared with before the workshop. In addition, there was a significant difference in performance between genders, with female students achieving better results than male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.
Prognostic significance of volume-based PET parameters in cancer patients.
Moon, Seung Hwan; Hyun, Seung Hyup; Choi, Joon Young
2013-01-01
Accurate prediction of cancer prognosis before the start of treatment is important since these predictions often affect the choice of treatment. Prognosis is usually based on anatomical staging and other clinical factors. However, the conventional system is not sufficient to accurately and reliably determine prognosis. Metabolic parameters measured by (18)F-fluorodeoxyglucose (FDG) positron emission tomography (PET) have the potential to provide valuable information regarding prognosis and treatment response evaluation in cancer patients. Among these parameters, volume-based PET parameters such as metabolic tumor volume and total lesion glycolysis are especially promising. However, the measurement of these parameters is significantly affected by the imaging methodology and specific image characteristics, and a standard method for these parameters has not been established. This review introduces volume-based PET parameters as potential prognostic indicators, and highlights methodological considerations for measurement, potential implications, and prospects for further studies. PMID:23323025
Nuclear spin of odd-odd α emitters based on the behavior of α-particle preformation probability
NASA Astrophysics Data System (ADS)
Ismail, M.; Adel, A.; Botros, M. M.
2016-05-01
The preformation probabilities of an α cluster inside radioactive parent nuclei for both odd-even and odd-odd nuclei are investigated. The calculations cover the isotopic chains from Ir to Ac in the mass regions 166 ≤ A ≤ 215 and 77 ≤ Z ≤ 89. The calculations are employed in the framework of the density-dependent cluster model. A realistic density-dependent nucleon-nucleon (NN) interaction with a finite-range exchange part is used to calculate the microscopic α-nucleus potential in the well-established double-folding model. The main effect of antisymmetrization under exchange of nucleons between the α and daughter nuclei has been included in the folding model through the finite-range exchange part of the NN interaction. The calculated potential is then implemented to find both the assault frequency and the penetration probability of the α particle by means of the Wentzel-Kramers-Brillouin approximation in combination with the Bohr-Sommerfeld quantization condition. The correlation of the α-particle preformation probability and the neutron and proton level sequences of the parent nucleus as obtained in our previous work is extended to odd-even and odd-odd nuclei to determine the nuclear spins and parities. Two spin coupling rules are used, namely, the strong and weak rules, to determine the nuclear spin for odd-odd isotopes. This work can be a useful reference for theoretical calculation of undetermined nuclear spins of odd-odd nuclei in the future.
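The penetration-probability step can be illustrated with a bare-bones WKB integral through a pure Coulomb barrier. The sketch below ignores the nuclear and centrifugal terms and uses the α-particle rest mass rather than the reduced mass, so it is a simplification for intuition, not a reproduction of the paper's double-folding treatment:

```python
from math import sqrt, exp

HBARC = 197.327        # MeV*fm
ALPHA_MC2 = 3727.379   # alpha-particle rest mass energy, MeV (simplification:
                       # the reduced mass should really be used)
E2 = 1.43996           # e^2 in MeV*fm (fine-structure constant times hbar*c)

def wkb_penetrability(q_value, z_daughter, r_inner, n_steps=20000):
    """WKB barrier penetrability P = exp(-2G) for an s-wave alpha particle
    tunneling through a pure Coulomb barrier from r_inner (fm) out to the
    classical turning point.  q_value in MeV; inputs are illustrative."""
    z_alpha = 2
    r_outer = z_alpha * z_daughter * E2 / q_value  # outer turning point (fm)
    dr = (r_outer - r_inner) / n_steps
    g = 0.0
    for i in range(n_steps):  # midpoint rule for the action integral
        r = r_inner + (i + 0.5) * dr
        v = z_alpha * z_daughter * E2 / r
        if v > q_value:
            g += sqrt(2.0 * ALPHA_MC2 * (v - q_value)) * dr / HBARC
    return exp(-2.0 * g)
```

The penetrability falls by many orders of magnitude as the Q value decreases, which is why α-decay half-lives are so extraordinarily sensitive to Q.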
The impacts of problem gambling on concerned significant others accessing web-based counselling.
Dowling, Nicki A; Rodda, Simone N; Lubman, Dan I; Jackson, Alun C
2014-08-01
The 'concerned significant others' (CSOs) of people with problem gambling frequently seek professional support. However, there is surprisingly little research investigating the characteristics or help-seeking behaviour of these CSOs, particularly for web-based counselling. The aims of this study were to describe the characteristics of CSOs accessing the web-based counselling service (real time chat) offered by the Australian national gambling web-based counselling site, explore the most commonly reported CSO impacts using a new brief scale (the Problem Gambling Significant Other Impact Scale: PG-SOIS), and identify the factors associated with different types of CSO impact. The sample comprised all 366 CSOs accessing the service over a 21 month period. The findings revealed that the CSOs were most often the intimate partners of problem gamblers and that they were most often females aged under 30 years. All CSOs displayed a similar profile of impact, with emotional distress (97.5%) and impacts on the relationship (95.9%) reported to be the most commonly endorsed impacts, followed by impacts on social life (92.1%) and finances (91.3%). Impacts on employment (83.6%) and physical health (77.3%) were the least commonly endorsed. There were few significant differences in impacts between family members (children, partners, parents, and siblings), but friends consistently reported the lowest impact scores. Only prior counselling experience and Asian cultural background were consistently associated with higher CSO impacts. The findings can serve to inform the development of web-based interventions specifically designed for the CSOs of problem gamblers. PMID:24813552
Group mindfulness-based therapy significantly improves sexual desire in women.
Brotto, Lori A; Basson, Rosemary
2014-06-01
At least a third of women across reproductive ages experience low sexual desire and impaired arousal. There is increasing evidence that mindfulness, defined as non-judgmental present moment awareness, may improve women's sexual functioning. The goal of this study was to test the effectiveness of mindfulness-based therapy, either immediately or after a 3-month waiting period, in women seeking treatment for low sexual desire and arousal. Women participated in four 90-min group sessions that included mindfulness meditation, cognitive therapy, and education. A total of 117 women were assigned to either the immediate treatment (n = 68, mean age 40.8 yrs) or delayed treatment (n = 49, mean age 42.2 yrs) group, in which women had two pre-treatment baseline assessments followed by treatment. A total of 95 women completed assessments through to the 6-month follow-up period. Compared to the delayed treatment control group, treatment significantly improved sexual desire, sexual arousal, lubrication, sexual satisfaction, and overall sexual functioning. Sex-related distress significantly decreased in both conditions, regardless of treatment, as did orgasmic difficulties and depressive symptoms. Increases in mindfulness and a reduction in depressive symptoms predicted improvements in sexual desire. Mindfulness-based group therapy significantly improved sexual desire and other indices of sexual response, and should be considered in the treatment of women's sexual dysfunction. PMID:24814472
Probability Surveys, Conditional Probability, and Ecological Risk Assessment
We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...
PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT
We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP), can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...
Bowden, Stephen C; Harrison, Elise J; Loring, David W
2014-01-01
Meehl's (1973, Psychodiagnosis: Selected papers. Minneapolis: University of Minnesota Press) distinction between statistical and clinical significance holds special relevance for evidence-based neuropsychological practice. Meehl argued that despite attaining statistical significance, many published findings have limited practical value since they do not inform clinical care. In the context of an ever expanding clinical research literature, accessible methods to evaluate clinical impact are needed. The method of Critically Appraised Topics (Straus, Richardson, Glasziou, & Haynes, 2011, Evidence-based medicine: How to practice and teach EBM (4th ed.). Edinburgh: Elsevier Churchill-Livingstone) was developed to provide clinicians with a "toolkit" to facilitate implementation of evidence-based practice. We illustrate the Critically Appraised Topics method using a dementia screening example. We argue that the skills practiced through critical appraisal provide clinicians with methods to: (1) evaluate the clinical relevance of new or unfamiliar research findings with a focus on patient benefit, (2) help maintain a focus on research quality, and (3) incorporate evaluation of clinical impact into educational and professional development activities. PMID:23463942
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Peskov, Nikolay V.; Kirillov, Dmitry A.
2012-10-01
One of the problems arising in time-domain THz spectroscopy for security applications is developing criteria for assessing the probability of detection and identification of explosives and drugs. We analyze the efficiency of using the correlation function and another functional (more precisely, a spectral norm) for this purpose. These criteria are applied to the dynamics of spectral lines. To increase the reliability of the assessment, we subtract the averaged value of the THz signal over the analysis window, i.e., we remove the constant component from that part of the signal. This increases the contrast of the assessment. For extracting the spectral line dynamics, we compare the Fourier-Gabor transform with an unbounded (for example, Gaussian) window that slides along the signal against the Fourier transform in a short time interval (FTST), in which the Fourier transform is applied to successive parts of the signal. These methods are closely related; nevertheless, they differ in the series of frequencies they use. It is important for practice that the optimal window shape depends on the method chosen for obtaining the spectral dynamics. The detection probability is enhanced if we can find a train of pulses of different frequencies that follow sequentially. We show that it is possible to obtain clean spectral line dynamics even when the spectrum of the substance's response to the THz pulse is distorted.
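The sliding-window spectral analysis described above can be sketched by tracking the magnitude of a single DFT bin over successive windows. This minimal version (periodic Hann window; all names illustrative) corresponds to the FTST idea, not to the authors' exact implementation:

```python
from math import cos, sin, pi

def stft_bin_magnitude(signal, win_len, hop, k):
    """Track |X_k|, the magnitude of DFT bin k, over sliding Hann windows:
    a minimal short-time Fourier transform for following the dynamics of
    one spectral line."""
    mags = []
    for start in range(0, len(signal) - win_len + 1, hop):
        re = im = 0.0
        for n in range(win_len):
            w = 0.5 - 0.5 * cos(2.0 * pi * n / win_len)  # periodic Hann window
            x = w * signal[start + n]
            ang = 2.0 * pi * k * n / win_len
            re += x * cos(ang)
            im -= x * sin(ang)
        mags.append((re * re + im * im) ** 0.5)
    return mags
```

A pure sinusoid that completes exactly four cycles per window shows up strongly in bin 4 and is essentially absent two bins away, so plotting one bin's magnitude over time traces that line's dynamics.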
Ramos-Fernández, Antonio; Paradela, Alberto; Navajas, Rosana; Albar, Juan Pablo
2008-09-01
Tandem mass spectrometry-based proteomics is currently in great demand of computational methods that facilitate the elimination of likely false positives in peptide and protein identification. In the last few years, a number of new peptide identification programs have been described, but scores or other significance measures reported by these programs cannot always be directly translated into an easy to interpret error rate measurement such as the false discovery rate. In this work we used generalized lambda distributions to model frequency distributions of database search scores computed by MASCOT, X!TANDEM with k-score plug-in, OMSSA, and InsPecT. From these distributions, we could successfully estimate p values and false discovery rates with high accuracy. From the set of peptide assignments reported by any of these engines, we also defined a generic protein scoring scheme that enabled accurate estimation of protein-level p values by simulation of random score distributions that was also found to yield good estimates of protein-level false discovery rate. The performance of these methods was evaluated by searching four freely available data sets ranging from 40,000 to 285,000 MS/MS spectra. PMID:18515861
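Once search scores have been converted to p-values, false discovery rates can be estimated. The paper fits generalized lambda distributions to engine score histograms; as a minimal stdlib stand-in (not the authors' method), the sketch below computes Benjamini-Hochberg adjusted p-values:

```python
def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values): for the i-th
    smallest p-value, q = min over j >= i of p_(j) * n / j."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    q = [0.0] * n
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end  # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * n / rank)
        q[i] = running_min
    return q
```

Peptide-spectrum matches whose q-value falls below a chosen threshold (say 0.01) are then accepted at that estimated FDR.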
NASA Astrophysics Data System (ADS)
Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios
2016-06-01
Based on multiscale agent-based computations we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics we have developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network as well as the per-contact transmission probabilities by age group we exploited the so called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO) as well as to recently published data of key epidemiological variables, such as the mean time to death, recovery and the case fatality rate.
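The role of a per-contact transmission probability can be illustrated with a chain-binomial step, a far simpler construct than the authors' small-world agent-based model; all names and parameters below are illustrative:

```python
import random

def chain_binomial_day(susceptible, infective, p_contact, contacts_per_day, rng):
    """One time step of a chain-binomial epidemic: each susceptible draws
    `contacts_per_day` random contacts from the population; a contact who
    happens to be infective transmits with per-contact probability
    p_contact."""
    population = susceptible + infective
    new_cases = 0
    for _ in range(susceptible):
        for _ in range(contacts_per_day):
            met_infective = rng.random() < infective / population
            if met_infective and rng.random() < p_contact:
                new_cases += 1
                break  # this person is now infected; stop drawing contacts
    return new_cases
```

Fitting a model like this to case counts amounts to choosing the p_contact (possibly per age group, as in the study) that best reproduces the observed epidemic curve.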
Waste Package Misload Probability
J.K. Knudsen
2001-11-20
The objective of this calculation is to determine the probability of occurrence of fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and of FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The categories are based on whether FAs were damaged or misloaded. The next step is to determine the total number of FAs involved in each event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.
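The probability arithmetic involved is straightforward: a per-movement rate estimated from event counts, and the chance of at least one misload over many movements. A hedged sketch (the numbers in the usage note are invented for illustration, not taken from the Framatome report):

```python
def misload_rate(misload_events, total_moves):
    """Point estimate of the per-movement misload probability
    (observed events divided by observed demands)."""
    return misload_events / total_moves

def prob_at_least_one(p_per_move, n_moves):
    """P(at least one misload in n independent movements)."""
    return 1.0 - (1.0 - p_per_move) ** n_moves
```

For example, 3 misloads observed in 10,000 movements gives a per-movement estimate of 3e-4, and roughly a 3% chance of at least one misload over a campaign of 100 movements.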
A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection
Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari
2010-01-01
We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA, and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections while maintaining a lower missed-TWA-detection rate than all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases: the Normal Sinus Rhythm Database (NSRDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between the databases were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than…
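The reshuffling idea can be sketched as a standard permutation test: compute an alternans statistic on the observed beat sequence, then on many reshuffled surrogates, and report the fraction of surrogates at least as extreme. The statistic below (mean signed even/odd difference) is a simplified stand-in for the paper's estimator:

```python
import random

def alternans_magnitude(beats):
    """Simple TWA statistic: |mean of beats with alternating signs|,
    large when even and odd beats differ systematically."""
    signed = [b if i % 2 == 0 else -b for i, b in enumerate(beats)]
    return abs(sum(signed) / len(signed))

def surrogate_p_value(beats, n_surrogates=999, seed=0):
    """Fraction of reshuffled surrogates whose alternans magnitude meets
    or exceeds the observed one (with the usual +1 correction)."""
    rng = random.Random(seed)
    observed = alternans_magnitude(beats)
    work = list(beats)
    count = 0
    for _ in range(n_surrogates):
        rng.shuffle(work)  # reshuffling destroys any true beat-to-beat order
        if alternans_magnitude(work) >= observed:
            count += 1
    return (count + 1) / (n_surrogates + 1)
```

A strongly alternating sequence yields a tiny p-value, while a constant sequence (no alternans beyond noise) cannot be distinguished from its own shuffles.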
ERIC Educational Resources Information Center
Koo, Reginald; Jones, Martin L.
2011-01-01
Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
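A classic example is the probability that a random permutation has no fixed point (a derangement), which tends to 1/e ≈ 0.368 as the number of items grows. A Monte Carlo check:

```python
import random

def derangement_fraction(n_items=10, trials=100_000, seed=1):
    """Monte Carlo estimate of P(a uniform random permutation of
    n_items has no fixed point); the limit as n grows is 1/e."""
    rng = random.Random(seed)
    items = list(range(n_items))
    hits = 0
    for _ in range(trials):
        rng.shuffle(items)
        if all(items[i] != i for i in range(n_items)):
            hits += 1
    return hits / trials
```

The same 1/e limit appears in the hat-check problem and in the secretary problem's optimal stopping threshold.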
Non-front-fanged colubroid snakes: a current evidence-based analysis of medical significance.
Weinstein, Scott A; White, Julian; Keyler, Daniel E; Warrell, David A
2013-07-01
Non-front-fanged colubroid snakes (NFFC; formerly and artificially taxonomically assembled as "colubrids") comprise about 70% of extant snake species and include several taxa now known to cause lethal or life threatening envenoming in humans. Although the medical risks of bites by only a handful of species have been documented, a growing number of NFFC are implicated in medically significant bites. The majority of these snakes have oral products (Duvernoy's secretions, or venoms) with unknown biomedical properties and their potential for causing harm in humans is unknown. Increasingly, multiple NFFC species are entering the commercial snake trade, posing an uncertain risk. Published case reports describing NFFC bites were assessed for evidence-based value, clinical detail and verified species identification. These data were subjected to meta-analysis and a hazard index was generated for select taxa. Cases on which we consulted or that we personally treated were included and subjected to the same assessment criteria. Cases involving approximately 120 species met the selection criteria, and a small subset designated Hazard Level 1 (most hazardous) contained 5 species with lethal potential. Recommended management of these cases included antivenom for 3 species, Dispholidus typus, Rhabdophis tigrinus and Rhabdophis subminiatus, whereas others in this subset without commercially available antivenoms (Thelotornis spp.) were treated with plasma/erythrocyte replacement therapy and supportive care. Heparin, antifibrinolytics and/or plasmapheresis/exchange transfusion have been used in the management of some Hazard Level 1 envenomings, but evidence-based analysis positively contraindicates the use of any of these interventions. Hazard Level 2/3 species were involved in cases containing mixed quality data that implicated these taxa (e.g. Boiga irregularis, Philodryas olfersii, Malpolon monspessulanus) with bites that caused rare systemic effects. Recommended management may include use of…
Ultrasonography-Based Thyroidal and Perithyroidal Anatomy and Its Clinical Significance.
Ha, Eun Ju; Baek, Jung Hwan; Lee, Jeong Hyun
2015-01-01
Ultrasonography (US)-guided procedures such as ethanol ablation, radiofrequency ablation, laser ablation, selective nerve block, and core needle biopsy have been widely applied in the diagnosis and management of thyroid and neck lesions. For a safe and effective US-guided procedure, knowledge of neck anatomy, particularly that of the nerves, vessels, and other critical structures, is essential. However, most previous reports evaluated neck anatomy based on cadavers, computed tomography, or magnetic resonance imaging rather than US. Therefore, the aim of this article was to elucidate US-based thyroidal and perithyroidal anatomy, as well as its clinical significance in the use of prevention techniques for complications during the US-guided procedures. Knowledge of these areas may be helpful for maximizing the efficacy and minimizing the complications of US-guided procedures for the thyroid and other neck lesions. PMID:26175574
Significance of platelet count and platelet-based models for hepatocellular carcinoma recurrence
Pang, Qing; Zhang, Jing-Yao; Xu, Xin-Sen; Song, Si-Dong; Qu, Kai; Chen, Wei; Zhou, Yan-Yan; Miao, Run-Chen; Liu, Su-Shun; Dong, Ya-Feng; Liu, Chang
2015-01-01
AIM: To explore the effects of platelet count (PLT) and 11 platelet-based indices on postoperative recurrence of hepatocellular carcinoma (HCC). METHODS: We retrospectively analyzed 172 HCC patients who were treated by partial hepatectomy. Preoperative data, including laboratory biochemical results, were used to calculate the 11 indices included in the analysis. We performed receiver operating characteristic curve analysis to determine the optimal cut-off values for predicting recurrence. Cumulative rates of HCC recurrence were calculated using Kaplan-Meier survival curves and differences were analyzed by log-rank tests. Multivariate analyses were performed to identify independent predictors of recurrence, early recurrence (within one year after surgery), and late recurrence in HCC. To obtain better prognostic models, PLT-based indices were analyzed separately after being expressed as binary and continuous variables. Two platelet-unrelated, validated HCC prognostic models were included in the analyses as reference indices. Additional analyses were performed after patients were stratified based on hepatitis B virus infection status, cirrhosis, and tumor size to investigate the significance of platelets in different subgroups. RESULTS: In the study cohort, 44.2% (76/172) of patients experienced HCC recurrence, and 50.6% (87/172) died during a median follow-up time of 46 mo. PLT and five of the 11 platelet-related models were significant predisposing factors for recurrence (P < 0.05). Multivariate analysis indicated that, among the clinical parameters, presence of ascites, PLT ≥ 148 × 109/L, alkaline phosphatase ≥ 116 U/L, and tumor size ≥ 5 cm were independently associated with a higher risk of HCC recurrence (P < 0.05). Independent and significant models included the aspartate aminotransferase/PLT index, fibrosis index based on the four factors, fibro-quotient, aspartate aminotransferase/PLT/γ-glutamyl transpeptidase/alpha-fetoprotein index, and the PLT
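Two of the computations mentioned, a platelet-based index and a receiver operating characteristic cut-off, can be sketched directly. The APRI formula below is the standard published one; the Youden-index cut-off search is a common way to choose an optimal threshold, though the abstract does not state which criterion the authors used:

```python
def apri(ast_u_l, ast_uln_u_l, platelets_1e9_l):
    """AST-to-platelet ratio index (APRI):
    (AST / upper limit of normal) * 100 / platelet count (10^9/L)."""
    return (ast_u_l / ast_uln_u_l) * 100.0 / platelets_1e9_l

def youden_cutoff(values, labels):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1,
    scanning the observed values; labels are 1 for recurrence, 0 otherwise."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and not y)
        j = tp / positives + tn / negatives - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

On perfectly separated toy data the search recovers the separating threshold with J = 1; on real data J quantifies how well the index splits recurrent from non-recurrent cases.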
Minimizing the probable maximum flood
Woodbury, M.S.; Pansic, N.; Eberlein, D.T.
1994-06-01
This article examines Wisconsin Electric Power Company's efforts to determine an economical way to comply with Federal Energy Regulatory Commission requirements at two hydroelectric developments on the Michigamme River. Their efforts included refinement of the area's probable maximum flood model based, in part, on a newly developed probable maximum precipitation estimate.
Probability for Weather and Climate
NASA Astrophysics Data System (ADS)
Smith, L. A.
2013-12-01
Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather resembles interpolation in state space, while climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory which has proven essential in meteorology will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error. The aims of different ensemble strategies, and fundamental differences in ensemble design to support of…
NASA Astrophysics Data System (ADS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming
2013-03-01
Existing methods for early and differential diagnosis of oral cancer are limited because early symptoms are inconspicuous and imaging examinations are imperfect. In this paper, classification models of oral adenocarcinoma, carcinoma tissues and a control group using just four features are established with a hybrid Gaussian process (HGP) classification algorithm that incorporates noise-reduction and posterior-probability mechanisms. During the experiments, oral tissues were divided into three groups: adenocarcinoma (n = 87), carcinoma (n = 100) and control (n = 134), and spectral data for these groups were collected. The proposed HGP classification method achieved a diagnostic sensitivity of 56.35%, a specificity of about 70.00%, and a Matthews correlation coefficient (MCC) of 0.36, showing much better performance than the compared methods. These results indicate that HGP-based LRS detection analysis gives accurate results for the diagnosis of oral cancer, and its application prospects are satisfactory.
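The sensitivity, specificity and Matthews correlation coefficient reported above all follow from a binary confusion matrix. A minimal sketch of those formulas, using illustrative counts (the paper reports rates for a three-group problem, so the matrix below is hypothetical):

```python
import math

def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and Matthews correlation coefficient (MCC)
    from the four cells of a binary confusion matrix."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sens, spec, mcc

# Hypothetical counts for a cancer-vs-control split (not from the paper):
sens, spec, mcc = diagnostic_metrics(tp=49, fn=38, tn=94, fp=40)
print(f"sensitivity={sens:.2%}  specificity={spec:.2%}  MCC={mcc:.2f}")
```

MCC is often preferred over accuracy here because the three tissue groups are of unequal size, and MCC stays near zero for a classifier that merely guesses the majority class.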
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062
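The overdispersed-binomial setting in this abstract is easy to reproduce numerically. The sketch below uses a beta-binomial simulation (parameter values are illustrative, not the paper's) to show the bias of the arcsine transformation growing with the intracluster correlation ρ:

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_binomial_phat(p, rho, n, size):
    """Draw estimated proportions p-hat from an overdispersed (beta-binomial)
    model with intracluster correlation rho; rho -> 0 recovers the binomial."""
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    cluster_probs = rng.beta(a, b, size=size)
    return rng.binomial(n, cluster_probs) / n

def arcsine_bias(p, rho, n=50, size=200_000):
    """Mean of the arcsine-transformed estimate minus the transformed truth."""
    phat = beta_binomial_phat(p, rho, n, size)
    return np.arcsin(np.sqrt(phat)).mean() - np.arcsin(np.sqrt(p))

for rho in (0.01, 0.05, 0.10):
    print(f"rho={rho:.2f}  bias={arcsine_bias(p=0.2, rho=rho):+.4f}")
```

For p < 0.5 the arcsine transform is concave, so by Jensen's inequality the bias is negative and its magnitude grows with the extra between-cluster variance, roughly linearly in ρ, matching the abstract's claim.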
NASA Astrophysics Data System (ADS)
Chen, Yanli; Du, Lianhuan; Yang, Peihua; Sun, Peng; Yu, Xiang; Mai, Wenjie
2015-08-01
Here, we report robust, flexible CNT-based supercapacitor (SC) electrodes fabricated by electrodepositing polypyrrole (PPy) on freestanding vacuum-filtered CNT film. These electrodes demonstrate significantly improved mechanical properties (an ultimate tensile strength of 16 MPa) and greatly enhanced electrochemical performance (5.6 times larger areal capacitance). The major drawback of conductive-polymer electrodes, fast capacitance decay caused by structural breakdown and the resulting poor cycling stability, is not observed in our case. All-solid-state SCs assembled with the robust CNT/PPy electrodes exhibit excellent flexibility, long lifetime (95% capacitance retention after 10,000 cycles) and high electrochemical performance (a total device volumetric capacitance of 4.9 F/cm3). Moreover, a flexible SC pack is demonstrated to light up 53 LEDs or drive a digital watch, indicating the broad potential of our SCs for portable/wearable electronics.
A Network-Based Method to Assess the Statistical Significance of Mild Co-Regulation Effects
Horvát, Emőke-Ágnes; Zhang, Jitao David; Uhlmann, Stefan; Sahin, Özgür; Zweig, Katharina Anna
2013-01-01
Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis. PMID:24039936
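The co-occurrence statistic at the heart of the method is simply the number of common neighbours of two same-type nodes in the bipartite graph. A toy sketch (the miRNA and protein names are made up, and this omits SICORE's significance test):

```python
from itertools import combinations

# Toy bipartite network: miRNAs (one node type) -> target proteins (other type).
# The names are illustrative, not taken from the paper's screening data.
targets = {
    "miR-A": {"EGFR", "CCND1", "CDK4"},
    "miR-B": {"EGFR", "CCND1", "RB1"},
    "miR-C": {"TP53"},
}

def co_occurrence(neighbours):
    """Number of common neighbours for every pair of same-type nodes."""
    return {
        (u, v): len(neighbours[u] & neighbours[v])
        for u, v in combinations(sorted(neighbours), 2)
    }

print(co_occurrence(targets))
```

SICORE's contribution is then to decide which of these raw counts are statistically significant given the degree sequence of the network, which the plain count above cannot do on its own.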
Visualization of the significance of Receiver Operating Characteristics based on confidence ellipses
NASA Astrophysics Data System (ADS)
Sarlis, Nicholas V.; Christopoulos, Stavros-Richard G.
2014-03-01
The Receiver Operating Characteristics (ROC) is used for the evaluation of prediction methods in various disciplines like meteorology, geophysics, complex system physics, medicine etc. The estimation of the significance of a binary prediction method, however, remains a cumbersome task and is usually done by repeating the calculations by Monte Carlo. The FORTRAN code provided here simplifies this problem by evaluating the significance of binary predictions for a family of ellipses which are based on confidence ellipses and cover the whole ROC space.
Catalogue identifier: AERY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 11511
No. of bytes in distributed program, including test data, etc.: 72906
Distribution format: tar.gz
Programming language: FORTRAN.
Computer: Any computer supporting a GNU FORTRAN compiler.
Operating system: Linux, MacOS, Windows.
RAM: 1 Mbyte
Classification: 4.13, 9, 14.
Nature of problem: The Receiver Operating Characteristics (ROC) is used for the evaluation of prediction methods in various disciplines like meteorology, geophysics, complex system physics, medicine etc. The estimation of the significance of a binary prediction method, however, remains a cumbersome task and is usually done by repeating the calculations by Monte Carlo. The FORTRAN code provided here simplifies this problem by evaluating the significance of binary predictions for a family of ellipses which are based on confidence ellipses and cover the whole ROC space.
Solution method: Using the statistics of random binary predictions for a given value of the predictor threshold εt, one can construct the corresponding confidence ellipses. The envelope of these confidence ellipses is estimated when
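The Monte Carlo baseline that the FORTRAN code is designed to replace can be sketched in a few lines: draw random predictors that issue the same number of alarms and ask how often they match the observed ROC performance. The data below are synthetic; this is the brute-force approach, not the confidence-ellipse method itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def roc_point(pred, truth):
    """Hit rate (TPR) and false-alarm rate (FPR) of a binary prediction."""
    return pred[truth].mean(), pred[~truth].mean()

# Toy observations: 200 events, ~30% positives, and a predictor that is
# wrong on ~20% of cases (both choices are illustrative).
truth = rng.random(200) < 0.3
pred = truth ^ (rng.random(200) < 0.2)
tpr_obs, fpr_obs = roc_point(pred, truth)

# Monte Carlo null: random predictors issuing the same number of alarms.
n_alarms = int(pred.sum())
trials, count = 10_000, 0
for _ in range(trials):
    rand = np.zeros(truth.size, dtype=bool)
    rand[rng.choice(truth.size, n_alarms, replace=False)] = True
    tpr, fpr = roc_point(rand, truth)
    if tpr - fpr >= tpr_obs - fpr_obs:
        count += 1
p_value = (count + 1) / (trials + 1)
print(f"TPR-FPR = {tpr_obs - fpr_obs:.3f}, Monte Carlo p = {p_value:.5f}")
```

The repeated resampling is exactly the cost the published code avoids by tabulating confidence ellipses over the whole ROC space in advance.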
Functional activity maps based on significance measures and Independent Component Analysis.
Martínez-Murcia, F J; Górriz, J M; Ramírez, J; Puntonet, C G; Illán, I A
2013-07-01
The use of functional imaging has proven very helpful in the diagnosis of neurodegenerative diseases such as Alzheimer's Disease (AD). In many cases, the analysis of these images is performed by manual reorientation and visual interpretation, so new statistical techniques enabling a more quantitative analysis are needed. In this work, a new statistical approach to the analysis of functional images, based on significance measures and Independent Component Analysis (ICA), is presented. After image preprocessing, voxels that allow better separation of the two classes are extracted using significance measures such as the Mann-Whitney-Wilcoxon U-test (MWW) and Relative Entropy (RE). After this feature-selection step, the voxel vector is modelled by means of ICA, extracting a few independent components which are used as input to the classifier. Naive Bayes and Support Vector Machine (SVM) classifiers are used in this work. The proposed system has been applied to two different databases: a 96-subject Single Photon Emission Computed Tomography (SPECT) database from the "Virgen de las Nieves" Hospital in Granada, Spain, and a 196-subject Positron Emission Tomography (PET) database from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The proposed system achieves accuracies of up to 96.9% and 91.3% on the SPECT and PET databases, respectively, offering many benefits over methods proposed in recent works. PMID:23660005
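The Mann-Whitney-Wilcoxon feature-selection step can be sketched on synthetic data: score every voxel by how far its U statistic deviates from the null expectation n1·n2/2 and keep the most discriminative ones. All numbers below are illustrative, and the real pipeline continues with ICA and a classifier:

```python
import numpy as np

rng = np.random.default_rng(2)

def mww_u(x, y):
    """Mann-Whitney U statistic of sample x versus sample y.
    Ranks are computed without tie correction (fine for continuous data)."""
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1          # 1-based ranks
    r_x = ranks[: x.size].sum()
    return r_x - x.size * (x.size + 1) / 2

# Synthetic "images": 20 patients vs 20 controls, 500 voxels,
# of which only the first 10 actually differ between the classes.
n1 = n2 = 20
n_vox, informative = 500, 10
patients = rng.normal(0.0, 1.0, (n1, n_vox))
controls = rng.normal(0.0, 1.0, (n2, n_vox))
patients[:, :informative] += 2.0                   # injected class difference

null_mean = n1 * n2 / 2
scores = np.array([abs(mww_u(patients[:, j], controls[:, j]) - null_mean)
                   for j in range(n_vox)])
selected = np.argsort(scores)[-informative:]       # top-scoring voxels
print(sorted(int(j) for j in selected))
```

Under the null, U is centred at n1·n2/2, so a large absolute deviation flags a voxel whose intensity distributions separate the two classes; the selected indices should be dominated by the first ten voxels.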
Palmisano, Stephen; Allison, Robert S.; Schira, Mark M.; Barry, Robert J.
2015-01-01
This paper discusses four major challenges facing modern vection research. Challenge 1 (Defining Vection) outlines the different ways that vection has been defined in the literature and discusses their theoretical and experimental ramifications. The term vection is most often used to refer to visual illusions of self-motion induced in stationary observers (by moving, or simulating the motion of, the surrounding environment). However, vection is increasingly being used to also refer to non-visual illusions of self-motion, visually mediated self-motion perceptions, and even general subjective experiences (i.e., “feelings”) of self-motion. The common thread in all of these definitions is the conscious subjective experience of self-motion. Thus, Challenge 2 (Significance of Vection) tackles the crucial issue of whether such conscious experiences actually serve functional roles during self-motion (e.g., in terms of controlling or guiding the self-motion). After more than 100 years of vection research there has been surprisingly little investigation into its functional significance. Challenge 3 (Vection Measures) discusses the difficulties with existing subjective self-report measures of vection (particularly in the context of contemporary research), and proposes several more objective measures of vection based on recent empirical findings. Finally, Challenge 4 (Neural Basis) reviews the recent neuroimaging literature examining the neural basis of vection and discusses the hurdles still facing these investigations. PMID:25774143
Mass spectrometry-based protein identification with accurate statistical significance assignment
Alves, Gelio; Yu, Yi-Kuo
2015-01-01
Motivation: Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging.
Results: We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-values, eliminating the need for empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested.
Availability and implementation: The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit.
Contact: yyu@ncbi.nlm.nih.gov
Supplementary information: Supplementary data are available at Bioinformatics online.
PMID:25362092
Sato, Tatsuhiko; Hamada, Nobuyuki
2014-01-01
Here we propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of the anti-apoptotic protein Bcl-2, known to frequently occur in human cancer, was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fractions of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeams or broadbeams of energetic heavy ions, as well as of WI-38 normal human fibroblasts irradiated with an X-ray microbeam. The model assembly reproduced the experimentally determined surviving fraction very well over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly merits incorporation into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given the critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641
The spline probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Sithiravel, Rajiv; Tharmarasa, Ratnasingham; McDonald, Mike; Pelletier, Michel; Kirubarajan, Thiagalingam
2012-06-01
The Probability Hypothesis Density Filter (PHD) is a multitarget tracker for recursively estimating the number of targets and their state vectors from a set of observations. The PHD filter is capable of working well in scenarios with false alarms and missed detections. Two distinct PHD filter implementations are available in the literature: the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filters. The SMC-PHD filter uses particles to provide target state estimates, which can lead to a high computational load, whereas the GM-PHD filter does not use particles, but is restricted to linear Gaussian mixture models. The SMC-PHD filter technique provides only weighted samples at discrete points in the state space instead of a continuous estimate of the probability density function of the system state and thus suffers from the well-known degeneracy problem. This paper proposes a B-Spline based Probability Hypothesis Density (S-PHD) filter, which has the capability to model any arbitrary probability density function. The resulting algorithm can handle linear, non-linear, Gaussian, and non-Gaussian models, and the S-PHD filter can also provide continuous estimates of the probability density function of the system state. In addition, by moving the knots dynamically, the S-PHD filter ensures that the splines cover only the region where the probability of the system state is significant, so the high efficiency of the S-PHD filter is maintained at all times. Also, unlike the SMC-PHD filter, the S-PHD filter is immune to the degeneracy problem due to its continuous nature. The S-PHD filter derivations and simulations are provided in this paper.
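The idea of a continuous, spline-based density over the state space can be illustrated with the simplest case: degree-1 (hat-function) B-splines on a hand-picked knot vector. This toy 1-D sketch is ours, with illustrative knots and weights, not the S-PHD filter itself:

```python
import numpy as np

def hat(x, knots, i):
    """Degree-1 B-spline (hat function) centred on interior knot i."""
    left, mid, right = knots[i - 1], knots[i], knots[i + 1]
    rising = (x - left) / (mid - left)
    falling = (right - x) / (right - mid)
    return np.clip(np.minimum(rising, falling), 0.0, None)

# Knots placed densely where the state probability is significant,
# echoing the paper's dynamic knot placement (values are illustrative).
knots = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 4.0])
weights = np.array([0.2, 0.9, 0.6, 0.1])   # one weight per interior knot

x = np.linspace(0.0, 4.0, 4001)
dx = x[1] - x[0]
density = sum(w * hat(x, knots, i + 1) for i, w in enumerate(weights))
density /= density.sum() * dx              # renormalize to a proper pdf

print(f"integral ~ {density.sum() * dx:.6f}, peak at x = {x[density.argmax()]:.2f}")
```

Because the spline basis is continuous everywhere, there is no set of discrete support points to collapse, which is the intuition behind the filter's immunity to the degeneracy problem; the full S-PHD filter additionally propagates such densities through the prediction and update steps.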
Probability mapping of contaminants
Rautman, C.A.; Kaplan, P.G.; McGraw, M.A.; Istok, J.D.; Sigda, J.M.
1994-04-01
Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds).
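The post-processing step described above reduces to counting, per parcel, the fraction of equally likely realizations that exceed the threshold. A minimal sketch with synthetic fields standing in for the geostatistical simulations (the grid size, threshold and lognormal distribution are illustrative, not the Fernald data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for geostatistical realizations: 200 equally likely simulated
# contamination maps on a 10 x 10 grid of remediation-unit-sized parcels.
n_sims, ny, nx = 200, 10, 10
simulations = rng.lognormal(mean=3.0, sigma=1.0, size=(n_sims, ny, nx))

threshold = 35.0  # hypothetical clean-up level (e.g. ppm)

# Probability map: fraction of realizations exceeding the threshold per parcel.
prob_exceed = (simulations > threshold).mean(axis=0)
print(prob_exceed.round(2))
```

A site operator can then threshold this map directly, for example flagging every parcel with `prob_exceed > 0.05` for remediation, which is how the exceedance probabilities feed cost-based decision models.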