Significance of "high probability/low damage" versus "low probability/high damage" flood events
NASA Astrophysics Data System (ADS)
Merz, B.; Elmer, F.; Thieken, A. H.
2009-06-01
The need for an efficient use of limited resources fosters the application of risk-oriented design in flood mitigation. Flood defence measures reduce future damage. Traditionally, this benefit is quantified via the expected annual damage. We analyse the contribution of "high probability/low damage" floods versus the contribution of "low probability/high damage" events to the expected annual damage. For three case studies, i.e. actual flood situations in flood-prone communities in Germany, it is shown that the expected annual damage is dominated by "high probability/low damage" events. Extreme events play a minor role, even though they cause high damage. Using typical values for flood frequency behaviour, flood plain morphology, distribution of assets and vulnerability, it is shown that this also holds for the general case of river floods in Germany. This result is compared to the significance of extreme events in the public perception. "Low probability/high damage" events are more important in the societal view than is expressed by the expected annual damage. We conclude that the expected annual damage should be used with care since it is not in agreement with societal priorities. Further, risk aversion functions that penalise events with disastrous consequences are introduced into the appraisal of risk mitigation options. It is shown that risk aversion may have substantial implications for decision-making. Different flood mitigation decisions are probable when risk aversion is taken into account.
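The expected annual damage at the heart of this abstract is the integral of damage over annual exceedance probability. A minimal sketch (with hypothetical damage figures, not the study's data) shows how it is approximated from a few return-period points:

```python
# Expected annual damage (EAD): integral of damage D(p) over annual
# exceedance probability p, approximated with the trapezoidal rule.
# All damage values below are illustrative, not from the study.

def expected_annual_damage(probs, damages):
    """probs: annual exceedance probabilities in descending order;
    damages: damage associated with each probability."""
    ead = 0.0
    for i in range(len(probs) - 1):
        width = probs[i] - probs[i + 1]
        ead += 0.5 * (damages[i] + damages[i + 1]) * width
    return ead

# Frequent/low-damage vs rare/high-damage events (hypothetical values):
probs = [0.5, 0.1, 0.01, 0.001]      # 2-, 10-, 100-, 1000-year floods
damages = [1.0, 10.0, 100.0, 300.0]  # damage in arbitrary monetary units

ead = expected_annual_damage(probs, damages)
```

Because each segment's contribution is weighted by its probability width, the frequent events carry most of the integral, which is the paper's central observation.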
[Biometric bases: basic concepts of probability calculation].
Dinya, E
1998-04-26
The author gives an outline of the basic concepts of probability theory. The bases of event algebra, the definition of probability, the classical probability model and the random variable are presented.

Significance probability mapping: the final touch in t-statistic mapping.
Hassainia, F; Petit, D; Montplaisir, J
1994-01-01
Significance Probability Mapping (SPM), based on Student's t-statistic, is widely used for comparing mean brain topography maps of two groups. The map resulting from this process represents the distribution of t-values over the entire scalp. However, t-values by themselves cannot reveal whether or not group differences are significant. Significance levels associated with a few t-values are therefore commonly indicated on map legends to give the reader an idea of the significance levels of t-values. Nevertheless, a precise significance level topography cannot be achieved with these few significance values. We introduce a new kind of map which directly displays significance level topography in order to relieve the reader from converting multiple t-values to their corresponding significance probabilities, and to obtain a good quantification and a better localization of regions with significant differences between groups. As an illustration of this type of map, we present a comparison of EEG activity in Alzheimer's patients and age-matched control subjects for both wakefulness and REM sleep.
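Converting a t-map into the significance-probability map the abstract describes amounts to applying a two-tailed tail-probability transform to every t-value. The sketch below uses the normal approximation to Student's t, an assumption adequate only at large degrees of freedom; a real SPM would use the exact t-distribution for the group sizes at hand:

```python
from statistics import NormalDist

def significance_map(t_map):
    """Turn a 2-D map of t-values into two-tailed significance
    probabilities, using the normal approximation to Student's t."""
    nd = NormalDist()
    return [[2.0 * (1.0 - nd.cdf(abs(t))) for t in row] for row in t_map]

# A toy 2x2 "scalp map" of t-values (illustrative numbers only).
p_map = significance_map([[0.5, 2.1], [3.3, -2.6]])
```

Displaying `p_map` instead of the raw t-values spares the reader the t-to-p conversion, which is the point of the proposed map.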
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
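The sampling-and-sorting step the abstract describes can be sketched as a plain amplitude histogram; the bin count and waveform here are illustrative choices, not the method's specification:

```python
import math

def sample_histogram(samples, n_bins=8):
    """Sort waveform samples into amplitude bins and return normalized
    bin counts: an empirical PDF of the waveform over the interval."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0  # guard against constant input
    counts = [0] * n_bins
    for s in samples:
        idx = min(int((s - lo) / width), n_bins - 1)
        counts[idx] += 1
    total = len(samples)
    return [c / total for c in counts]

# Sample one half cycle of a sinusoid, the minimum interval the
# abstract requires for the histogram to characterize the waveform.
samples = [math.sin(math.pi * k / 200) for k in range(201)]
pdf = sample_histogram(samples)
```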
Probability-based TCP congestion control mechanism
NASA Astrophysics Data System (ADS)
Xu, Changbiao; Yang, Shizhong; Xian, Yongju
2005-11-01
To mitigate TCP global synchronization and improve network throughput, an improved TCP congestion control mechanism is proposed, namely P-TCP, which adopts a probability-based approach to adjust the congestion window independently when network congestion occurs. Therefore, some P-TCP connections may decrease the congestion window greatly while other P-TCP connections decrease it only slightly. Simulation results show that TCP global synchronization can be effectively mitigated, which leads to efficient utilization of network resources as well as effective mitigation of network congestion. Simulation results also give some valuable references for determining the related parameters in P-TCP.
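The probability-based window adjustment can be illustrated with a toy backoff rule in which each connection draws its own reduction factor instead of the fixed 1/2 of standard TCP; the factor range below is a hypothetical choice, not the paper's parameters:

```python
import random

def ptcp_backoff(cwnd, rng, min_factor=0.5, max_factor=0.9):
    """On a congestion signal, shrink the congestion window by a randomly
    drawn factor rather than a fixed 1/2, so competing connections back
    off by different amounts and avoid global synchronization."""
    factor = rng.uniform(min_factor, max_factor)
    return max(1.0, cwnd * factor)

rng = random.Random(42)
windows = [64.0] * 4  # four competing connections, same window size
windows = [ptcp_backoff(w, rng) for w in windows]
```

After one congestion event the four windows differ, so the connections no longer probe the bottleneck in lockstep.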
Probability based models for estimation of wildfire risk
Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit
2004-01-01
We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...
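The three probabilities combine by the chain rule: the unconditional probability of a large fire is the ignition probability times the conditional probability of a large fire given ignition. A trivial sketch with made-up per-cell-day values:

```python
def large_fire_probability(p_occurrence, p_large_given_ignition):
    """Unconditional probability of a large fire in one cell-day:
    P(large) = P(ignition) * P(large | ignition)."""
    return p_occurrence * p_large_given_ignition

# Hypothetical per-cell-day values, not fitted estimates from the paper.
p_ign = 0.002          # probability of fire occurrence
p_large_given = 0.05   # probability a fire grows large, given ignition
p_large = large_fire_probability(p_ign, p_large_given)
```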
Significance of stress transfer in time-dependent earthquake probability calculation
NASA Astrophysics Data System (ADS)
Parsons, T.
2004-12-01
A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? Data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability-density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress-transfer calculations, such as different slip models, fault rake, dip and friction are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus to avoid overstating probability change on segments, stress-change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress-change to stressing-rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question, or the tectonic stressing rate is low.
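A minimal sketch of the kind of time-dependent calculation described, using a lognormal renewal model as a simple stand-in for the point-process renewal model and folding the stress step in as a clock advance (stress change divided by tectonic stressing rate); all numbers are hypothetical:

```python
import math
from statistics import NormalDist

def conditional_probability(t_elapsed, window, mean_recurrence, aperiodicity):
    """P(event in [t, t + window] | no event by time t) under a lognormal
    renewal model whose coefficient of variation plays the role of the
    aperiodicity (a simplification of the Brownian passage-time models
    typically used in such forecasts)."""
    sigma = math.sqrt(math.log(1.0 + aperiodicity ** 2))
    mu = math.log(mean_recurrence) - 0.5 * sigma ** 2
    nd = NormalDist()

    def cdf(t):
        return nd.cdf((math.log(t) - mu) / sigma) if t > 0 else 0.0

    return (cdf(t_elapsed + window) - cdf(t_elapsed)) / (1.0 - cdf(t_elapsed))

# A static stress step is commonly folded in as a clock advance:
# delta_t = stress_change / tectonic_stressing_rate (values hypothetical).
clock_advance = 2.0 / 0.1  # 2 MPa step over 0.1 MPa/yr -> 20 yr advance
p_before = conditional_probability(150.0, 30.0, 200.0, 0.5)
p_after = conditional_probability(150.0 + clock_advance, 30.0, 200.0, 0.5)
```

The abstract's point is that `p_after - p_before` is only statistically meaningful when the clock advance is large relative to the uncertainty spanned by the recurrence-aperiodicity distributions.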
Jang, Ja Yoon; Kim, Sun Mi; Kim, Jin Hwan; Jang, Mijung; La Yun, Bo; Lee, Jong Yoon; Lee, Soo Hyun; Kim, Bohyoung
2017-03-01
The aims of this study were to determine the malignancy rate of probably benign lesions that show an interval change on follow-up ultrasound and to evaluate the differences seen on imaging between benign and malignant lesions initially categorized as probably benign but with interval change on follow-up breast ultrasound. We retrospectively reviewed 11,323 lesions from ultrasound-guided core-biopsies performed between June 2004 and December 2014 and identified 289 lesions (266 patients) with an interval change from probably benign (Breast Imaging Reporting and Data System [BI-RADS] category 3) in the previous 2 years. Malignancy rates were compared according to the ultrasound findings and the characteristics of the interval changes, including changes in morphology and/or diameter. The malignancy rate for probably benign lesions that showed an interval change on follow-up ultrasound was 6.9% (20/289). The malignancy rate was higher for clustered cysts (33.3%) and irregular or noncircumscribed masses (12.7%) than for circumscribed oval masses (5%) or complicated cysts (5%) seen on initial ultrasound (P = 0.043). Fifty-five percent of the malignancies were found to be ductal carcinoma in situ, and there was 1 case of lymph node metastasis among the patients with invasive disease in whom biopsy was delayed by 6 to 15 months. The extent of invasiveness was greater in missed cases. There was a significant difference in the maximal diameter change between the 20 malignant lesions and the 269 benign lesions (4.0 mm vs 2.7 mm, P = 0.002). The cutoff value for maximal diameter change per initial diameter was 39.0% for predicting malignancy (sensitivity 95%, specificity 53.5%). The malignancy rate for morphologically changed lesions was significantly higher than for morphologically stable lesions (13.6% vs 4.9%; P = 0.024). The 6.9% of probably benign lesions that showed an interval change and finally proved malignant were mostly DCIS. The sonographic
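The study's 39% diameter-change cutoff translates directly into a screening rule; a trivial sketch (the threshold comes from the abstract, the example diameters are made up):

```python
def flags_malignancy(initial_mm, followup_mm, cutoff=0.39):
    """Flag a probably benign lesion when its maximal diameter grew by
    more than the reported cutoff (39% of the initial diameter)."""
    change = (followup_mm - initial_mm) / initial_mm
    return change > cutoff

grew_fast = flags_malignancy(10.0, 15.0)  # 50% growth: flagged
grew_slow = flags_malignancy(10.0, 12.0)  # 20% growth: not flagged
```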
Vehicle Detection Based on Probability Hypothesis Density Filter
Zhang, Feihu; Knoll, Alois
2016-01-01
In the past decade, vehicle detection has improved significantly. By utilizing cameras, vehicles can be detected in Regions of Interest (ROI) in complex environments. However, vision techniques often suffer from false positives and a limited field of view. In this paper, a LiDAR-based vehicle detection approach is proposed using the Probability Hypothesis Density (PHD) filter. The proposed approach consists of two phases: a hypothesis generation phase to detect potential objects and a hypothesis verification phase to classify objects. The performance of the proposed approach is evaluated in complex scenarios and compared with the state-of-the-art. PMID:27070621
GNSS integer ambiguity validation based on posterior probability
NASA Astrophysics Data System (ADS)
Wu, Zemin; Bian, Shaofeng
2015-10-01
GNSS integer ambiguity validation has been considered a challenging task for decades. Several kinds of validation tests have been developed and are widely used, but their theoretical basis is weak. Ambiguity validation is theoretically an issue of hypothesis testing. In the framework of Bayesian hypothesis testing, the posterior probability is the canonical standard on which statistical decisions should be based. In this contribution, (i) we derive the posterior probability of the fixed ambiguity based on the Bayesian principle and modify it for practical ambiguity validation. (ii) The optimal property of the posterior probability test is proved based on an extended Neyman-Pearson lemma. Since the validation failure rate is the issue users are most concerned about, (iii) we derive the failure-rate upper bound of the posterior probability test, so the user can apply the test either with a fixed posterior probability or with a fixed failure rate. Simulated as well as real observed data are used for experimental validation. The results show that (i) the posterior probability test is the most effective among the R-ratio test, difference test, ellipsoidal integer aperture test and posterior probability test, (ii) the posterior probability test is computationally efficient, and (iii) the failure-rate estimation for the posterior probability test is useful.
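The core idea, weighting each integer candidate by its likelihood under the float solution and normalizing, can be sketched in one dimension. A real GNSS implementation uses the full ambiguity covariance matrix, so this scalar Gaussian version is only an illustration:

```python
import math

def posterior_probabilities(float_est, candidates, variance):
    """Posterior probability of each integer ambiguity candidate given
    the float estimate, under a simplified 1-D Gaussian model with a
    flat prior: P(z | a_hat) proportional to exp(-(a_hat - z)^2 / 2 var)."""
    weights = [math.exp(-(float_est - z) ** 2 / (2.0 * variance))
               for z in candidates]
    total = sum(weights)
    return [w / total for w in weights]

probs = posterior_probabilities(3.2, [2, 3, 4], variance=0.09)
best = max(probs)
# Accept the fixed solution only if the posterior clears a threshold;
# the threshold directly bounds the failure rate by 1 - threshold.
accept = best >= 0.99
```

Here the best candidate (z = 3) is the clear winner but still falls short of the 0.99 threshold, so the test would keep the float solution rather than risk a wrong fix.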
Probability-based nitrate contamination map of groundwater in Kinmen.
Liu, Chen-Wuing; Wang, Yeuh-Bin; Jang, Cheng-Shin
2013-12-01
Groundwater supplies over 50% of drinking water in Kinmen. Approximately 16.8% of groundwater samples in Kinmen exceed the drinking water quality standard (DWQS) of NO3(-)-N (10 mg/L). Residents drinking highly nitrate-polluted groundwater face a potential health risk. To formulate an effective water quality management plan and assure safe drinking water in Kinmen, a detailed spatial distribution of nitrate-N in groundwater is a prerequisite. The aim of this study is to develop an efficient scheme for evaluating the spatial distribution of nitrate-N in residential well water using a logistic regression (LR) model. A probability-based nitrate-N contamination map of Kinmen is constructed. The LR model predicts the binary occurrence probability of groundwater nitrate-N concentrations exceeding the DWQS from simple measurement variables, including sampling season, soil type, water table depth, pH, EC, DO, and Eh. The results reveal that three statistically significant explanatory variables (soil type, pH, and EC) are selected by the forward stepwise LR analysis. The total ratio of correct classification reaches 92.7%. The highest probability of nitrate-N contamination appears in the central zone, indicating that groundwater in the central zone should not be used for drinking purposes. Furthermore, a handy EC-pH-probability curve for nitrate-N exceeding the DWQS threshold was developed. This curve can be used for preliminary screening of nitrate-N contamination in Kinmen groundwater. This study recommends that the local agency implement best management practice strategies to control nonpoint nitrogen sources and carry out systematic monitoring of groundwater quality in residential wells in the high nitrate-N contamination zones.
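The LR model's exceedance probability is the standard logistic transform of a linear predictor in the measured variables; the coefficients below are invented for illustration, not the study's fitted values:

```python
import math

def exceedance_probability(ph, ec, coef_ph, coef_ec, intercept):
    """Logistic-regression probability that nitrate-N exceeds the DWQS,
    with pH and EC as predictors. All coefficients here are hypothetical;
    the study fits its own by forward stepwise LR."""
    logit = intercept + coef_ph * ph + coef_ec * ec
    return 1.0 / (1.0 + math.exp(-logit))

p = exceedance_probability(ph=6.5, ec=800, coef_ph=-1.2, coef_ec=0.004,
                           intercept=3.0)
classified_contaminated = p >= 0.5  # binary classification at 0.5
```

Evaluating this probability over a grid of wells is what produces the probability-based contamination map, and holding it at the DWQS threshold while varying pH and EC traces out the EC-pH-probability screening curve.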
PROBABILITY BASED CORROSION CONTROL FOR WASTE TANKS - PART II
Hoffman, E.; Edwards, T.
2010-12-09
As part of an ongoing study to evaluate the discontinuity in the corrosion controls at the SRS tank farm, a study was conducted this year to assess the minimum nitrite concentrations required to inhibit pitting at nitrate concentrations below 1 molar (see Figure 1). Current controls on the tank farm solution chemistry are in place to prevent the initiation and propagation of pitting and stress corrosion cracking in the primary steel waste tanks. The controls are based upon a series of experiments performed with simulated solutions on materials used for construction of the tanks, namely ASTM A537 carbon steel (A537). During FY09, an experimental program was undertaken to investigate the risk associated with reducing the minimum molar nitrite concentration required to confidently inhibit pitting in dilute solutions (i.e., less than 1 molar nitrate). The experimental results and conclusions herein provide a statistical basis to quantify the probability of pitting for the tank wall exposed to various solutions with dilute concentrations of nitrate and nitrite. Understanding the probability of pitting will allow the facility to make tank-specific, risk-based decisions for chemistry control. Based on previous electrochemical testing, a statistical test matrix was developed to refine and solidify the application of the statistical mixture/amount model to corrosion of A537 steel. A mixture/amount model was identified based on statistical analysis of recent and historically collected electrochemical data. This model provides a more complex relationship between the nitrate and nitrite concentrations and the probability of pitting than is represented by the model underlying the current chemistry control program, and its use may provide a technical basis for using less nitrite to inhibit pitting at concentrations below 1 molar nitrate. FY09 results fit within the mixture/amount model and further refine the nitrate regime in which the model is applicable. The combination of visual observations and cyclic
PROBABILITY BASED CORROSION CONTROL FOR LIQUID WASTE TANKS - PART III
Hoffman, E.; Edwards, T.
2010-12-09
The liquid waste chemistry control program is designed to reduce the pitting corrosion occurrence on tank walls. The chemistry control program has been implemented, in part, by applying engineering judgment safety factors to experimental data. However, the simple application of a general safety factor can result in use of excessive corrosion inhibiting agents. The required use of excess corrosion inhibitors can be costly for tank maintenance, waste processing, and in future tank closure. It is proposed that a probability-based approach can be used to quantify the risk associated with the chemistry control program. This approach can lead to the application of tank-specific chemistry control programs reducing overall costs associated with overly conservative use of inhibitor. Furthermore, when using nitrite as an inhibitor, the current chemistry control program is based on a linear model of increased aggressive species requiring increased protective species. This linear model was primarily supported by experimental data obtained from dilute solutions with nitrate concentrations less than 0.6 M, but is used to produce the current chemistry control program up to 1.0 M nitrate. Therefore, in the nitrate space between 0.6 and 1.0 M, the current control limit is based on assumptions that the linear model developed from data in the <0.6 M region is applicable in the 0.6-1.0 M region. Due to this assumption, further investigation of the nitrate region of 0.6 M to 1.0 M has potential for significant inhibitor reduction, while maintaining the same level of corrosion risk associated with the current chemistry control program. Ongoing studies have been conducted in FY07, FY08, FY09 and FY10 to evaluate the corrosion controls at the SRS tank farm and to assess the minimum nitrite concentrations to inhibit pitting in ASTM A537 carbon steel below 1.0 molar nitrate. The experimentation from FY08 suggested a non-linear model known as the mixture/amount model could be used to predict
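The mixture/amount idea, pitting probability rising with the aggressive species (nitrate) and falling with the inhibitor (nitrite), with an interaction term for the mixture, can be sketched as a logistic model; the functional form and coefficients here are illustrative assumptions, not the fitted SRS model:

```python
import math

def pitting_probability(nitrate, nitrite, b0, b_no3, b_no2, b_mix):
    """Hypothetical logistic model in the spirit of a mixture/amount
    model: pitting probability increases with nitrate, decreases with
    nitrite, with a nitrate-nitrite interaction term."""
    logit = b0 + b_no3 * nitrate - b_no2 * nitrite - b_mix * nitrate * nitrite
    return 1.0 / (1.0 + math.exp(-logit))

# Same 0.8 M nitrate solution with low vs high inhibitor (coefficients
# are made-up values for illustration).
p_low_inhibitor = pitting_probability(0.8, 0.1, b0=-1.0, b_no3=4.0,
                                      b_no2=6.0, b_mix=2.0)
p_high_inhibitor = pitting_probability(0.8, 0.6, b0=-1.0, b_no3=4.0,
                                       b_no2=6.0, b_mix=2.0)
```

A probability-based control limit then becomes "the least nitrite for which the predicted pitting probability stays below an acceptable risk level," rather than a linear rule with a blanket safety factor.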
Probability distribution of forecasts based on the ETAS model
NASA Astrophysics Data System (ADS)
Harte, D. S.
2017-07-01
Earthquake probability forecasts based on a point process model, which is defined with a conditional intensity function (e.g. ETAS), are generally made by using the history of the process to the current point in time, and by then simulating over the future time interval over which a forecast is required. By repeating the simulation multiple times, an estimate of the mean number of events together with the empirical probability distribution of event counts can be derived. This can involve a considerable amount of computation. Here we derive a recursive procedure to approximate the expected number of events when forecasts are based on the ETAS model. To assess the associated uncertainty of this expected number, we then derive the probability generating function of the distribution of the forecasted number of events. This theoretically derived distribution is very complex; hence we show how it can be approximated using the negative binomial distribution.
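The negative binomial approximation mentioned at the end is convenient because it is fully determined by the forecast mean and an inflated (overdispersed) variance; a sketch with hypothetical moments:

```python
import math

def negbin_pmf(k, mean, variance):
    """Negative binomial pmf parameterized by mean and variance
    (variance > mean), a tractable stand-in for the exact forecast
    count distribution derived from the generating function."""
    r = mean ** 2 / (variance - mean)  # dispersion parameter
    p = mean / variance                # "success" probability
    return (math.gamma(k + r) / (math.gamma(r) * math.factorial(k))
            * p ** r * (1.0 - p) ** k)

# Suppose the recursive procedure gives a mean of 5 events and
# earthquake clustering inflates the variance to 12 (hypothetical).
pmf = [negbin_pmf(k, mean=5.0, variance=12.0) for k in range(100)]
```

The overdispersion (variance above the Poisson value of 5) is exactly what aftershock clustering under ETAS produces, which is why the negative binomial fits better than a Poisson here.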
Scene text detection based on probability map and hierarchical model
NASA Astrophysics Data System (ADS)
Zhou, Gang; Liu, Yuehu
2012-06-01
Scene text detection is an important step for the text-based information extraction system. This problem is challenging due to the variations of size, unknown colors, and background complexity. We present a novel algorithm to robustly detect text in scene images. To segment text candidate connected components (CC) from images, a text probability map consisting of the text position and scale information is estimated by a text region detector. To filter out the non-text CCs, a hierarchical model consisting of two classifiers in cascade is utilized. The first stage of the model estimates text probabilities with unary component features. The second stage classifier is trained with both probability features and similarity features. Since the proposed method is learning-based, there are very few manual parameters required. Experimental results on the public benchmark ICDAR dataset show that our algorithm outperforms other state-of-the-art methods.
He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei
2014-01-01
Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model can better reflect authenticity and biological significance; therefore, it is more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. The algorithm of probability graph isomorphism evaluation based on circuit simulation
Lake Superior Phytoplankton Characterization from the 2006 Probability Based Survey
We conducted a late summer probability based survey of Lake Superior in 2006 which consisted of 52 sites stratified across 3 depth zones. As part of this effort, we collected composite phytoplankton samples from the epilimnion and the fluorescence maxima (Fmax) at 29 of the site...
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that are relevant to the parts of sexual science that advocate for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing the limitations of applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
Brain MR image segmentation improved algorithm based on probability
NASA Astrophysics Data System (ADS)
Liao, Hengxu; Liu, Gang; Guo, Xiantang
2017-08-01
The local weighted voting algorithm is a mainstream segmentation algorithm. It takes full account of the influence of the image likelihood and the prior probabilities of labels on the segmentation results. But this method can still be improved, since its essence is to select the label with the maximum probability. If the probability of a label is 70%, that may be acceptable mathematically, but in the actual segmentation it may be wrong. So we use a matrix completion algorithm as a supplement. When the probability of the former is larger, the result of the former algorithm is adopted; when the probability of the latter is larger, the result of the latter algorithm is adopted. This is equivalent to adding an automatic algorithm-selection switch that can theoretically ensure that the accuracy of the proposed algorithm is superior to the local weighted voting algorithm. At the same time, we propose an improved matrix completion algorithm based on an enumeration method. In addition, this paper also uses a multi-parameter registration model to reduce the influence that registration has on the segmentation. The experimental results show that the accuracy of the algorithm is better than the common segmentation algorithm.
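The described switch, take the voting result when it is confident and otherwise defer to the secondary algorithm, can be sketched per voxel; the 0.7 threshold and the stand-in fallback label are illustrative assumptions, not the paper's parameters:

```python
def fused_label(probabilities, fallback, threshold=0.7):
    """Pick the label with maximum fused probability (as in local
    weighted voting); when even the winning probability is below the
    confidence threshold, defer to a secondary result (standing in here
    for the matrix-completion output)."""
    label, p = max(probabilities.items(), key=lambda kv: kv[1])
    return label if p >= threshold else fallback

# A confident voxel keeps its vote; an uncertain one falls back.
confident = fused_label({"gm": 0.85, "wm": 0.10, "csf": 0.05}, fallback="wm")
uncertain = fused_label({"gm": 0.55, "wm": 0.40, "csf": 0.05}, fallback="wm")
```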
Walker, J.D.; Burchfiel, B.C.; Royden, L.H.
1983-02-01
The upper part of the Moenkopi Formation in the Northern Clark Mountains, southeastern California, contains conglomerate beds whose clasts comprise igneous, metamorphic, and sedimentary rocks. Metamorphic clasts include foliated granite, meta-arkose, and quartzite, probably derived from older Precambrian basement and younger Precambrian clastic rocks. Volcanic clasts are altered plagioclase-bearing rocks, and sedimentary clasts were derived from Paleozoic miogeoclinal rocks. Paleocurrent data indicate that the clasts had a source to the southwest. An age of late Early or early Middle Triassic has been tentatively assigned to these conglomerates. These conglomerates indicate that Late Permian to Early Triassic deformational events in this part of the orogen affected rocks much farther east than has been previously recognized.
Insomnia in probable migraine: a population-based study.
Kim, Jiyoung; Cho, Soo-Jin; Kim, Won-Joo; Yang, Kwang Ik; Yun, Chang-Ho; Chu, Min Kyung
2016-12-01
Insomnia is a common complaint among individuals with migraine. A close association between insomnia and migraine has been reported in clinic-based and population-based studies. Probable migraine (PM) is a migrainous headache that fulfills all but one criterion of the migraine diagnostic criteria; however, an association between insomnia and PM has rarely been reported. This study investigates the association between insomnia and PM, in comparison with migraine, using data from the Korean Headache-Sleep Study, a nationwide cross-sectional survey of Korean adults aged 19-69 years. The survey was performed via face-to-face interviews using a questionnaire on sleep and headache. An individual with an Insomnia Severity Index (ISI) score ≥15.5 was diagnosed as having insomnia. Among the 2695 participants, the prevalence of migraine, PM and insomnia was 5.3%, 14.1% and 3.6%, respectively. The prevalence of insomnia among subjects with PM was not significantly different from that among subjects with migraine (8.2% vs. 9.1%, p = 0.860). However, insomnia prevalence in subjects with PM was significantly higher than in non-headache controls (8.2% vs. 1.8%, p < 0.001). The ISI score was significantly higher in subjects with migraine than in those with PM (6.8 ± 5.8 vs. 5.5 ± 5.8, p = 0.012). Headache frequency and Headache Impact Test-6 scores were significantly higher in subjects with migraine and PM who had insomnia than in those without insomnia. Multivariable linear analyses showed that anxiety, depression, headache frequency and headache intensity were independent contributors to the ISI score among subjects with PM. In summary, the prevalence of insomnia among subjects with PM did not differ significantly from that among subjects with migraine, and anxiety, depression, headache frequency and headache intensity were associated with the ISI score in subjects with PM.
Quantum probability ranking principle for ligand-based virtual screening.
Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal
2017-04-01
Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. Virtual screening tools are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity, based upon the probability ranking principle (PRP). In this paper, we develop a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criterion draws an analogy between the physical experiment and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criterion in LBVS employs quantum concepts at three levels: first, at the representation level, the model develops a new framework of molecular representation by connecting molecular compounds with a mathematical quantum space; second, it estimates the similarity between chemical libraries and reference structures with a quantum-based similarity searching method; finally, it ranks the molecules using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.
Saliency region detection based on Markov absorption probabilities.
Sun, Jingang; Lu, Huchuan; Liu, Xiuping
2015-05-01
In this paper, we present a novel bottom-up salient object detection approach that exploits the relationship between saliency detection and Markov absorption probabilities. First, we calculate a preliminary saliency map from Markov absorption probabilities on a weighted graph, using partial image borders as background prior. Unlike most existing background-prior-based methods, which treat all image boundaries as background, we use only the left and top sides as background for simplicity. The saliency of each element is defined as the sum of its absorption probabilities at the several left and top virtual boundary nodes that are most similar to it. Second, a better result is obtained by ranking the relevance of image elements with foreground cues extracted from the preliminary saliency map, which effectively emphasizes objects against the background; this computation proceeds similarly to the first stage yet differs substantially from it. Finally, three optimization techniques (a content-based diffusion mechanism, a superpixelwise depression function, and a guided filter) are utilized to further refine the saliency map generated at the second stage, and are shown to be effective and complementary to each other. Both qualitative and quantitative evaluations on four publicly available benchmark data sets demonstrate the robustness and efficiency of the proposed method against 17 state-of-the-art methods.
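The absorption probabilities behind the first stage come from standard absorbing Markov chain theory: partitioning the transition matrix into a transient block Q and a transient-to-absorbing block R, the matrix of absorption probabilities is B = (I - Q)^{-1} R. A minimal sketch on a made-up 5-node chain (in the paper the graph weights come from superpixel appearance; these numbers are arbitrary):

```python
import numpy as np

# Toy absorbing Markov chain: 3 transient (inner) nodes, 2 absorbing
# (boundary) nodes.  Full transition matrix is partitioned as [[Q, R], [0, I]].
Q = np.array([[0.0, 0.5, 0.2],    # transient -> transient
              [0.3, 0.0, 0.3],
              [0.1, 0.4, 0.0]])
R = np.array([[0.2, 0.1],         # transient -> absorbing
              [0.1, 0.3],
              [0.3, 0.2]])

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix (I - Q)^-1
B = N @ R                         # B[i, j] = P(absorbed at boundary node j
                                  #             | chain starts at node i)
print(B.sum(axis=1))              # each row sums to 1: absorption is certain
```

In the saliency setting, a superpixel's score is then a sum of selected columns of its row of B (the boundary nodes most similar to it), rather than the full row sum shown here.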
Fault Diagnosis Method of Fault Indicator Based on Maximum Probability
NASA Astrophysics Data System (ADS)
Yin, Zili; Zhang, Wei
2017-05-01
In order to solve the problem of distribution network fault diagnosis when fault indicator information is misreported or missing, the characteristics of fault indicators are analyzed and the concept of the minimum fault judgment area of the distribution network is developed. On this basis, a mathematical model for fault-indicator-based fault diagnosis is established. The characteristics of fault indicator signals are analyzed and, based on a two-in-three principle, a probabilistic method for processing combined fault indicator signals is proposed. Combining the minimum fault judgment area model, the combined fault indicator signals, and the interdependence between fault indicators, a fault diagnosis method based on maximum probability is proposed. The method rests on the similarity between the simulated fault signal and the real fault signal, and a detailed formula is given. The method is fault-tolerant when fault indicator information is misreported or missing, and can determine the fault area more accurately. The probability of each candidate area is given and fault alternatives are provided. The proposed approach is feasible and valuable for dispatching and maintenance personnel dealing with faults.
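The maximum-probability selection step can be sketched as follows. The similarity measure here (fraction of matching indicator states) and the signal encodings are illustrative assumptions; the paper's detailed formula is not reproduced in the abstract:

```python
def diagnose(observed, simulated_by_area):
    """Pick the candidate fault area whose simulated indicator signal is
    most similar to the observed one, and report a probability per area
    (similarity normalized over all candidates)."""
    def sim(a, b):
        # hypothetical similarity: fraction of indicators in the same state
        return sum(x == y for x, y in zip(a, b)) / len(a)
    scores = {area: sim(observed, s) for area, s in simulated_by_area.items()}
    total = sum(scores.values())
    probs = {area: s / total for area, s in scores.items()}
    return max(probs, key=probs.get), probs

# 4 indicators; the observed signal carries one mis-reported bit.
observed = [1, 1, 0, 0]
simulated = {"area1": [1, 1, 1, 0],
             "area2": [1, 0, 0, 1],
             "area3": [0, 0, 0, 0]}
best, probs = diagnose(observed, simulated)
print(best)  # area1: the method tolerates the single bad indicator reading
```

Because every candidate area receives a probability, the dispatcher also gets ranked fault alternatives, as the abstract describes.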
A new search algorithm based on probability in intrusion detection
NASA Astrophysics Data System (ADS)
Sun, Jianhua; Jin, Hai; Han, Zongfen; Chen, Hao; Yang, Yanping
2004-04-01
Detection rate is vital to intrusion detection. We propose a new search algorithm based on probability to speed up the detection process in a novel compound intrusion detection system (CIDS). We employ an improved Bayesian decision theorem to build this compound model, which brings four benefits. The first is to eliminate the flaws of narrow definitions of normal patterns and intrusion patterns. The second is to extend known intrusion patterns to novel intrusion patterns. The third is to reduce the risk that intrusion detection itself brings to the system. The last is to offer a method to build a compound intrusion detection model that integrates a misuse intrusion detection system (MIDS) and an anomaly intrusion detection system (AIDS). During experiments with this model, we found that different system-call sequences have different probabilities, so the sequences with high probabilities are compared against an observed sequence first; this is the foundation of our new search algorithm. We evaluate the performance of the new algorithm with numerical results, which show that it increases the detection rate.
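The core idea of the search algorithm, comparing high-probability system-call sequences first so that matches are found early, can be sketched as follows (the sequences and counts are invented for illustration):

```python
from collections import Counter

def build_index(training_sequences):
    """Rank stored system-call sequences by empirical probability so the
    most frequent ones are compared first."""
    counts = Counter(training_sequences)
    total = sum(counts.values())
    return sorted(counts, key=lambda s: counts[s] / total, reverse=True)

def search(index, observed):
    """Return the number of comparisons until a hit, or None if unknown."""
    for tries, seq in enumerate(index, start=1):
        if seq == observed:
            return tries
    return None

index = build_index(["open-read-close"] * 5 + ["fork-exec"] * 2 + ["mmap-write"])
print(search(index, "open-read-close"))  # most common sequence: found on try 1
```

Since observed traffic is dominated by the same frequent sequences seen in training, the expected number of comparisons drops relative to an unordered scan.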
Rae, Caroline D.; Davidson, Joanne E.; Maher, Anthony D.; Rowlands, Benjamin D.; Kashem, Mohammed A.; Nasrallah, Fatima A.; Rallapalli, Sundari K.; Cook, James M; Balcar, Vladimir J.
2014-01-01
Ethanol is a known neuromodulatory agent with reported actions at a range of neurotransmitter receptors. Here, we used an indirect approach, measuring the effect of alcohol on metabolism of [3-13C]pyruvate in adult guinea pig brain cortical tissue slices and comparing the outcomes to those from a library of ligands active in the GABAergic system, as well as studying the metabolic fate of [1,2-13C]ethanol. Ethanol (10, 30 and 60 mM) significantly reduced metabolic flux into all measured isotopomers and reduced all metabolic pool sizes. The metabolic profiles of these three concentrations of ethanol were similar and clustered with that of the α4β3δ positive allosteric modulator DS2 (4-Chloro-N-[2-(2-thienyl)imidazo[1,2a]-pyridin-3-yl]benzamide). Ethanol at a very low concentration (0.1 mM) produced a metabolic profile which clustered with those from inhibitors of GABA uptake and ligands showing affinity for α5-containing, and to a lesser extent, α1-containing GABA(A)R. There was no measurable metabolism of [1,2-13C]ethanol, with no significant incorporation of 13C from [1,2-13C]ethanol into any measured metabolite above natural abundance, although there were measurable effects on total metabolite pool sizes similar to those seen with unlabeled ethanol. The reduction in metabolism seen in the presence of ethanol is therefore likely to be due to its actions at neurotransmitter receptors, particularly α4β3δ receptors, and not because ethanol is substituting as a substrate or because of the effects of the ethanol catabolites acetaldehyde or acetate. We suggest that the stimulatory effects of very low concentrations of ethanol are due to release of GABA via GAT1 and the subsequent interaction of this GABA with local α5-containing, and to a lesser extent, α1-containing GABA(A)R. PMID:24313287
Diffusion-based population statistics using tract probability maps.
Wassermann, Demian; Kanterakis, Efstathios; Gur, Ruben C; Deriche, Rachid; Verma, Ragini
2010-01-01
We present a novel technique for the tract-based statistical analysis of diffusion imaging data. In our technique, we represent each white matter (WM) tract as a tract probability map (TPM): a function mapping a point to its probability of belonging to the tract. We start by automatically clustering the tracts identified in the brain via tractography into TPMs using a novel Gaussian process framework. Then, each tract is modeled by the skeleton of its TPM, a medial representation with a tubular or sheet-like geometry. The appropriate geometry for each tract is implicitly inferred from the data instead of being selected a priori, as is done by current tract-specific approaches. The TPM representation makes it possible to average diffusion imaging based features along directions locally perpendicular to the skeleton of each WM tract, increasing the sensitivity and specificity of statistical analyses on the WM. Our framework therefore facilitates the automated analysis of WM tract bundles, and enables the quantification and visualization of tract-based statistical differences between groups. We have demonstrated the applicability of our framework by studying WM differences between 34 schizophrenia patients and 24 healthy controls.
QKD-based quantum private query without a failure probability
NASA Astrophysics Data System (ADS)
Liu, Bin; Gao, Fei; Huang, Wei; Wen, QiaoYan
2015-10-01
In this paper, we present a quantum-key-distribution (QKD)-based quantum private query (QPQ) protocol utilizing a single-photon signal of multiple optical pulses. It maintains the advantages of QKD-based QPQ, i.e., it is easy to implement and loss tolerant. In addition, unlike in previous QKD-based QPQ protocols, in our protocol the number of items an honest user will obtain is always one and the failure probability is always zero. This characteristic not only improves the stability (in the sense that, ignoring noise and attacks, the protocol always succeeds), but also benefits the privacy of the database (since the database no longer reveals additional secrets to honest users). Furthermore, for the user's privacy the proposed protocol is cheat sensitive, and for the security of the database we obtain a theoretical upper bound on the leaked information of the database.
Clover, Kerrie; Carter, Gregory Leigh; Mackinnon, Andrew; Adams, Catherine
2009-12-01
Screening oncology patients for clinically significant emotional distress is a recommended standard of care in psycho-oncology. However, principles regarding the interpretation of screening and diagnostic tests developed in other areas of medicine have not been widely applied in psycho-oncology. This paper explores the application of the concepts of likelihood ratios and post-test probabilities to the interpretation of psychological screening instruments and demonstrates the development of an algorithm for screening for emotional distress and common psychopathology. Three hundred forty oncology/haematology outpatients at the Calvary Mater Newcastle, Australia completed the Distress Thermometer (DT), the PSYCH-6 subscale of the Somatic and Psychological Health Report and the Kessler-10 scale. The Hospital Anxiety and Depression Scale (HADS) (cutoff 15+) was used as the gold standard. Likelihood ratios showed that a score over threshold on the DT was 2.77 times more likely in patients who were cases on the HADS. These patients had a 53% post-test probability of being cases on the HADS compared with the pretest probability of 29%. Adding either the PSYCH-6 (3+) or the Kessler-10 (22+) to the DT (4+) significantly increased this post-test probability to 94% and 92%, respectively. The significance of these improvements was confirmed by logistic regression analysis. This study demonstrated the application of probability statistics to develop an algorithm for screening for distress in oncology patients. In our sample, a two-stage screening algorithm improved appreciably on the performance of the DT alone to identify distressed patients. Sequential administration of a very brief instrument followed by selective use of a longer inventory may save time and increase acceptability.
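The arithmetic in the abstract can be reproduced directly: convert the pre-test probability to odds, multiply by the likelihood ratio, and convert back. Using the reported figures (29% pre-test probability, likelihood ratio 2.77 for the DT):

```python
def post_test_probability(pretest, lr):
    """Bayes via odds: post-test odds = pre-test odds x likelihood ratio."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

p = post_test_probability(0.29, 2.77)
print(round(p, 2))  # 0.53, matching the reported 53% post-test probability
```

The two-stage algorithm works by feeding this 53% back in as the pre-test probability for the second instrument, which is how the post-test probability climbs above 90%.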
Nahorniak, Matthew; Larsen, David P; Volk, Carol; Jordan, Chris E
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design based statistical analysis tools are appropriate for seamless integration of sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model based tools may require, for ensuring unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite numerous method specific tools available to properly account for sampling design, too often in the analysis of ecological data, sample design is ignored and consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition, to the set of tools available for researchers to properly account for sampling design in model based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model based analysis tools--linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
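The IPB idea can be sketched in a few lines: draw bootstrap resamples with replacement, weighting each unit by the inverse of its inclusion probability, so the resamples behave like equal-probability samples from the population. The stratified toy data below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def ipb_resample(data, incl_prob, rng):
    """One inverse-probability bootstrap draw: sample with replacement,
    with selection weights proportional to 1 / (inclusion probability)."""
    w = 1.0 / incl_prob
    idx = rng.choice(len(data), size=len(data), replace=True, p=w / w.sum())
    return data[idx]

# Toy stratified sample: stratum A oversampled (pi = 0.5), stratum B (pi = 0.1).
y = np.concatenate([np.full(50, 1.0), np.full(10, 2.0)])
pi = np.concatenate([np.full(50, 0.5), np.full(10, 0.1)])

means = [ipb_resample(y, pi, rng).mean() for _ in range(2000)]
print(round(float(np.mean(means)), 2))  # ~1.5, the population mean,
# not the raw (biased) sample mean of ~1.17
```

Any model-based estimator (regression, quantile regression, regression trees) can then be fit to each resample and its estimates averaged, without the estimator itself needing to know the design.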
Datz, F.L.; Bedont, R.A.; Taylor, A.
1985-05-01
Patients with a pleural effusion on chest x-ray often undergo a lung scan to exclude pulmonary embolism (PE). According to other studies, when the scan shows a perfusion defect equal in size to a radiographic abnormality on chest x-ray, the scan should be classified as indeterminate or intermediate probability for PE. However, since those studies dealt primarily with alveolar infiltrates rather than pleural effusions, the authors undertook a retrospective study to determine the probability of PE in patients with pleural effusion and a matching perfusion defect. The authors reviewed 451 scans and x-rays of patients studied for suspected PE. Of those, 53 had moderate or large perfusion defects secondary to pleural effusion, without other significant (>25% of a segment) defects on the scan. Final diagnosis was confirmed by pulmonary angiography (16), thoracentesis (40), venography (11), other radiographic and laboratory studies, and clinical course. Of the 53 patients, only 2 had venous thrombotic disease: one had PE on pulmonary angiography, the other had thrombophlebitis on venography. The remaining patients had effusions due to congestive heart failure (12), malignancy (12), infection (7), trauma (7), collagen vascular disease (7), sympathetic effusion (3) and unknown etiology (3). The authors conclude that lung scans with significant perfusion defects limited to matching pleural effusions on chest x-ray have a low probability for PE.
Gesture Recognition Based on the Probability Distribution of Arm Trajectories
NASA Astrophysics Data System (ADS)
Wan, Khairunizam; Sawada, Hideyuki
The use of human motions for interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures serve as a non-verbal medium through which humans communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified using a fuzzy technique. Experimental results show that the features extracted from arm trajectories work effectively for the recognition of dynamic human gestures, and give good performance in classifying various gesture patterns.
Boysen, Teide-Jens; Heuer, Claas; Tetens, Jens; Reinhardt, Fritz; Thaller, Georg
2013-01-01
The estimation of dominance effects requires the availability of direct phenotypes, i.e., genotypes and phenotypes in the same individuals. In dairy cattle, classical QTL mapping approaches are, however, relying on genotyped sires and daughter-based phenotypes like breeding values. Thus, dominance effects cannot be estimated. The number of dairy bulls genotyped for dense genome-wide marker panels is steadily increasing in the context of genomic selection schemes. The availability of genotyped cows is, however, limited. Within the current study, the genotypes of male ancestors were applied to the calculation of genotype probabilities in cows. Together with the cows’ phenotypes, these probabilities were used to estimate dominance effects on a genome-wide scale. The impact of sample size, the depth of pedigree used in deriving genotype probabilities, the linkage disequilibrium between QTL and marker, the fraction of variance explained by the QTL, and the degree of dominance on the power to detect dominance were analyzed in simulation studies. The effect of relatedness among animals on the specificity of detection was addressed. Furthermore, the approach was applied to a real data set comprising 470,000 Holstein cows. To account for relatedness between animals a mixed-model two-step approach was used to adjust phenotypes based on an additive genetic relationship matrix. Thereby, considerable dominance effects were identified for important milk production traits. The approach might serve as a powerful tool to dissect the genetic architecture of performance and functional traits in dairy cattle. PMID:23222654
Minimum Bayesian error probability-based gene subset selection.
Li, Jian; Yu, Tian; Wei, Jin-Mao
2015-01-01
Sifting functional genes is crucial to new strategies for drug discovery and prospective patient-tailored therapy. Generally, simply generating a gene subset by selecting the top k individually superior genes may yield an inferior gene combination, since some selected genes may be redundant with respect to others. In this paper, we propose to select the gene subset based on the criterion of minimum Bayesian error probability. The method dynamically evaluates all available genes and sifts only one gene at a time; a gene is selected if its combination with the already selected genes gains better classification information. Within the generated gene subset, each individual gene is the most discriminative one among those that classify cancers in the same way as it does, and the genes are more discriminative in combination than individually. Genes selected in this way are likely to be functional ones from a systems biology perspective, since genes tend to co-regulate rather than act individually. Experimental results show that classifiers induced with this method are capable of classifying cancers with high accuracy while involving only a small number of genes.
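The one-gene-at-a-time selection loop can be sketched as greedy forward selection against an error estimate. The toy error model below (a redundant pair that adds no information) is an assumption for illustration, not the paper's Bayesian error estimator:

```python
def greedy_select(genes, error_of_subset, k):
    """Greedy forward selection: at each step add the gene whose addition
    gives the lowest (estimated) error of the combined subset."""
    selected = []
    for _ in range(k):
        best = min((g for g in genes if g not in selected),
                   key=lambda g: error_of_subset(selected + [g]))
        selected.append(best)
    return selected

# Toy error model: each gene has an individual error; a redundant pair
# brings no improvement in combination.
base = {"g1": 0.30, "g2": 0.25, "g3": 0.28}
redundant = {("g2", "g3"), ("g3", "g2")}

def err(subset):
    e = min(base[g] for g in subset)
    if len(subset) == 2 and tuple(subset) not in redundant:
        e /= 2  # a complementary second gene halves the error
    return e

print(greedy_select(list(base), err, 2))  # -> ['g2', 'g1']: skips the
# individually better but redundant g3
```

This is exactly the top-k pitfall the abstract describes: ranked individually, g3 beats g1, but in combination with g2 it is redundant and g1 wins.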
Knapp, Sabine; Kumar, Shashi; Sakurada, Yuri; Shen, Jiajun
2011-05-01
This study uses econometric models to measure the effects of significant wave height and wind strength on the probability of casualty, and tests whether these effects have changed over time. While both effects are particularly relevant for stability and strength calculations of vessels, they are also helpful for the development of ship construction standards in general, to counteract increased risk resulting from changing oceanographic conditions. The authors analyzed a unique dataset of 3.2 million observations from 20,729 individual vessels in the North Atlantic and Arctic regions gathered during the period 1979-2007. The results show that although there is a seasonal pattern in the probability of casualty, especially during the winter months, the effects of wind strength and significant wave height do not follow the same seasonal pattern. Additionally, over time, significant wave height shows an increasing effect in January, March, May and October, while wind strength shows a decreasing effect, especially in January, March and May. The models can be used to simulate and thereby help understand these relationships, which is of particular interest to naval architects and ship designers as well as multilateral agencies such as the International Maritime Organization (IMO) that establish global standards in ship design and construction.
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.
Context Based Prior Probability Estimation of Object Appearance
NASA Astrophysics Data System (ADS)
Suzuyama, Yuki; Hotta, Kazuhiro; Takahashi, Haruhisa
This paper presents a method to estimate the prior probability of object appearance and position from context information alone. The context is extracted from the whole image by Gabor filters. The conventional method represented the context by a mixture of Gaussian distributions and estimated the prior probabilities of object appearance and position with a generative model. In contrast, we define the estimation of object appearance as a binary classification problem: whether or not an input image contains the specific object. A Support Vector Machine is used for the classification, and the distance from the hyperplane is transformed into a probability using a sigmoid function. We also cast the estimation of object position in an image from context alone as a regression problem, and estimate the position by Support Vector Regression. Experimental results show that the proposed method outperforms the conventional method.
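The distance-to-probability step described here is essentially Platt scaling: a sigmoid applied to the signed distance from the SVM hyperplane. The coefficients below are illustrative assumptions; in practice they are fitted on held-out data:

```python
import math

def platt_probability(distance, a=-1.5, b=0.0):
    """Map an SVM decision value (signed distance to the hyperplane) to a
    probability with a fitted sigmoid.  a and b are illustrative here; Platt
    scaling fits them by minimizing cross-entropy on validation data."""
    return 1.0 / (1.0 + math.exp(a * distance + b))

print(round(platt_probability(0.0), 2))  # 0.5 exactly on the hyperplane
print(round(platt_probability(2.0), 2))  # 0.95: deep in the positive region
```

A negative slope `a` makes the probability increase monotonically with the distance, so confident positive classifications map to probabilities near 1.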
Cucherat, Michel; Laporte, Silvy
2017-05-08
The use of statistical tests is central to the clinical trial. At the statistical level, obtaining P<0.05 allows one to claim the effectiveness of the new studied treatment. However, given its underlying mathematical logic, the concept of the "P value" is often misinterpreted. It is often mistaken for the probability that the treatment is ineffective. Actually, the "P value" gives only indirect information about the plausibility of the existence of a treatment effect. With "P<0.05", the probability that the treatment is effective varies depending on other statistical parameters: the alpha risk level, the power of the study and, especially, the a priori probability of the existence of a treatment effect. A "P<0.05" does not always produce the same degree of certainty. Thus, there exist situations where the risk that a "P<0.05" result is in reality a false positive is very high. This is the case if the power is low, if there is inflation of the alpha risk, or if the result is exploratory or a chance discovery. This possibility is important to take into consideration when interpreting the results of clinical trials, in order to avoid promoting results that are significant in appearance only but are likely to be false positives. Copyright © 2017 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.
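The dependence of the posterior probability on alpha, power, and the prior can be made explicit with Bayes' theorem. This sketch is our illustration (the function name and the example numbers are assumptions, not the paper's): it computes the probability that a treatment is truly effective given P < alpha.

```python
def prob_effective_given_significant(prior, power, alpha=0.05):
    # Bayes' theorem: P(effective | P < alpha)
    #   = power * prior / (power * prior + alpha * (1 - prior))
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Well-powered confirmatory trial with a plausible hypothesis:
print(round(prob_effective_given_significant(prior=0.5, power=0.8), 2))  # 0.94
# Underpowered exploratory trial with a long-shot hypothesis:
print(round(prob_effective_given_significant(prior=0.1, power=0.2), 2))  # 0.31
```

The same alpha threshold thus yields very different degrees of certainty depending on power and the prior, which is exactly the point the abstract makes.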
Web-based experiments controlled by JavaScript: an example from probability learning.
Birnbaum, Michael H; Wakcher, Sandra V
2002-05-01
JavaScript programs can be used to control Web experiments. This technique is illustrated by an experiment that tested the effects of advice on performance in the classic probability-learning paradigm. Previous research reported that people tested via the Web or in the lab tended to match the probabilities of their responses to the probabilities that those responses would be reinforced. The optimal strategy, however, is to consistently choose the more frequent event; probability matching produces suboptimal performance. We investigated manipulations we reasoned should improve performance. A horse race scenario in which participants predicted the winner in each of a series of races between two horses was compared with an abstract scenario used previously. Ten groups of learners received different amounts of advice, including all combinations of (1) explicit instructions concerning the optimal strategy, (2) explicit instructions concerning a monetary sum to maximize, and (3) accurate information concerning the probabilities of events. The results showed minimal effects of horse race versus abstract scenario. Both advice concerning the optimal strategy and probability information contributed significantly to performance in the task. This paper includes a brief tutorial on JavaScript, explaining with simple examples how to assemble a browser-based experiment.
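The suboptimality of probability matching is easy to quantify. In this sketch (ours, not the paper's code), the more frequent event occurs with probability p; matching earns p^2 + (1-p)^2 expected accuracy, while always choosing the frequent event earns p.

```python
def accuracy_matching(p):
    # Probability matching: predict the frequent event with probability p,
    # the rare event with probability 1 - p.
    return p * p + (1 - p) * (1 - p)

def accuracy_maximizing(p):
    # Optimal strategy: always predict the more frequent event.
    return p

p = 0.7
print(round(accuracy_matching(p), 2))    # 0.58
print(round(accuracy_maximizing(p), 2))  # 0.7
```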
Naive Probability: Model-based Estimates of Unique Events
2014-05-04
1. Introduction. Probabilistic thinking is ubiquitous in both numerate and innumerate cultures. Aristotle wrote: "A probability is a thing that happens for the most part" (Aristotle, Rhetoric, Book I, 1357a35; see Barnes, 1984). His account, as Franklin … (1984). The complete works of Aristotle. Princeton, NJ: Princeton University Press
Success Probability Analysis for Shuttle Based Microgravity Experiments
NASA Technical Reports Server (NTRS)
Liou, Ying-Hsin Andrew
1996-01-01
Presented in this report are the results of data analysis of shuttle-based microgravity flight experiments. Potential factors were identified in the previous grant period, and in this period 26 factors were selected for data analysis. In this project, the degree of success was developed and used as the performance measure. 293 of the 391 experiments in the Lewis Research Center Microgravity Database were assigned degrees of success. Frequency analysis and analysis of variance were conducted to determine the significance of the factors that affect experiment success.
Proficiency Scaling Based on Conditional Probability Functions for Attributes
1993-10-01
4.1 Non-parametric regression estimates as probability functions for attributes. Non-parametric estimation of the unknown density function f from a plot … as construction of confidence intervals for PFAs and further improvement of non-parametric estimation methods are not discussed in this paper. The … parametric estimation of PFAs will be illustrated with the attribute mastery patterns of SAT M Section 4. In the next section, analysis results will be
Probability-based stability robustness assessment of controlled structures
Field, R.V. Jr.; Voulgaris, P.G.; Bergman, L.A.
1996-01-01
Model uncertainty, if ignored, can seriously degrade the performance of an otherwise well-designed control system. If the level of this uncertainty is extreme, the system may even be driven to instability. In the context of structural control, performance degradation and instability imply excessive vibration or even structural failure. Robust control has typically been applied to the issue of model uncertainty through worst-case analyses. These traditional methods include the use of the structured singular value, as applied to the small gain condition, to provide estimates of controller robustness. However, this emphasis on the worst-case scenario has not allowed a probabilistic understanding of robust control. In this paper, an attempt to view controller robustness as a probability measure is presented. The probability of failure due to parametric uncertainty is estimated using first-order reliability methods (FORM). It is demonstrated that this method can provide quite accurate results on the probability of failure of actively controlled structures. Moreover, a comparison of this method to a suitably modified structured singular value robustness analysis in a probabilistic framework is performed. It is shown that FORM is the superior analysis technique when applied to a controlled three-degree-of-freedom structure. In addition, the robustness qualities of various active control design schemes such as LQR, H2, H-infinity, and mu-synthesis are discussed in order to provide some design guidelines.
Dangerous "spin": the probability myth of evidence-based prescribing - a Merleau-Pontyian approach.
Morstyn, Ron
2011-08-01
The aim of this study was to examine the logical positivist statistical probability statements used to support and justify "evidence-based" prescribing rules in psychiatry when viewed from the major philosophical theories of probability, and to propose "phenomenological probability", based on Maurice Merleau-Ponty's philosophy of "phenomenological positivism", as a better clinical and ethical basis for psychiatric prescribing. The logical positivist statistical probability statements currently used to support "evidence-based" prescribing rules in psychiatry have little clinical or ethical justification when subjected to critical analysis from any of the major theories of probability, and they represent dangerous "spin" because they necessarily exclude the individual, intersubjective and ambiguous meaning of mental illness. A concept of "phenomenological probability" founded on Merleau-Ponty's philosophy of "phenomenological positivism" overcomes the clinically destructive "objectivist" and "subjectivist" consequences of logical positivist statistical probability and allows psychopharmacological treatments to be appropriately integrated into psychiatric treatment.
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2016-06-14
Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and high variability in the populations of chemical species. One approach to accelerating the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by a negligible amount during a time interval. Species with small populations that take part in fast reactions significantly affect both the performance and the accuracy of this simulation approach, and the problem is worse when these small-population species are involved in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds and a rejection mechanism to select the next reaction firings. Each reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability; the selection becomes exact if that probability is set to one. Our new algorithm reduces the computational cost of selecting the next reaction firing and of updating the reaction propensities.
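The rejection-based selection step that the algorithm builds on can be sketched as follows. This is our simplified illustration under stated assumptions: prop_upper holds precomputed propensity upper bounds, and propensity(i) lazily evaluates the exact propensity of reaction i.

```python
import random

def select_reaction(prop_upper, propensity, rng=random.random):
    # Draw a candidate reaction from the categorical distribution defined by
    # the upper bounds, then accept it with probability exact/bound.
    total = sum(prop_upper)
    while True:
        r = rng() * total
        i, acc = 0, 0.0
        while acc + prop_upper[i] < r:
            acc += prop_upper[i]
            i += 1
        if rng() <= propensity(i) / prop_upper[i]:
            return i  # accepted firing; rejected candidates loop again
```

Because the exact propensity is evaluated only for the candidate, work is avoided for rejected reactions; setting the bounds equal to the exact propensities makes the selection exact, mirroring the exact/approximate trade-off described in the abstract.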
Storkel, Holly L.; Hoover, Jill R.
2010-01-01
An on-line calculator was developed (http://www.bncdnet.ku.edu/cml/info_ccc.vi) to compute phonotactic probability, the likelihood of occurrence of a sound sequence, and neighborhood density, the number of phonologically similar words, based on child corpora of American English (Kolson, 1960; Moe, Hopkins, & Rush, 1982) and compared to an adult calculator. Phonotactic probability and neighborhood density were computed for a set of 380 nouns (Fenson et al., 1993) using both the child and adult corpora. Child and adult raw values were significantly correlated. However, significant differences were detected. Specifically, child phonotactic probability was higher than adult phonotactic probability, especially for high probability words; and child neighborhood density was lower than adult neighborhood density, especially for high density words. These differences were reduced or eliminated when relative measures (i.e., z scores) were used. Suggestions are offered regarding which values to use in future research. PMID:20479181
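Neighborhood density, as used above, is a simple count. This toy sketch is ours (real calculators operate on phoneme transcriptions; here one character stands in for one phoneme): it counts lexicon entries exactly one substitution, addition, or deletion away.

```python
def neighborhood_density(word, lexicon):
    # Count words differing from `word` by exactly one phoneme
    # (substitution, addition, or deletion).
    def one_edit(a, b):
        if a == b or abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = sorted((a, b), key=len)
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
    return sum(one_edit(word, w) for w in lexicon)

lexicon = ["kat", "bat", "kab", "at", "kats", "dog"]
print(neighborhood_density("kat", lexicon))  # 4
```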
Reliability-Based Design Optimization Using Buffered Failure Probability
2010-06-01
missile. One component of the missile's launcher is an optical system. Suppose that two different optical systems, 1 and 2, are available for … ed.). Belmont, MA: Athena Scientific. Bichon, B. J., Mahadevan, S., & Eldred, M. S. (May 4-7, 2009). Reliability-based design optimization using
Yin, Ziming; Dong, Zhao; Lu, Xudong; Yu, Shengyuan; Chen, Xiaoyan; Duan, Huilong
2015-04-01
The overlap between probable migraine (PM) and probable tension-type headache (PTTH) often confuses physicians in clinical practice. Although clinical decision support systems (CDSSs) have been proven helpful in the diagnosis of primary headaches, existing guideline-based headache disorder CDSSs do not perform adequately because of this overlap. Thus, in this study, a CDSS based on case-based reasoning (CBR) was developed to solve this problem. First, a case library consisting of 676 clinical cases, 56.95% of which had been diagnosed with PM and 43.05% with PTTH, was constructed, screened by a three-member panel, and weighted by engineers. Next, the resulting case library was used to diagnose current cases based on their similarities to the previous cases. The test dataset was composed of an additional 222 historical cases, 76.1% of which had been diagnosed with PM and 23.9% with PTTH. The cases that comprised the case library as well as the test dataset were actual clinical cases obtained from the International Headache Center at the Chinese PLA General Hospital. The results indicated that the PM and PTTH recall rates were 97.02% and 77.78%, which were 34.31% and 16.91% higher than those of the guideline-based CDSS, respectively. Furthermore, the PM and PTTH precision rates were 93.14% and 89.36%, which were 7.09% and 15.68% higher than those of the guideline-based CDSS, respectively. Comparing the CBR CDSS and the guideline-based CDSS, the p-value for PM diagnoses was 0.019, while that for PTTH diagnoses was 0.002, indicating a significant difference between the two approaches. The experimental results indicated that the CBR CDSS developed in this study diagnosed PM and PTTH with a high degree of accuracy and performed better than the guideline-based CDSS. This system could be used as a diagnostic tool to assist general practitioners in diagnosing PM and PTTH.
Bureau, Alexandre; Younkin, Samuel G.; Parker, Margaret M.; Bailey-Wilson, Joan E.; Marazita, Mary L.; Murray, Jeffrey C.; Mangold, Elisabeth; Albacha-Hejazi, Hasan; Beaty, Terri H.; Ruczinski, Ingo
2014-01-01
Motivation: Family-based designs are regaining popularity for genomic sequencing studies because they provide a way to test cosegregation with disease of variants that are too rare in the population to be tested individually in a conventional case–control study. Results: Where only a few affected subjects per family are sequenced, the probability that any variant would be shared by all affected relatives—given it occurred in any one family member—provides evidence against the null hypothesis of a complete absence of linkage and association. A P-value can be obtained as the sum of the probabilities of sharing events as (or more) extreme in one or more families. We generalize an existing closed-form expression for exact sharing probabilities to more than two relatives per family. When pedigree founders are related, we show that an approximation of sharing probabilities based on empirical estimates of kinship among founders obtained from genome-wide marker data is accurate for low levels of kinship. We also propose a more generally applicable approach based on Monte Carlo simulations. We applied this method to a study of 55 multiplex families with apparent non-syndromic forms of oral clefts from four distinct populations, with whole exome sequences available for two or three affected members per family. The rare single nucleotide variant rs149253049 in ADAMTS9 shared by affected relatives in three Indian families achieved significance after correcting for multiple comparisons (p = 2 × 10⁻⁶). Availability and implementation: Source code and binaries of the R package RVsharing are freely available for download at http://cran.r-project.org/web/packages/RVsharing/index.html. Contact: alexandre.bureau@msp.ulaval.ca or ingo@jhu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24740360
Batch Mode Active Sampling based on Marginal Probability Distribution Matching
Chattopadhyay, Rita; Wang, Zheng; Fan, Wei; Davidson, Ian; Panchanathan, Sethuraman; Ye, Jieping
2013-01-01
Active Learning is a machine learning and data mining technique that selects the most informative samples for labeling and uses them as training data; it is especially useful when there are large amount of unlabeled data and labeling them is expensive. Recently, batch-mode active learning, where a set of samples are selected concurrently for labeling, based on their collective merit, has attracted a lot of attention. The objective of batch-mode active learning is to select a set of informative samples so that a classifier learned on these samples has good generalization performance on the unlabeled data. Most of the existing batch-mode active learning methodologies try to achieve this by selecting samples based on varied criteria. In this paper we propose a novel criterion which achieves good generalization performance of a classifier by specifically selecting a set of query samples that minimizes the difference in distribution between the labeled and the unlabeled data, after annotation. We explicitly measure this difference based on all candidate subsets of the unlabeled data and select the best subset. The proposed objective is an NP-hard integer programming optimization problem. We provide two optimization techniques to solve this problem. In the first one, the problem is transformed into a convex quadratic programming problem and in the second method the problem is transformed into a linear programming problem. Our empirical studies using publicly available UCI datasets and a biomedical image dataset demonstrate the effectiveness of the proposed approach in comparison with the state-of-the-art batch-mode active learning methods. We also present two extensions of the proposed approach, which incorporate uncertainty of the predicted labels of the unlabeled data and transfer learning in the proposed formulation. Our empirical studies on UCI datasets show that incorporation of uncertainty information improves performance at later iterations while our studies on 20
METAPHOR: Probability density estimation for machine learning based photometric redshifts
NASA Astrophysics Data System (ADS)
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but offering the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
Surface characterization based upon significant topographic features
NASA Astrophysics Data System (ADS)
Blanc, J.; Grime, D.; Blateyron, F.
2011-08-01
Watershed segmentation and Wolf pruning, as defined in ISO 25178-2, allow the detection of significant features on surfaces and their characterization in terms of dimension, area, volume, curvature, shape or morphology. These new tools provide a robust way to specify functional surfaces.
[Base excess. Parameter with exceptional clinical significance].
Schaffartzik, W
2007-05-01
The base excess of blood (BE) plays an important role in describing the acid-base status of a patient and is gaining clinical interest. Apart from the Quick test, age, the injury severity score and the Glasgow coma scale, the BE is becoming more and more important for identifying, e.g., the risk of mortality for patients with multiple injuries. According to Zander, the BE is calculated using the pH, pCO2, haemoglobin concentration and the oxygen saturation of haemoglobin (sO2). The use of sO2 allows the blood gas analyser to determine only one value of BE, independent of the type of blood sample analyzed: arterial, mixed venous or venous. The BE and the measured lactate concentration (cLac) play an important role in diagnosing critically ill patients. In general, the change in BE corresponds to the change in cLac. If ΔBE is smaller than ΔcLac, the reason could be therapy with HCO3(-) but also infusion solutions containing lactate. Physicians are very familiar with the term BE; therefore, knowledge of the alkalizing or acidifying effect of an infusion solution would be very helpful in the treatment of patients, especially critically ill patients. Unfortunately, at present the characterization of infusion solutions with respect to BE has not yet been accepted by the manufacturers.
Probability method for Cerenkov luminescence tomography based on conformance error minimization
Ding, Xintao; Wang, Kun; Jie, Biao; Luo, Yonglong; Hu, Zhenhua; Tian, Jie
2014-01-01
Cerenkov luminescence tomography (CLT) was developed to reconstruct a three-dimensional (3D) distribution of radioactive probes inside a living animal. Reconstruction methods are generally performed within a unique framework by searching for the optimum solution. However, the ill-posed nature of the inverse problem usually makes the reconstruction non-robust. In addition, the reconstructed result may not match reality, since the difference between the highest and lowest uptakes of the resulting radiotracers may be considerable, and the biological significance is then lost. In this paper, based on the minimization of a conformance error, a probability method is proposed that consists of qualitative and quantitative modules. The proposed method first pinpoints the organ that contains the light source. Next, we developed a 0-1 linear optimization subject to a space constraint to model the CLT inverse problem, which was transformed into a forward problem by employing a region growing method to solve the optimization. After running through all of the elements used to grow the sources, a source sequence was obtained. Finally, the probability of each discrete node being the light source inside the organ was reconstructed. One numerical study and two in vivo experiments were conducted to verify the performance of the proposed algorithm, and comparisons were carried out using the hp-finite element method (hp-FEM). The results suggested that our proposed probability method is more robust and reasonable than hp-FEM. PMID:25071951
NASA Astrophysics Data System (ADS)
Appourchaux, T.; Samadi, R.; Dupret, M.-A.
2009-10-01
Context: The CoRoT mission provides asteroseismic data of very high quality, allowing one to adopt new statistical approaches for mode detection in power spectra, especially with respect to testing the null hypothesis (H0, which assumes that what is observed is pure noise). Aims: We emphasize that the significance level used when rejecting the null hypothesis can lead to the incorrect conclusion that the H0 hypothesis is unlikely to occur at that significance level. We demonstrate that the significance level is unrelated to the posterior probability of H0, given the observed data set, and that this posterior probability is very much higher than the significance level implies. Methods: We use Bayes' theorem to derive the posterior probability that H0 is true, assuming an alternative hypothesis H1 that a mode is present and taking priors for the mode height, mode amplitude and linewidth. Results: We compute the posterior probability of H0 for the p modes detected on HD 49933 by CoRoT. Conclusions: We conclude that the posterior probability of H0 provides a much more conservative quantification of the mode detection than the significance level. This framework can be applied to any similar stellar power spectra obtained for asteroseismology. The CoRoT space mission, launched on 2006 December 27, was developed and is operated by the CNES, with participation of the Science Programs of ESA, ESA's RSSD, Austria, Belgium, Brazil, Germany and Spain.
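The gap between a significance level and the posterior probability of H0 can be sketched directly from Bayes' theorem. The likelihoods and priors below are illustrative assumptions, not values from the paper.

```python
def posterior_h0(like_h0, like_h1, prior_h0=0.5):
    # Bayes' theorem: P(H0 | data) from the data likelihoods under H0 (noise)
    # and H1 (mode present); priors and likelihoods here are illustrative.
    num = like_h0 * prior_h0
    return num / (num + like_h1 * (1 - prior_h0))

# A spectral bin at the 1% significance level under H0 still leaves H0
# probable when the alternative explains the data only 4x better:
print(round(posterior_h0(like_h0=0.01, like_h1=0.04), 2))  # 0.2
```

A 1% significance level here coexists with a 20% posterior probability of pure noise, which is the conservative point the abstract argues for.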
Token swap test of significance for serial medical data bases.
Moore, G W; Hutchins, G M; Miller, R E
1986-02-01
Established tests of statistical significance are based upon the concept that observed data are drawn randomly from a larger, perhaps infinite source population. The significance value, p, is the probability that the observations are drawn from a source population satisfying the null hypothesis; if p is small enough (less than 5 percent, 1 percent, etc.), then the null hypothesis is rejected. Serial medical data bases, such as a hospital clinic intake or autopsy case accessions, often do not have an identifiable source population from which they are randomly drawn. In an effort to make a reasonable interpretation of these less-than-ideal data, this report introduces a "token swap" test of significance, in which the usual paradigm of repeated drawing from a source population is replaced by a paradigm of misclassification within the observed data themselves. The token swap test consists of rearranging the data into a balanced distribution and determining the disparity between the observed and the balanced distribution of the data. In a two-by-two contingency table, patients are represented as "tokens" distributed into four "cells." Significance is determined by the proportion of "token swaps" that are able to transform the balanced table into the observed table. The token swap test was applied to three series of autopsy observations and gave results roughly comparable to the corresponding (two-tail) chi-square and one-tail Fisher exact tests. The token swap test of significance may be a useful alternative to classic statistical tests when the limiting assumptions of a retrospective, serial medical data base are present.
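One possible reading of the balanced-table construction can be sketched as follows; the exact balancing and swap-counting rules here are our assumptions for illustration, not necessarily those of the original test.

```python
def balanced_table(observed):
    # Redistribute the same number of tokens as evenly as possible
    # over the four cells of a 2x2 table.
    n = sum(sum(row) for row in observed)
    q, r = divmod(n, 4)
    cells = [q + (1 if i < r else 0) for i in range(4)]
    return [cells[:2], cells[2:]]

def swaps_from_balance(observed):
    # Single-token moves needed to turn the balanced table into the
    # observed one: half the total absolute cell-by-cell disparity.
    bal = balanced_table(observed)
    disparity = sum(abs(o - b)
                    for ro, rb in zip(observed, bal)
                    for o, b in zip(ro, rb))
    return disparity // 2

print(swaps_from_balance([[8, 2], [2, 8]]))  # 6 moves from [[5, 5], [5, 5]]
```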
ERIC Educational Resources Information Center
Koparan, Timur; Yilmaz, Gül Kaleli
2015-01-01
The effect of simulation-based probability teaching on prospective teachers' inference skills was examined in this research. In line with this purpose, the aim was to examine the design, implementation and efficiency of a learning environment for experimental probability. Activities were built on modeling, simulation and the…
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Summary: Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging, and determined various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment gave excellent insight into how the uncertainty of parameters, due to limited observation data, propagates. To examine the developed model's capacity to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N exceeding 0.5 mg/L, an indicator of anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizes parameter uncertainty via the probability estimation processes.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
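For a linear limit state in standard normal space, the MPP minimum distance and the FORM probability estimate have closed forms; a minimal sketch of ours, not the report's code:

```python
import math

def form_estimate(a, b):
    # Limit state g(u) = a . u + b = 0 in standard normal u-space.
    # The MPP minimum distance is beta = |b| / ||a||; FORM gives Pf ~ Phi(-beta).
    beta = abs(b) / math.sqrt(sum(x * x for x in a))
    pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # standard normal CDF
    return beta, pf

beta, pf = form_estimate(a=[3.0, 4.0], b=15.0)
print(beta)          # 3.0
print(round(pf, 5))  # ~0.00135
```

For nonlinear limit states, beta is found by the optimization search the abstract describes, and the derivatives of that optimal solution supply the reliability sensitivities.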
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
Probability of foliar injury for Acer sp. based on foliar fluoride concentrations.
McDonough, Andrew M; Dixon, Murray J; Terry, Debbie T; Todd, Aaron K; Luciani, Michael A; Williamson, Michele L; Roszak, Danuta S; Farias, Kim A
2016-12-01
Fluoride is considered one of the most phytotoxic elements to plants, and indicative fluoride injury has been associated with a wide range of foliar fluoride concentrations. The aim of this study was to determine the probability of indicative foliar fluoride injury based on Acer sp. foliar fluoride concentrations using a logistic regression model. Foliage from Acer negundo, Acer saccharinum, Acer saccharum and Acer platanoides was collected along a distance gradient from three separate brick manufacturing facilities in southern Ontario as part of a long-term monitoring programme between 1995 and 2014. Hydrogen fluoride is the major emission associated with the manufacturing facilities, resulting in highly elevated foliar fluoride close to the facilities that decreases with distance. Consistent with other studies, indicative fluoride injury was observed over a wide range of foliar concentrations (9.9-480.0 μg F(-) g(-1)). The logistic regression model was statistically significant for the Acer sp. group, A. negundo and A. saccharinum, with A. negundo being the most sensitive species in the group; A. saccharum and A. platanoides were not statistically significant within the model. We are unaware of published foliar fluoride values for Acer sp. within Canada, and this research provides policy makers and scientists with probabilities of indicative foliar injury for common urban Acer sp. trees that can help guide decisions about emissions controls. Further research should focus on the mechanisms driving indicative fluoride injury over wide-ranging foliar fluoride concentrations and help determine foliar fluoride thresholds for damage.
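A logistic model of the kind described maps foliar fluoride to an injury probability. This sketch uses placeholder coefficients: b0 and b1 are illustrative assumptions, not the fitted values from the study.

```python
import math

def injury_probability(foliar_f, b0=-3.0, b1=0.02):
    # P(indicative injury | foliar fluoride, ug F/g) under a logistic model;
    # b0, b1 are hypothetical coefficients for illustration only.
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * foliar_f)))

for f in (10, 150, 480):  # roughly the span of the observed injury range
    print(f, round(injury_probability(f), 3))
```

With coefficients like these, injury probability rises smoothly across the observed 9.9-480 μg F/g range rather than switching at a single threshold, which matches the wide range over which injury was observed.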
NASA Astrophysics Data System (ADS)
Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi
2017-08-01
Probabilistic thinking is very important in human life, especially in responding to situations that may occur or that contain elements of uncertainty. It is necessary to develop students' probabilistic thinking as early as elementary school by teaching probability. Based on the mathematics curriculum in Indonesia, probability is first introduced to ninth grade students. However, some research has shown that lower-grade students, even preschoolers, can successfully solve probability tasks. This study aims to explore the probabilistic thinking of elementary school students of high and low math ability in solving probability tasks. A qualitative approach was chosen to describe students' probabilistic thinking in depth. The results showed that high and low math ability students differed in responding to 1- and 2-dimensional sample space tasks and to probability comparison tasks involving drawing markers and contextual settings. The representations used by high and low math ability students also differed in responding to the contextual probability-of-an-event task and the probability comparison task with a rotating spinner. This study serves as a reference for mathematics curriculum developers of elementary schools in Indonesia, in particular for introducing probability material and teaching probability through spinners as learning media.
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, resistance to trapping in false minima, and long-term optimization.
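The core PC idea described above (a product distribution over variables, updated by Boltzmann reweighting of estimated costs) can be sketched in a few lines. This is a toy rendering under simplifying assumptions, not the authors' exact update rule; the update schedule and temperature handling are placeholders.

```python
import math
import random

random.seed(0)

def pc_minimize(cost, domains, iters=200, samples=40, temp=1.0):
    """Toy Probability Collectives loop: keep an independent (product)
    distribution over each discrete variable, and reweight each sampled
    value by the Boltzmann factor of its estimated conditional cost."""
    dists = [{v: 1.0 / len(d) for v in d} for d in domains]
    for _ in range(iters):
        draws = [[random.choices(list(p), weights=list(p.values()))[0]
                  for p in dists] for _ in range(samples)]
        costs = [cost(x) for x in draws]
        for i, p in enumerate(dists):
            est = {}  # batch estimate of E[cost | x_i = v]
            for x, c in zip(draws, costs):
                est.setdefault(x[i], []).append(c)
            new = dict(p)  # values not sampled keep their old weight
            for v, cs in est.items():
                new[v] = p[v] * math.exp(-(sum(cs) / len(cs)) / temp)
            z = sum(new.values())
            dists[i] = {v: w / z for v, w in new.items()}
    return [max(p, key=p.get) for p in dists]

# minimize (x - 2)^2 + (y + 1)^2 over the integer grid [-5, 5]^2
best = pc_minimize(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2,
                   domains=[range(-5, 6), range(-5, 6)])
```

Each per-variable distribution plays the role of one "agent"; no population of candidate solutions is ever maintained, only the parameters of p.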
Reducing the Probability of Incidents Through Behavior-Based Safety -- An Anomaly or Not?
Turek, John A
2002-07-23
Reducing the probability of incidents through Behavior-Based Safety-an anomaly or not? Can a Behavior-Based Safety (BBS) process reduce the probability of an employee sustaining a work-related injury or illness? This presentation describes the actions taken to implement a sustainable BBS process and evaluates its effectiveness. The BBS process at the Stanford Linear Accelerator Center used a pilot population of national laboratory employees to: Achieve employee and management support; Reduce the probability of employees' sustaining work-related injuries and illnesses; and Provide support for additional funding to expand within the laboratory.
Li, Shuying; Zhuang, Jun; Shen, Shifei
2017-07-01
In recent years, various types of terrorist attacks have occurred, causing worldwide catastrophes. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happened most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBAs) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011. During that time, the CPBA increased by 16.0% in a two-month period to reach the peak value, but remains 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation can be found in the conditional probability process of armed assault. It can be inferred that social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014. © 2016 Society for Risk Analysis.
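The conditional probability in question is simply the share of all attacks in a period that were bombings. A minimal sketch with made-up counts (not GTD figures) and a naive moving-average stand-in for the paper's time-series model:

```python
def conditional_probability(bombings, all_attacks):
    """Monthly conditional probability of a bombing attack: the share
    of all recorded attacks in each month that were bombings.
    The counts passed in below are illustrative, not GTD data."""
    return [b / t if t else 0.0 for b, t in zip(bombings, all_attacks)]

def moving_average_forecast(series, window=3):
    """Naive moving-average forecast, a toy stand-in for the paper's
    integrated time-series and intervention model."""
    return sum(series[-window:]) / window

cpba = conditional_probability([40, 55, 60], [100, 110, 100])
next_month = moving_average_forecast(cpba)
```

A real analysis would fit an ARIMA-type model with an intervention term to this series rather than averaging it, but the input quantity being modeled is the same.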
Bag of Events: An Efficient Probability-Based Feature Extraction Method for AER Image Sensors.
Peng, Xi; Zhao, Bo; Yan, Rui; Tang, Huajin; Yi, Zhang
2016-03-18
Address event representation (AER) image sensors represent the visual information as a sequence of events that denotes the luminance changes of the scene. In this paper, we introduce a feature extraction method for AER image sensors based on probability theory, namely, bag of events (BOE). The proposed approach represents each object as the joint probability distribution of the concurrent events, where each event corresponds to a unique activated pixel of the AER sensor. The advantages of BOE include: 1) it is a statistical learning method with good mathematical interpretability; 2) BOE can significantly reduce the effort to tune parameters for different data sets, because it has only one hyperparameter and is robust to the value of that parameter; 3) BOE is an online learning algorithm, which does not require the training data to be collected in advance; 4) BOE can achieve competitive results in real time for feature extraction (>275 frames/s and >120,000 events/s); and 5) the implementation complexity of BOE involves only basic operations, e.g., addition and multiplication. This guarantees the hardware friendliness of our method. The experimental results on three popular AER databases (i.e., MNIST-dynamic vision sensor, Poker Card, and Posture) show that our method is remarkably faster than two recently proposed AER categorization systems while preserving a good classification accuracy.
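At its simplest, a BOE-style representation normalizes event counts per pixel address into an empirical probability distribution and compares distributions. A minimal sketch, assuming events arrive as (address, timestamp) pairs; the nearest-prototype classifier is a placeholder, not the paper's categorization system:

```python
from collections import Counter

def bag_of_events(events, n_pixels):
    """Represent an AER recording as the empirical probability
    distribution of events over pixel addresses (a normalized
    histogram). `events` is a list of (address, timestamp) pairs."""
    counts = Counter(addr for addr, _timestamp in events)
    total = sum(counts.values())
    return [counts[i] / total for i in range(n_pixels)]

def classify(feature, prototypes):
    """Assign the label of the nearest prototype distribution (L1)."""
    def l1(p, q):
        return sum(abs(a - b) for a, b in zip(p, q))
    return min(prototypes, key=lambda lbl: l1(feature, prototypes[lbl]))

events = [(0, 10), (0, 11), (1, 12), (2, 13)]   # (pixel, timestamp)
feature = bag_of_events(events, n_pixels=3)     # [0.5, 0.25, 0.25]
label = classify(feature, {"a": [0.5, 0.25, 0.25],
                           "b": [0.1, 0.1, 0.8]})
```

Because the feature is an incrementally updatable count histogram, it can be maintained online as events stream in, which matches the online-learning property claimed in the abstract.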
Zitnick-Anderson, Kimberly K; Norland, Jack E; Del Río Mendoza, Luis E; Fortuna, Ann-Marie; Nelson, Berlin D
2017-04-06
Associations between soil properties and Pythium groups on soybean roots were investigated in 83 commercial soybean fields in North Dakota. A data set containing 2877 isolates of Pythium, comprising 26 known and 1 unknown species, and 13 soil properties from each field were analyzed. A Pearson correlation analysis was performed with all soil properties to identify significant correlations among properties. Hierarchical clustering, indicator species analysis, and multi-response permutation procedures were used to identify groups of Pythium. Logistic regression analysis using stepwise selection was employed to calculate probability models for the presence of groups based on soil properties. Three major Pythium groups were identified, and three soil properties were associated with these groups. Group 1, characterized by P. ultimum, was associated with zinc levels; as zinc increased, the probability of group 1 being present increased (α = 0.05). Pythium group 2, characterized by Pythium kashmirense and an unknown Pythium sp., was associated with cation exchange capacity (CEC) (α < 0.05); as CEC increased, these species increased. Group 3, characterized by Pythium heterothallicum and Pythium irregulare, was associated with CEC and calcium carbonate exchange (CCE); as CCE increased and CEC decreased, these species increased (α = 0.05). The regression models may have value in predicting pathogenic Pythium spp. in soybean fields in North Dakota and adjacent states.
Martinelli, Marcella; Parra, Alessandro; Scapoli, Luca; Sanctis, Paola De; Chiadini, Valentina; Hattinger, Claudia; Picci, Piero
2016-01-01
Ewing sarcoma (EWS), the second most common primary bone tumor in pediatric age, is known for its paucity of recurrent somatic abnormalities. Apart from the chimeric oncoprotein that derives from the fusion of EWS and FLI genes, recent genome-wide association studies have identified susceptibility variants near the EGR2 gene that regulate DNA binding of EWS-FLI. However, to induce transformation, EWS-FLI requires the presence of additional molecular events, including the expression of CD99, a cell surface molecule with critical relevance for the pathogenesis of EWS. High expression of CD99 is a common and distinctive feature of EWS cells, and it has largely been used for the differential diagnosis of the disease. The present study first links CD99 germline genetic variants to the susceptibility of EWS development and its progression. In particular, a panel of 25 single nucleotide polymorphisms has been genotyped in a case-control study. The CD99 rs311059 T variant was found to be significantly associated [P value = 0.0029; ORhet = 3.9 (95% CI 1.5-9.8) and ORhom = 5.3 (95% CI 1.2-23.7)] with EWS onset in patients less than 14 years old, while the CD99 rs312257-T was observed to be associated [P value = 0.0265; ORhet = 3.5 (95% CI 1.3-9.9)] with a reduced risk of relapse. Besides confirming the importance of CD99, our findings indicate that polymorphic variations in this gene may affect either development or progression of EWS, leading to further understanding of this cancer and development of better diagnostics/prognostics for children and adolescents with this devastating disease. PMID:27792997
Studying the effects of fuel treatment based on burn probability on a boreal forest landscape.
Liu, Zhihua; Yang, Jian; He, Hong S
2013-01-30
Fuel treatment is assumed to be a primary tactic to mitigate intense and damaging wildfires. However, how to place treatment units across a landscape and assess its effectiveness is difficult for landscape-scale fuel management planning. In this study, we used a spatially explicit simulation model (LANDIS) to conduct wildfire risk assessments and optimize the placement of fuel treatments at the landscape scale. We first calculated a baseline burn probability map from empirical data (fuel, topography, weather, and fire ignition and size data) to assess fire risk. We then prioritized landscape-scale fuel treatment based on maps of burn probability and fuel loads (calculated from the interactions among tree composition, stand age, and disturbance history), and compared their effects on reducing fire risk. The burn probability map described the likelihood of burning on a given location; the fuel load map described the probability that a high fuel load will accumulate on a given location. Fuel treatment based on the burn probability map specified that stands with high burn probability be treated first, while fuel treatment based on the fuel load map specified that stands with high fuel loads be treated first. Our results indicated that fuel treatment based on burn probability greatly reduced the burned area and number of fires of different intensities. Fuel treatment based on burn probability also produced more dispersed and smaller high-risk fire patches and therefore can improve efficiency of subsequent fire suppression. The strength of our approach is that more model components (e.g., succession, fuel, and harvest) can be linked into LANDIS to map the spatially explicit wildfire risk and its dynamics to fuel management, vegetation dynamics, and harvesting.
Evaluation of gene importance in microarray data based upon probability of selection
Fu, Li M; Fu-Liu, Casey S
2005-01-01
Background Microarray devices permit a genome-scale evaluation of gene function. This technology has catalyzed biomedical research and development in recent years. As many important diseases can be traced down to the gene level, a long-standing research problem is to identify specific gene expression patterns linking to metabolic characteristics that contribute to disease development and progression. The microarray approach offers an expedited solution to this problem. However, recognizing disease-related gene expression patterns embedded in microarray data remains a challenging issue. In selecting a small set of biologically significant genes for classifier design, the high data dimensionality inherent in this problem creates a substantial amount of uncertainty. Results Here we present a model for probability analysis of selected genes in order to determine their importance. Our contribution is that we show how to derive the P value of each selected gene in multiple gene selection trials based on different combinations of data samples and how to conduct a reliability analysis accordingly. The importance of a gene is indicated by its associated P value, in that a smaller value implies higher information content in the information-theoretic sense. On microarray data concerning the subtype classification of small round blue cell tumors, we demonstrate that the method is capable of finding the smallest set of genes (19 genes) with optimal classification performance, compared with results reported in the literature. Conclusion In classifier design based on microarray data, the probability value derived from gene selection based on multiple combinations of data samples enables an effective mechanism for reducing the tendency of fitting local data particularities. PMID:15784140
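The selection-probability idea can be illustrated with a binomial tail: if a gene is picked in k of n independent selection trials, a P value against chance selection follows from the binomial distribution. This is a sketch of the general idea, not the paper's exact derivation; the gene-pool numbers are hypothetical.

```python
from math import comb

def selection_p_value(k, n_trials, p_null):
    """Binomial upper-tail P value for a gene selected in k of
    n_trials independent gene-selection runs, against the null that
    it is picked with probability p_null purely by chance."""
    return sum(comb(n_trials, i)
               * p_null ** i * (1 - p_null) ** (n_trials - i)
               for i in range(k, n_trials + 1))

# e.g. 50 genes drawn per trial from a pool of 2000 -> p_null = 0.025
p = selection_p_value(k=8, n_trials=10, p_null=50 / 2000)
```

A gene repeatedly selected across resampled data combinations gets a very small P value, flagging it as informative rather than an artifact of one particular data split.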
Kausar, A S M Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan
2014-01-01
Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving optimum coverage, which is a key part of designing a wireless network. In this paper, an accelerated technique of three-dimensional ray tracing is presented, in which rough surface scattering is included to make the ray tracing technique more accurate. Here, the rough surface scattering is represented by microfacets, which make it possible to compute the scattered field in all possible directions. New optimization techniques, such as dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and to make the ray tracing technique more efficient. In conjunction with the ray tracing technique, a probability-based coverage optimization algorithm is incorporated to form a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects using the DQS technique and by decreasing the ray-object intersection time using the COF technique. The coverage optimization algorithm, based on probability theory, finds the minimum number of transmitters and their corresponding positions in order to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve upon those of existing algorithms. For the verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by experimental results.
Value and probability coding in a feedback-based learning task utilizing food rewards.
Tricomi, Elizabeth; Lempert, Karolina M
2015-01-01
For the consequences of our actions to guide behavior, the brain must represent different types of outcome-related information. For example, an outcome can be construed as negative because an expected reward was not delivered or because an outcome of low value was delivered. Thus behavioral consequences can differ in terms of the information they provide about outcome probability and value. We investigated the role of the striatum in processing probability-based and value-based negative feedback by training participants to associate cues with food rewards and then employing a selective satiety procedure to devalue one food outcome. Using functional magnetic resonance imaging, we examined brain activity related to receipt of expected rewards, receipt of devalued outcomes, omission of expected rewards, omission of devalued outcomes, and expected omissions of an outcome. Nucleus accumbens activation was greater for rewarding outcomes than devalued outcomes, but activity in this region did not correlate with the probability of reward receipt. Activation of the right caudate and putamen, however, was largest in response to rewarding outcomes relative to expected omissions of reward. The dorsal striatum (caudate and putamen) at the time of feedback also showed a parametric increase correlating with the trialwise probability of reward receipt. Our results suggest that the ventral striatum is sensitive to the motivational relevance, or subjective value, of the outcome, while the dorsal striatum codes for a more complex signal that incorporates reward probability. Value and probability information may be integrated in the dorsal striatum, to facilitate action planning and allocation of effort.
PROBABILITY BASED CORROSION CONTROL FOR HIGH LEVEL WASTE TANKS: INTERIM REPORT
Hoffman, E; Karthik Subramanian, K
2008-04-23
Controls on the solution chemistry (minimum nitrite and hydroxide concentrations) are in place to prevent the initiation and propagation of pitting and stress corrosion cracking in high level waste (HLW) tanks. These controls are based upon a series of experiments performed on carbon steel coupons in simulated waste solutions. An experimental program was undertaken to investigate reducing the minimum molar nitrite concentration required to confidently inhibit pitting. A statistical basis to quantify the probability of pitting for the tank wall, when exposed to various dilute solutions, is being developed. Electrochemical and coupon testing are being performed within the framework of the statistical test matrix to determine the minimum necessary inhibitor concentrations and to develop a quantitative model to predict pitting propensity. A subset of the original statistical test matrix was used to develop an applied understanding of the corrosion response of the carbon steel in the various environments. The interim results suggest that there exists some critical nitrite concentration that sufficiently inhibits localized corrosion mechanisms due to nitrates/chlorides/sulfates, beyond which further nitrite additions are unnecessary. The combination of visual observation and cyclic potentiodynamic polarization scans indicates the potential for significant inhibitor reductions without consequence, specifically at nitrate concentrations near 1 M. The complete data sets will be used to determine the statistical basis to confidently inhibit pitting using nitrite inhibition with the current pH controls. Once complete, a revised chemistry control program will be devised based upon the probability of pitting specifically for dilute solutions, allowing for tank-specific chemistry control implementation.
Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports
NASA Technical Reports Server (NTRS)
Bauman, William H., III; Sharp, David; Spratt, Scott; Lafosse, Richard A.
2008-01-01
The objective of this work was to provide forecasters with a tool to indicate the warm season climatological probability of one or more lightning strikes within a circle at a site within a specified time interval. This paper described the AMU work conducted in developing flow regime based climatologies of lightning probabilities for the SLF and seven airports in the NWS MLB CWA in east-central Florida. The paper also described the GUI developed by the AMU that is used to display the data for the operational forecasters. There were challenges working with gridded lightning data as well as the code that accompanied the gridded data. The AMU modified the provided code to be able to produce the climatologies of lightning probabilities based on eight flow regimes for 5-, 10-, 20-, and 30-n mi circles centered on eight sites in 1-, 3-, and 6-hour increments.
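Such a climatology reduces to relative frequencies keyed by flow regime, circle radius, and time window. A minimal sketch with illustrative counts (not the AMU dataset); the regime labels and numbers are assumptions:

```python
# climatology[(flow regime, radius in n mi, window in hours)] ->
#   (days with >= 1 strike in the circle/window, total days in regime).
# These counts are made up for illustration, not AMU data.
climatology = {
    ("SW-1", 5, 3): (95, 250),
    ("NE-1", 5, 3): (30, 240),
}

def strike_probability(regime, radius_nmi, hours):
    """Climatological probability of one or more lightning strikes,
    as a relative frequency over all historical days in the regime."""
    hits, total = climatology[(regime, radius_nmi, hours)]
    return hits / total

p_sw = strike_probability("SW-1", 5, 3)   # -> 0.38
```

A forecaster GUI like the one described would simply look up the entry for the forecast flow regime, the site's circle size, and the time increment of interest.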
Design of an activity landscape view taking compound-based feature probabilities into account
NASA Astrophysics Data System (ADS)
Zhang, Bijun; Vogt, Martin; Bajorath, Jürgen
2014-09-01
Activity landscapes (ALs) of compound data sets are rationalized as graphical representations that integrate similarity and potency relationships between active compounds. ALs enable the visualization of structure-activity relationship (SAR) information and are thus computational tools of interest for medicinal chemistry. For AL generation, similarity and potency relationships are typically evaluated in a pairwise manner and major AL features are assessed at the level of compound pairs. In this study, we add a conditional probability formalism to AL design that makes it possible to quantify the probability of individual compounds to contribute to characteristic AL features. Making this information graphically accessible in a molecular network-based AL representation is shown to further increase AL information content and helps to quickly focus on SAR-informative compound subsets. This feature probability-based AL variant extends the current spectrum of AL representations for medicinal chemistry applications.
NASA Astrophysics Data System (ADS)
Bo, Xiaoming; Chen, Zujue
2004-04-01
Capacity analysis and call admission control in wireless communication systems are essential for system design and operation. The capacity of imperfectly power-controlled multimedia code division multiple access (CDMA) networks, based on a system outage probability constraint, is presented and analyzed. A handoff-prioritized call admission scheme is then developed based on the derived system capacity and evaluated using the K-dimensional birth-death process model. A more general assumption, that the average channel holding times for new calls and handoff calls are not equal, and an effective approximate model are adopted in the performance analysis. Numerical examples are given to demonstrate the system performance in terms of blocking probabilities, resource utilization and average system throughput. It is shown that system parameters such as the outage probability constraint and power control errors have a great impact on system capacity and performance.
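For intuition, the blocking-probability side of such an analysis can be sketched with the classical one-dimensional Erlang-B recurrence. The paper's K-dimensional birth-death model generalizes this to multiple traffic classes and handoff priority, so the snippet below is a simplified stand-in, not the paper's model:

```python
def erlang_b(traffic_erlangs, channels):
    """Erlang-B blocking probability via the numerically stable
    recurrence B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)),
    where a is the offered traffic in Erlangs."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# blocking seen by calls offering 2 Erlangs to a 4-channel cell
p_block = erlang_b(2.0, 4)
```

Adding channels at fixed offered traffic drives the blocking probability down, which is the basic trade-off a capacity analysis quantifies before layering on outage constraints and handoff priority.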
NASA Astrophysics Data System (ADS)
Huang, Q. Z.; Hsu, S. Y.; Li, M. H.
2016-12-01
Long-term streamflow prediction is important not only for estimating the water storage of reservoirs but also for surface water intakes, which supply households, agriculture, and industry. Climatological forecasts of streamflow have traditionally been used for calculating the exceedance probability curve of streamflow and for water resource management. In this study, we proposed a stochastic approach to predict the exceedance probability curve of long-term streamflow with the seasonal weather outlook from the Central Weather Bureau (CWB), Taiwan. The approach incorporates a statistical downscaling weather generator and a catchment-scale hydrological model to convert the monthly outlook into daily rainfall and temperature series and to simulate the streamflow based on the outlook information. Moreover, we applied Bayes' theorem to derive a method for calculating the exceedance probability curve of the reservoir inflow based on the seasonal weather outlook and its imperfection. The results show that our approach can produce exceedance probability curves that reflect the three-month weather outlook and its accuracy. We also show how an improvement of the weather outlook affects the predicted exceedance probability curves of the streamflow. Our approach should be useful for the seasonal planning and management of water resources and for their risk assessment.
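The Bayes step can be sketched as a total-probability blend: the exceedance probability of streamflow given each weather category, weighted by the outlook probabilities and by how often a forecast category actually verifies. All numbers below are illustrative placeholders, not CWB or catchment data, and the "equally likely when wrong" assumption is ours:

```python
def exceedance_probability(outlook_probs, accuracy, cond_exceedance):
    """P(flow > q) given a categorical (below/normal/above) outlook.

    outlook_probs[c]   -- forecast probability of weather category c
    accuracy           -- P(category occurs | it was forecast)
    cond_exceedance[c] -- P(flow > q | category c), e.g. from
                          hydrological simulations

    When the forecast category does not verify, the remaining
    categories are assumed equally likely (a simplifying assumption)."""
    n = len(outlook_probs)
    total = 0.0
    for c, pf in enumerate(outlook_probs):
        for actual in range(n):
            p_actual = accuracy if actual == c else (1 - accuracy) / (n - 1)
            total += pf * p_actual * cond_exceedance[actual]
    return total

p_exc = exceedance_probability([0.2, 0.3, 0.5], 0.6, [0.1, 0.3, 0.7])
```

Note that with a perfect outlook (accuracy = 1) the result collapses to the simple dot product of the outlook probabilities with the conditional exceedance probabilities; lower accuracy pulls the curve back toward climatology.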
Detecting Probable Cheating during Online Assessments Based on Time Delay and Head Pose
ERIC Educational Resources Information Center
Chuang, Chia Yuan; Craig, Scotty D.; Femiani, John
2017-01-01
This study investigated the ability of test takers' behaviors during online assessments to detect probable cheating incidents. Specifically, this study focused on the role of time delay and head pose for detection of cheating incidences in a lab-based online testing session. The analysis of a test taker's behavior indicated that not only time…
Learning Probabilities in Computer Engineering by Using a Competency- and Problem-Based Approach
ERIC Educational Resources Information Center
Khoumsi, Ahmed; Hadjou, Brahim
2005-01-01
Our department has redesigned its electrical and computer engineering programs by adopting a learning methodology based on competence development, problem solving, and the realization of design projects. In this article, we show how this pedagogical approach has been successfully used for learning probabilities and their application to computer…
Teaching Probability to Pre-Service Teachers with Argumentation Based Science Learning Approach
ERIC Educational Resources Information Center
Can, Ömer Sinan; Isleyen, Tevfik
2016-01-01
The aim of this study is to explore the effects of the argumentation based science learning (ABSL) approach on the teaching probability to pre-service teachers. The sample of the study included 41 students studying at the Department of Elementary School Mathematics Education in a public university during the 2014-2015 academic years. The study is…
HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA
Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...
The Role of Probability-Based Inference in an Intelligent Tutoring System.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Gitomer, Drew H.
Probability-based inference in complex networks of interdependent variables is an active topic in statistical research, spurred by such diverse applications as forecasting, pedigree analysis, troubleshooting, and medical diagnosis. This paper concerns the role of Bayesian inference networks for updating student models in intelligent tutoring…
ERIC Educational Resources Information Center
Gurbuz, Ramazan
2010-01-01
The purpose of this study is to investigate and compare the effects of activity-based and traditional instructions on students' conceptual development of certain probability concepts. The study was conducted using a pretest-posttest control group design with 80 seventh graders. A developed "Conceptual Development Test" comprising 12…
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2017-04-01
Climate change related to uncontrolled greenhouse gas emissions is expected to modify climate characteristics in a harmful way, increasing the frequency of many precipitation-triggered natural hazards, landslides included. In our study we analyse regional climate model (RCM) projections with the aim of assessing the potential future modifications of rainfall event characteristics linked to shallow landslide triggering, such as event duration, total depth, and inter-arrival time. Factors of change of the mean and the variance of these rainfall-event characteristics are exploited to adjust a stochastic rainfall generator aimed at simulating precipitation series likely to occur in the future. Then Monte Carlo simulations, in which the stochastic rainfall generator and a physically based hydromechanical model are coupled, are carried out to estimate the probability of landslide triggering for future time horizons, and its changes with respect to current climate conditions. The proposed methodology is applied to the Peloritani region in Sicily, Italy, an area that in the past two decades has experienced several catastrophic shallow and rapidly moving landslide events. Different RCM simulations from the Coordinated Regional Climate Downscaling Experiment (CORDEX) initiative are considered in the application, as well as two different emission scenarios, known as Representative Concentration Pathways: intermediate (RCP 4.5) and high-emissions (RCP 8.5). The estimated modifications of rainfall event characteristics differ significantly, both in magnitude and in direction (increase/decrease), from one model to another. RCMs are concordant only in predicting an increase of the mean of inter-event dry intervals. The variance of rainfall depth exhibits the maximum changes (increase or decrease depending on the RCM), and it is the characteristic to which landslide triggering seems to be most sensitive. Some RCMs indicate significant variations of landslide probability due to climate
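The Monte Carlo coupling can be caricatured in a few lines: draw synthetic rainfall events from a stochastic generator and count the years in which some event exceeds an intensity-duration triggering threshold. The generator, its parameters, and the power-law threshold below are illustrative assumptions, not the calibrated rainfall generator or the physically based hydromechanical model of the study:

```python
import random

random.seed(42)

def annual_triggering_probability(n_years, events_per_year=25,
                                  mean_duration_h=12.0,
                                  mean_intensity_mm_h=4.0):
    """Monte Carlo sketch: estimate the probability that at least one
    rainfall event per year exceeds a power-law intensity-duration
    triggering threshold I = 60 * D**-0.55 (a hypothetical curve)."""
    def threshold(d):
        return 60.0 * d ** -0.55

    triggered_years = 0
    for _ in range(n_years):
        hit = False
        for _ in range(events_per_year):
            d = random.expovariate(1.0 / mean_duration_h)      # duration
            i = random.expovariate(1.0 / mean_intensity_mm_h)  # intensity
            if i >= threshold(d):
                hit = True
        if hit:
            triggered_years += 1
    return triggered_years / n_years

p_annual = annual_triggering_probability(n_years=2000)
```

In the study's framework, the factors of change derived from each RCM would perturb the generator's means and variances, and re-running the simulation under the perturbed parameters yields the future triggering probability to compare against this baseline.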
Duality-based calculations for transition probabilities in stochastic chemical reactions
NASA Astrophysics Data System (ADS)
Ohkubo, Jun
2017-02-01
An idea for evaluating transition probabilities in chemical reaction systems is proposed, which is efficient for repeated calculations with various rate constants. The idea is based on duality relations: instead of direct time evolutions of the original reaction system, the dual process is solved. Usually, if one changes the rate constants of the original reaction system, the direct time evolutions must be performed again with the new rate constants. In contrast, a single solution of an extended dual process can be reused to calculate the transition probabilities for various rate-constant cases. The idea is demonstrated in a parameter estimation problem for the Lotka-Volterra system.
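The "direct time evolution" that the duality method avoids can be sketched for a toy birth-death reaction (0 -> A, A -> 0); the rates, truncation, and step count below are illustrative, and the point is that this whole computation must be redone for every new rate-constant pair.

```python
def transition_probs(k_prod, k_deg, n0, t, n_max=30, steps=20000):
    """Direct time evolution of the chemical master equation for
    0 -> A (rate k_prod) and A -> 0 (rate k_deg * n), by explicit Euler.
    Must be re-run from scratch for every new (k_prod, k_deg) pair --
    exactly the cost that duality-based reuse avoids."""
    p = [0.0] * (n_max + 1)
    p[n0] = 1.0
    dt = t / steps
    for _ in range(steps):
        new = p[:]
        for n in range(n_max + 1):
            outflow = k_prod + k_deg * n          # leave state n
            new[n] -= dt * outflow * p[n]
            if n > 0:
                new[n] += dt * k_prod * p[n - 1]  # production into n
            if n < n_max:
                new[n] += dt * k_deg * (n + 1) * p[n + 1]  # degradation into n
        p = new
    return p

# long-time distribution approaches Poisson with mean k_prod / k_deg = 2
p = transition_probs(k_prod=2.0, k_deg=1.0, n0=0, t=10.0)
```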
Mode of delivery and the probability of subsequent childbearing: a population-based register study.
Elvander, C; Dahlberg, J; Andersson, G; Cnattingius, S
2015-11-01
To investigate the relationship between mode of first delivery and the probability of subsequent childbearing. Population-based study. Nationwide study in Sweden. A cohort of 771 690 women who delivered their first singleton infant in Sweden between 1992 and 2010. Using Cox proportional-hazards regression models, risks of subsequent childbearing were compared across four modes of delivery. Hazard ratios (HRs) with 95% confidence intervals (95% CIs) were calculated. Probability of having a second and third child; interpregnancy interval. Compared with women who had a spontaneous vaginal first delivery, women who delivered by vacuum extraction were less likely to have a second pregnancy (HR 0.96, 95% CI 0.95-0.97), and the probabilities of a second childbirth were substantially lower among women with a previous emergency caesarean section (HR 0.85, 95% CI 0.84-0.86) or an elective caesarean section (HR 0.82, 95% CI 0.80-0.83). There were no clinically important differences in the median time between first and second pregnancy by mode of first delivery. Compared with women younger than 30 years of age, older women were more negatively affected by a vacuum extraction with respect to the probability of having a second child. A primary vacuum extraction decreased the probability of having a third child by 4%, but having two consecutive vacuum extraction deliveries did not further alter the probability. A first delivery by vacuum extraction does not reduce the probability of subsequent childbearing to the same extent as a first delivery by emergency or elective caesarean section. © 2014 Royal College of Obstetricians and Gynaecologists.
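To see how a hazard ratio translates into a cumulative probability of subsequent childbearing, here is a hedged sketch assuming constant hazards over time; the baseline yearly hazard is invented for illustration and is not a figure from the study.

```python
import math

def cum_prob(hazard, t):
    """P(second birth within t years) under a constant (exponential) hazard."""
    return 1.0 - math.exp(-hazard * t)

base = 0.10   # illustrative baseline yearly hazard, not from the study
modes = {"spontaneous vaginal": 1.00, "vacuum extraction": 0.96,
         "emergency caesarean": 0.85, "elective caesarean": 0.82}

# HRs scale the hazard; the 10-year probabilities then order the same way
probs = {label: cum_prob(base * hr, 10.0) for label, hr in modes.items()}
```

Real Cox models do not assume a constant baseline hazard; this only illustrates the direction and rough size of the effect an HR below 1 implies.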
Socia, Adam; Foley, Joe P
2014-01-10
This paper demonstrates that sequential elution liquid chromatography (SE-LC), an approach in which two or more elution modes are employed in series for the separation of two or more groups of compounds, can be used to separate not only weak acids (or weak bases) from neutral compounds, but also weak acids and weak bases from neutral compounds (and each other), by the sequential application of either of two types of an extended pH gradient prior to a solvent gradient. It also details a comparison, based on peak capacity and separation disorder, of the probability of success of this approach with that of the unimodal elution approach taken by conventional column liquid chromatography. For an HPLC peak capacity of 120 and samples of moderate complexity (e.g., 12 components), the probability of success (Rs≥1) increases from 37.9% (HPLC) to 85.8% (SE-LC). Different columns were evaluated for their utility for SE-LC using the following criteria: (1) the prediction of the elution order of the groups based on the degree of ionization of the compounds; and (2) the closeness of the peak shape to the ideal Gaussian distribution. The best columns overall were the Zorbax SB-AQ and Waters XBridge Shield columns, as they provided both between-class and within-class separations of all compounds, as well as the lowest degree of tailing of 4-ethylaniline using the pH 2 to pH 8 gradient.
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2014-05-01
Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution (CPD) method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of input error. Moreover, we observed the paradox that in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise the CPD-computed probabilities will decrease. We therefore advise against using the CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimates still overlap the true death time interval.
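A minimal sketch of a CPD-style computation, assuming the death time estimate is normally distributed and truncated to the true interval (the original CPD formulation may differ in detail), reproduces the boundary paradox described above.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cpd_prob(mu, sigma, a, b, c, d):
    """P(death time in [c, d]) given an estimate ~ N(mu, sigma) truncated
    to the true interval [a, b] fixed by 'last seen alive'/'found dead'."""
    z = phi((b - mu) / sigma) - phi((a - mu) / sigma)    # truncation mass
    num = phi((d - mu) / sigma) - phi((c - mu) / sigma)  # mass in [c, d]
    return num / z

# no-alibi interval [9, 10] h at the boundary of the true interval [9, 15] h;
# shifting the estimate outward (mu = 8 instead of 10) *increases* the
# computed probability for that boundary interval -- the paradox in the text
p_unbiased = cpd_prob(mu=10.0, sigma=2.0, a=9.0, b=15.0, c=9.0, d=10.0)
p_shifted  = cpd_prob(mu=8.0,  sigma=2.0, a=9.0, b=15.0, c=9.0, d=10.0)
```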
Performance of the Rayleigh task based on the posterior probability of tomographic reconstructions
Hanson, K.M.
1991-01-01
We seek the best possible performance of the Rayleigh task in which one must decide whether a perceived object is a pair of Gaussian-blurred points or a blurred line. Two Bayesian reconstruction algorithms are used, the first based on a Gaussian prior-probability distribution with a nonnegativity constraint and the second based on an entropic prior. In both cases, the reconstructions are found that maximize the posterior probability. We compare the performance of the Rayleigh task obtained with two decision variables, the logarithm of the posterior probability ratio and the change in the mean-squared deviation from the reconstruction. The method of evaluation is based on the results of a numerical testing procedure in which the stated discrimination task is carried out on reconstructions of a randomly generated sequence of images. The ability to perform the Rayleigh task is summarized in terms of a discrimination index that is derived from the area under the receiver-operating characteristic (ROC) curve. We find that the use of the posterior probability does not result in better performance of the Rayleigh task than the mean-squared deviation from the reconstruction. 10 refs., 6 figs.
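The ROC-area-based discrimination index can be sketched as follows; the decision-variable samples are invented, and d_a = sqrt(2) * Phi^{-1}(AUC) is one common definition of an index derived from the area under the ROC curve.

```python
import math

def auc(signal, noise):
    """Area under the ROC curve via the Mann-Whitney statistic."""
    wins = sum((s > n) + 0.5 * (s == n) for s in signal for n in noise)
    return wins / (len(signal) * len(noise))

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-8.0, hi=8.0):
    """Inverse normal CDF by bisection (ample precision for this use)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def discrimination_index(signal, noise):
    """d_a = sqrt(2) * Phi^{-1}(AUC), an index derived from the ROC area."""
    return math.sqrt(2.0) * phi_inv(auc(signal, noise))

pairs = [2.1, 2.4, 3.0, 3.3, 2.8]   # decision variable, "two points" truth
lines = [1.0, 1.6, 2.2, 1.9, 1.4]   # decision variable, "line" truth
d_a = discrimination_index(pairs, lines)
```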
Karanki, Durga Rao; Kushwaha, Hari Shankar; Verma, Ajit Kumar; Ajit, Srividya
2009-05-01
A wide range of uncertainties will inevitably be introduced during the process of performing a safety assessment of engineering systems. The impact of all these uncertainties must be addressed if the analysis is to serve as a tool in the decision-making process. Uncertainties present in the components (input parameters of the model or basic events) are propagated to quantify their impact on the final results. There are several methods available in the literature, namely, the method of moments, discrete probability analysis, Monte Carlo simulation, fuzzy arithmetic, and Dempster-Shafer theory. The methods differ both in how uncertainty is characterized at the component level and in how it is propagated to the system level. Each has desirable and undesirable features, making it more or less useful in different situations. In the probabilistic framework, which is most widely used, a probability distribution is used to characterize uncertainty. However, in situations in which one cannot specify (1) parameter values for input distributions, (2) precise probability distributions (shape), and (3) dependencies between input parameters, these methods have limitations and are found to be ineffective. In order to address some of these limitations, the article presents uncertainty analysis in the context of level-1 probabilistic safety assessment (PSA) based on a probability bounds (PB) approach. PB analysis combines probability theory and interval arithmetic to produce probability boxes (p-boxes), structures that allow comprehensive propagation through calculations in a rigorous way. A practical case study is also carried out with the developed code based on the PB approach and compared with two-phase Monte Carlo simulation results.
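A minimal illustration of the probability-bounds idea, assuming an exponential failure time whose rate is known only as an interval (a far simpler setting than a full p-box calculus over arbitrary distributions):

```python
import math

def exp_cdf(t, lam):
    """Failure probability by time t for an exponential failure time."""
    return 1.0 - math.exp(-lam * t)

def p_box(t_grid, lam_lo, lam_hi):
    """Probability box (lower/upper CDF bounds) for an exponential failure
    time whose rate is only known to lie in [lam_lo, lam_hi]. The CDF is
    monotone increasing in the rate, so the bounds come from the interval
    endpoints; general p-box arithmetic handles non-monotone cases too."""
    lower = [exp_cdf(t, lam_lo) for t in t_grid]  # smallest failure prob.
    upper = [exp_cdf(t, lam_hi) for t in t_grid]  # largest failure prob.
    return lower, upper

grid = [0.5 * i for i in range(1, 7)]             # times 0.5 .. 3.0
lo, hi = p_box(grid, lam_lo=0.5, lam_hi=1.5)      # interval-valued rate
```

Any distribution whose CDF lies between `lo` and `hi` at every grid point is consistent with the stated knowledge; no single shape or parameter value is asserted.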
Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs
Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang
2017-01-01
Fog (from core to edge) computing is a newly emerging computing platform that utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, and thus has great development potential. However, security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform must preserve the security of a huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiencies of the LM scheme while achieving a longer network lifetime and more efficient use of storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information is migrated to nodes at a longer distance from the sink to increase the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12–1.28 times the amount of marking information that the equal probability marking approach achieves, and has 1.15–1.26 times the storage utilization efficiency of other schemes. PMID:28629135
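The distance-dependent marking probability can be sketched as follows; the linear form and the endpoint probabilities are illustrative assumptions, not values from the paper.

```python
def marking_probability(hops_to_sink, max_hops, p_near=0.2, p_far=0.9):
    """Assign a larger marking probability to nodes far from the sink and a
    smaller one to nodes near it, as in the UPLM idea. The linear ramp and
    the endpoint values p_near/p_far are hypothetical, chosen only to show
    the monotone distance dependence."""
    frac = hops_to_sink / max_hops
    return p_near + (p_far - p_near) * frac

# probabilities for a 10-hop tree, sink at hop 0
probs = [marking_probability(h, 10) for h in range(11)]
```

Near-sink nodes relay the most traffic, so giving them a low marking probability spreads the storage and energy load outward, which is the intuition the abstract describes.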
Value and probability coding in a feedback-based learning task utilizing food rewards
Lempert, Karolina M.
2014-01-01
For the consequences of our actions to guide behavior, the brain must represent different types of outcome-related information. For example, an outcome can be construed as negative because an expected reward was not delivered or because an outcome of low value was delivered. Thus behavioral consequences can differ in terms of the information they provide about outcome probability and value. We investigated the role of the striatum in processing probability-based and value-based negative feedback by training participants to associate cues with food rewards and then employing a selective satiety procedure to devalue one food outcome. Using functional magnetic resonance imaging, we examined brain activity related to receipt of expected rewards, receipt of devalued outcomes, omission of expected rewards, omission of devalued outcomes, and expected omissions of an outcome. Nucleus accumbens activation was greater for rewarding outcomes than devalued outcomes, but activity in this region did not correlate with the probability of reward receipt. Activation of the right caudate and putamen, however, was largest in response to rewarding outcomes relative to expected omissions of reward. The dorsal striatum (caudate and putamen) at the time of feedback also showed a parametric increase correlating with the trialwise probability of reward receipt. Our results suggest that the ventral striatum is sensitive to the motivational relevance, or subjective value, of the outcome, while the dorsal striatum codes for a more complex signal that incorporates reward probability. Value and probability information may be integrated in the dorsal striatum, to facilitate action planning and allocation of effort. PMID:25339705
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-12-01
by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Probability-based sorting showed improved similarity of the breathing-motion PDF of the 4D images to the reference PDF compared with single-cycle sorting, indicated by a significant increase in the Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, versus single-cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation of motion artifacts and in quantitative evaluation of tumor-volume precision and accuracy and of the accuracy of the AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multi-cycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of the tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.
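The DSC comparison can be illustrated with a small sketch; the histogram-intersection form of the Dice coefficient used here for discretized PDFs, and the numbers, are illustrative assumptions rather than the paper's exact computation.

```python
def dice(pdf_a, pdf_b):
    """Dice similarity coefficient between two discretized breathing-motion
    PDFs, in the histogram-intersection form 2*sum(min) / (sum(a) + sum(b)).
    For binary masks this reduces to the usual 2|A∩B| / (|A| + |B|)."""
    inter = sum(min(a, b) for a, b in zip(pdf_a, pdf_b))
    return 2.0 * inter / (sum(pdf_a) + sum(pdf_b))

reference = [0.05, 0.20, 0.50, 0.20, 0.05]   # reference motion PDF (invented)
sorted_4d = [0.10, 0.25, 0.40, 0.20, 0.05]   # PDF from sorted 4D images
similarity = dice(reference, sorted_4d)
```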
Nichols, J.D.; Sauer, J.R.; Pollock, K.H.; Hestbeck, J.B.
1992-01-01
In stage-based demography, animals are often categorized into size (or mass) classes, and size-based probabilities of surviving and changing mass classes must be estimated before demographic analyses can be conducted. In this paper, we develop two procedures for the estimation of mass transition probabilities from capture-recapture data. The first approach uses a multistate capture-recapture model that is parameterized directly with the transition probabilities of interest. Maximum likelihood estimates are then obtained numerically using program SURVIV. The second approach involves a modification of Pollock's robust design. Estimation proceeds by conditioning on animals caught in a particular class at time i, and then using closed models to estimate the number of these that are alive in other classes at i + 1. Both methods are illustrated by application to meadow vole, Microtus pennsylvanicus, capture-recapture data. The two methods produced reasonable estimates that were similar. Advantages of these two approaches include the directness of estimation, the absence of need for restrictive assumptions about the independence of survival and growth, the testability of assumptions, and the testability of related hypotheses of ecological interest (e.g., the hypothesis of temporal variation in transition probabilities).
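A naive count-based version of transition-probability estimation can be sketched as follows; note that the paper's multistate model additionally accounts for imperfect detection and survival, which this sketch deliberately ignores.

```python
def transition_matrix(transitions, classes):
    """MLE of mass-class transition probabilities from (class_at_i,
    class_at_i+1) records of recaptured animals: row-normalized counts.
    Rows with no observations are returned as None."""
    counts = {a: {b: 0 for b in classes} for a in classes}
    for a, b in transitions:
        counts[a][b] += 1
    matrix = {}
    for a in classes:
        total = sum(counts[a].values())
        matrix[a] = ({b: counts[a][b] / total for b in classes}
                     if total else None)
    return matrix

# invented recapture records for three mass classes
obs = [("small", "small"), ("small", "medium"), ("small", "medium"),
       ("medium", "medium"), ("medium", "large"), ("large", "large")]
P = transition_matrix(obs, ["small", "medium", "large"])
```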
Doubravsky, Karel; Dohnal, Mirko
2015-01-01
Complex decision-making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists/engineers are usually not willing to invest too much time in the study of complex formal theories. They require decisions that can be (re)checked by human-like common-sense reasoning. One important problem related to realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing III (input information items) can be generated, mainly using decision tree topologies, and integrated into incomplete data sets. The algorithm is based on an easy-to-understand heuristic, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (e.g. fuzzy probabilities), are usually available, meaning that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662
Delavande, Adeline; Rohwedder, Susann
2013-01-01
Cross-country comparisons of differential survival by socioeconomic status (SES) are useful in many domains. Yet, to date, such studies have been rare. Reliably estimating differential survival in a single country has been challenging because it requires rich panel data with a large sample size. Cross-country estimates have proven even more difficult because the measures of SES need to be comparable internationally. We present an alternative method for acquiring information on differential survival by SES. Rather than using observations of actual survival, we relate individuals’ subjective probabilities of survival to SES variables in cross section. To show that subjective survival probabilities are informative proxies for actual survival when estimating differential survival, we compare estimates of differential survival based on actual survival with estimates based on subjective probabilities of survival for the same sample. The results are remarkably similar. We then use this approach to compare differential survival by SES for 10 European countries and the United States. Wealthier people have higher survival probabilities than those who are less wealthy, but the strength of the association differs across countries. Nations with a smaller gradient appear to be Belgium, France, and Italy, while the United States, England, and Sweden appear to have a larger gradient. PMID:22042664
NASA Astrophysics Data System (ADS)
Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei
2011-04-01
Detection probability is an important index for representing and estimating target viability, and it provides a basis for target recognition and decision-making. Obtaining detection probabilities in practice, however, consumes a great deal of time and manpower, and because personnel differ in practical knowledge and experience, the data obtained often vary widely. By studying the relationship between image features and perception quantity through psychological experiments, a probability model has been established as follows. First, four image features that directly affect detection were extracted and quantified, and four feature-similarity degrees between target and background were defined. Second, the relationship between each single image-feature similarity degree and perception quantity was set up on psychological principles, and psychological target-interpretation experiments were designed involving about five hundred interpreters and two hundred images. To reduce the correlation between image features, many synthetic images were generated, including images differing only in brightness, only in chromaticity, only in texture, and only in shape. By analyzing and fitting the large body of experimental data, the model quantities were determined. Finally, by applying statistical decision theory to the experimental results, the relationship between perception quantity and target detection probability was established. Verified against a large number of practical target interpretations, the model yields target detection probabilities quickly and objectively.
NASA Astrophysics Data System (ADS)
Wei, Robert P.; Harlow, D. Gary
2005-01-01
Life prediction and reliability assessment are essential components for the life-cycle engineering and management (LCEM) of modern engineered systems. These systems can range from microelectronic and bio-medical devices to large machinery and structures. To be effective, the underlying approach to LCEM must be transformed to embody mechanistically based probability modelling, vis-à-vis the more traditional experientially based statistical modelling, for predicting damage evolution and distribution. In this paper, the probability and statistical approaches are compared and differentiated. The process of model development on the basis of mechanistic understanding derived from critical experiments is illustrated through selected examples. The efficacy of this approach is illustrated through an example of the evolution and distribution of corrosion and corrosion fatigue damage in aluminium alloys in relation to aircraft that had been in long-term service.
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, in which the IV probability is defined by a pair of set-theoretic functions satisfying some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated, entailing more intelligent decision-making strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on global statistical correlation information. By this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
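Evidence combination in this evidential-reasoning setting is commonly done with Dempster's rule; here is a minimal sketch with invented mass functions for two sources (the class labels and masses are hypothetical, not from the paper's data set).

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; mass on empty intersections (conflict) is
    discarded and the rest renormalized."""
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

water, forest = frozenset({"water"}), frozenset({"forest"})
either = water | forest                      # ignorance: "water or forest"
m_src1 = {water: 0.6, either: 0.4}                 # evidence from source 1
m_src2 = {water: 0.5, forest: 0.3, either: 0.2}    # evidence from source 2
m = dempster_combine(m_src1, m_src2)
```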
NASA Astrophysics Data System (ADS)
Toroody, Ahmad Bahoo; Abaiee, Mohammad Mahdi; Gholamnia, Reza; Ketabdari, Mohammad Javad
2016-09-01
Owing to the increase in unprecedented accidents with new root causes in almost all operational areas, the importance of risk management has dramatically risen. Risk assessment, one of the most significant aspects of risk management, has a substantial impact on the system-safety level of organizations, industries, and operations. If the causes of all kinds of failure and the interactions between them are considered, effective risk assessment can be highly accurate. A combination of traditional risk assessment approaches and modern scientific probability methods can help in realizing better quantitative risk assessment methods. Most researchers face the problem of minimal field data with respect to the probability and frequency of each failure. Because of this limitation in the availability of epistemic knowledge, it is important to conduct epistemic estimations by applying the Bayesian theory for identifying plausible outcomes. In this paper, we propose an algorithm and demonstrate its application in a case study for a light-weight lifting operation in the Persian Gulf of Iran. First, we identify potential accident scenarios and present them in an event tree format. Next, excluding human error, we use the event tree to roughly estimate the prior probability of other hazard-promoting factors using a minimal amount of field data. We then use the Success Likelihood Index Method (SLIM) to calculate the probability of human error. On the basis of the proposed event tree, we use the Bayesian network of the provided scenarios to compensate for the lack of data. Finally, we determine the resulting probability of each event based on its evidence in the epistemic estimation format by building on two Bayesian network types: the probability of hazard promotion factors and the Bayesian theory. The study results indicate that despite the lack of available information on the operation of floating objects, a satisfactory result can be achieved using epistemic data.
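The SLIM step can be sketched as a log-linear calibration; the anchor tasks, SLI values, and initiating-event probability below are invented for illustration, not figures from the case study.

```python
import math

def calibrate(sli1, hep1, sli2, hep2):
    """Fit the SLIM relation log10(HEP) = a * SLI + b from two anchor
    tasks whose human error probabilities are known."""
    a = (math.log10(hep1) - math.log10(hep2)) / (sli1 - sli2)
    b = math.log10(hep1) - a * sli1
    return a, b

def slim_hep(sli, a, b):
    """Human error probability for a task with success likelihood index sli."""
    return 10.0 ** (a * sli + b)

# hypothetical anchors: an easy task (SLI 9, HEP 1e-4) and a hard one
# (SLI 1, HEP 1e-1); the lifting-operation task is assumed to score SLI 5
a, b = calibrate(9.0, 1e-4, 1.0, 1e-1)
hep = slim_hep(5.0, a, b)

# one event-tree branch: initiating event followed by human error
p_accident = 0.02 * hep
```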
Probability based earthquake load and resistance factor design criteria for offshore platforms
Bea, R.G.
1996-12-31
This paper describes a probabilistic, reliability-based formulation for determining earthquake Load and Resistance Factor Design (LRFD) parameters for conventional steel, pile-supported, tubular-membered platforms, proposed as a basis for earthquake design criteria and guidelines for offshore platforms intended to have worldwide applicability. The formulation is illustrated by application to platforms located in five areas: offshore California, Venezuela (Rio Caribe), the East Coast of Canada, the Caspian Sea (Azeri), and the Norwegian sector of the North Sea.
Probability Prediction of a Nation’s Internal Conflict Based on Instability
2008-06-01
Master's thesis by Shian-kuen Wann, Naval Postgraduate School, Monterey, CA 93943-5000.
The development of posterior probability models in risk-based integrity modeling.
Thodi, Premkumar N; Khan, Faisal I; Haddara, Mahmoud R
2010-03-01
There is a need for accurate modeling of the mechanisms causing material degradation of equipment in process installations, to ensure the safety and reliability of the equipment. Degradation mechanisms are stochastic processes. They can best be described using risk-based approaches. Risk-based integrity assessment quantifies the level of risk to which the individual components are subjected and provides means to mitigate them in a safe and cost-effective manner. The uncertainty and variability in structural degradation can best be modeled by probability distributions. Prior probability models provide an initial description of the degradation mechanisms. As more inspection data become available, these prior probability models can be revised to obtain posterior probability models, which represent the current system and can be used to predict future failures. In this article, a rejection sampling-based Metropolis-Hastings (M-H) algorithm is used to develop posterior distributions. The M-H algorithm is a Markov chain Monte Carlo algorithm used to generate a sequence of posterior samples without actually knowing the normalizing constant. Ignoring the transient samples in the generated Markov chain, the steady-state samples are rejected or accepted based on an acceptance criterion. To validate the estimated parameters of the posterior models, the analytical Laplace approximation method is used to compute the integrals involved in the posterior function. Results of the M-H algorithm and Laplace approximations are compared with conjugate-pair estimations of known prior and likelihood combinations. The M-H algorithm provides better results and hence is used for posterior development of the selected priors for corrosion and cracking.
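A minimal random-walk M-H sketch on an invented one-dimensional posterior (the article's degradation models are far richer) illustrates the accept/reject step, the irrelevance of the normalizing constant, and burn-in.

```python
import math, random

def metropolis_hastings(log_post, x0, n, step=0.5, burn=1000, seed=42):
    """Random-walk Metropolis-Hastings. Only log-posterior *differences*
    are used, so the normalizing constant is never needed; the first
    `burn` transient samples are discarded, as described in the text."""
    rng = random.Random(seed)
    x, samples = x0, []
    for i in range(n + burn):
        prop = x + rng.gauss(0.0, step)            # symmetric proposal
        if math.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop                               # accept; else keep x
        if i >= burn:
            samples.append(x)
    return samples

# illustrative target: posterior proportional to a N(2, 0.5^2) density
log_post = lambda x: -((x - 2.0) ** 2) / (2 * 0.5 ** 2)
draws = metropolis_hastings(log_post, x0=0.0, n=20000)
mean = sum(draws) / len(draws)   # should be close to 2
```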
Acceptance Control Charts with Stipulated Error Probabilities Based on Poisson Count Data
1973-01-01
Mhatre, Suresh; Scheaffer, Richard L.; Leavenworth, Richard S.
Department of Industrial and Systems Engineering, University of Florida, Gainesville. Office of Naval Research contract N00014-75-C-0783.
NASA Astrophysics Data System (ADS)
Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei
2014-04-01
Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the Logistic map, to generate the pseudo-random numbers that are mapped to the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: the probability distribution property and search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are characterized by the probability density function (PDF) and the Lyapunov exponent, respectively. The computational performance of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents is compared, where BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of the chaotic sequences from different maps significantly affect the global searching capability and optimization efficiency of COAs. To achieve high efficiency, it is recommended to adopt a chaotic map that generates sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
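The recommendation above hinges on the Lyapunov exponent of the generating map. For the Logistic map at mu = 4 it can be estimated numerically as the orbit average of log|f'(x)|; the exact value is ln 2. This is a sketch under that standard setup, not code from the paper:

```python
import numpy as np

def lyapunov_logistic(mu=4.0, x0=0.3, n=200_000, burn_in=1000):
    """Estimate the Lyapunov exponent of x -> mu*x*(1-x) as the orbit
    average of log|f'(x)| = log|mu*(1 - 2x)|."""
    x = x0
    for _ in range(burn_in):                 # discard the transient
        x = mu * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += np.log(abs(mu * (1.0 - 2.0 * x)))
        x = mu * x * (1.0 - x)
    return total / n

# At mu = 4 the map is fully chaotic; its invariant density
# 1/(pi*sqrt(x(1-x))) is non-uniform (peaked near 0 and 1), which is
# exactly the PDF property the abstract argues matters for COA performance.
lam = lyapunov_logistic()
```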
NASA Astrophysics Data System (ADS)
Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru
2015-10-01
An integrated method combining a proper orthogonal decomposition based reduced-order model (ROM) with data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation, and the difference in the predicted flow fields is evaluated with a focus on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at a Reynolds number of 1000. The PF and EnKF are employed to estimate the temporal coefficients of the ROM based on the observed velocity components in the wake of the cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF, owing to the flexibility of the PF in representing a PDF compared to the EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders of magnitude faster than the reference numerical simulation based on the Navier-Stokes equations.
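A toy illustration of the particle filter used above, here a bootstrap PF on a scalar linear-Gaussian model (a hypothetical stand-in for one of the ROM's temporal coefficients, not the cylinder-wake system):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf(observations, n_particles=500, a=0.9, q=0.1, r=0.5):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2); returns the filtered posterior mean per step."""
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in observations:
        particles = a * particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)                # weight
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Resampling keeps the ensemble a sample-based representation of the
        # PDF -- the flexibility the abstract credits the PF with.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Identical-twin style check: simulate a "truth", observe it with noise,
# and compare the filtered estimate against the raw observations.
T, a, q, r = 50, 0.9, 0.1, 0.5
xs, ys = np.empty(T), np.empty(T)
x = 0.0
for t in range(T):
    x = a * x + rng.normal(0.0, q)
    xs[t] = x
    ys[t] = x + rng.normal(0.0, r)
est = bootstrap_pf(ys)
```

With the model known exactly, the filtered root-mean-square error should fall well below the observation noise level.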
Participatory design of probability-based decision support tools for in-hospital nurses.
Jeffery, Alvin D; Novak, Laurie L; Kennedy, Betsy; Dietrich, Mary S; Mion, Lorraine C
2017-06-19
To describe nurses' preferences for the design of a probability-based clinical decision support (PB-CDS) tool for in-hospital clinical deterioration. A convenience sample of bedside nurses, charge nurses, and rapid response nurses (n = 20) from adult and pediatric hospitals completed participatory design sessions with researchers in a simulation laboratory to elicit preferred design considerations for a PB-CDS tool. Following theme-based content analysis, we shared findings with user interface designers and created a low-fidelity prototype. Three major themes and several considerations for design elements of a PB-CDS tool surfaced from end users. Themes focused on "painting a picture" of the patient condition over time, promoting empowerment, and aligning probability information with what a nurse already believes about the patient. The most notable design element consideration included visualizing a temporal trend of the predicted probability of the outcome along with user-selected overlapping depictions of vital signs, laboratory values, and outcome-related treatments and interventions. Participants expressed that the prototype adequately operationalized requests from the design sessions. Participatory design served as a valuable method in taking the first step toward developing PB-CDS tools for nurses. This information about preferred design elements of tools that support, rather than interrupt, nurses' cognitive workflows can benefit future studies in this field as well as nurses' practice.
Target detection in complex scene of SAR image based on existence probability
NASA Astrophysics Data System (ADS)
Liu, Shuo; Cao, Zongjie; Wu, Honggang; Pi, Yiming; Yang, Haiyi
2016-12-01
This study proposes a target detection approach based on target existence probability in complex scenes of a synthetic aperture radar image. Superpixels serve as the basic unit throughout the approach and are assigned to classified scenes using a texture feature. The original and predicted saliency depth values for each scene are derived from the self-information of all labelled superpixels in that scene. Thereafter, the target existence probability is estimated by comparing the two saliency depth values. Lastly, an improved visual attention algorithm, in which the scenes of the saliency map are weighted according to their existence probabilities, produces the target detection result. This algorithm enhances attention for the scene that contains the target. Hence, the proposed approach is self-adapting for complex scenes, and the algorithm is well suited to different detection missions (e.g. vehicle, ship or aircraft detection in the related scenes of road, harbour or airport, respectively). Experimental results on various data show the effectiveness of the proposed method.
2012-02-24
GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainty associated with bringing these new types of energy into the grid. Sandia's software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia's formulation can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
Hanayama, Nobutane; Sibuya, Masaaki
2016-08-01
In modern biology, theories of aging fall mainly into two groups: damage theories and programmed theories. If programmed theories are true, the probability that human beings live beyond a specific age will be zero. In contrast, if damage theories are true, such an age does not exist, and any longevity record will eventually be broken. In this article, to examine which is the case, a special type of binomial model based on the generalized Pareto distribution is applied to data on Japanese centenarians. From the results, it is concluded that the upper limit of the lifetime probability distribution in the Japanese population is estimated to be 123 years.
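A sketch of the kind of estimate described: a generalized Pareto distribution (GPD) fitted to excess lifetimes above a threshold has a finite upper endpoint exactly when its shape parameter is negative. The threshold, shape, and scale below are invented for illustration and are not the paper's fitted values:

```python
import numpy as np
from scipy.stats import genpareto

# Excess lifetimes above a 100-year threshold, drawn from a GPD whose
# negative shape implies a finite endpoint at threshold + scale/|shape|
# (here 100 + 4.0/0.2 = 120 years). All numbers are illustrative.
threshold, shape_true, scale_true = 100.0, -0.2, 4.0
excesses = genpareto.rvs(shape_true, scale=scale_true, size=5000,
                         random_state=np.random.default_rng(2))

# Maximum-likelihood fit with the location fixed at zero
shape_hat, _, scale_hat = genpareto.fit(excesses, floc=0.0)
upper_limit = threshold + scale_hat / abs(shape_hat) if shape_hat < 0 else np.inf
```

A near-zero or positive fitted shape would instead favour the damage-theory reading (no finite limit), which is why the sign of the shape parameter carries the substantive conclusion.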
Arnold, Nina R; Bayen, Ute J; Kuhlmann, Beatrice G; Vaterrodt, Bianca
2013-04-01
According to the probability-matching account of source guessing (Spaniol & Bayen, Journal of Experimental Psychology: Learning, Memory, and Cognition 28:631-651, 2002), when people do not remember the source of an item in a source-monitoring task, they match their source-guessing probabilities to the perceived contingencies between sources and item types. In a source-monitoring experiment, half of the items presented by each of two sources were consistent with schematic expectations about that source, whereas the other half were consistent with schematic expectations about the other source. Participants' source schemas were activated either at the time of encoding or just before the source-monitoring test. After the test, participants judged the contingency between item type and source. Individual parameter estimates of source guessing were obtained via beta-multinomial processing tree modeling (beta-MPT; Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010). We found a significant correlation between perceived contingency and source guessing, as well as a correlation between the deviation of the guessing bias from the true contingency and source memory when participants did not receive the schema information until retrieval. These findings support the probability-matching account.
Prediction of protein secondary structure using probability based features and a hybrid system.
Ghanty, Pradip; Pal, Nikhil R; Mudi, Rajani K
2013-10-01
In this paper, we propose co-occurrence probability-based features for the prediction of protein secondary structure. The features are extracted using the occurrence/nonoccurrence of secondary structures in the protein sequences. We explore two types of features: position-specific (based on the position of the amino acid on fragments of protein sequences) and position-independent (independent of amino acid position on fragments of protein sequences). We use a hybrid system, NEUROSVM, consisting of neural networks and support vector machines for the classification of secondary structures. We propose two schemes, NSVMps and NSVM, for protein secondary structure prediction. NSVMps uses position-specific probability-based features with the NEUROSVM classifier, whereas NSVM uses the same classifier with position-independent probability-based features. The proposed method falls in the single-sequence category of methods because it does not use any sequence profile information such as position-specific scoring matrices (PSSMs) derived from PSI-BLAST. Two widely used datasets, RS126 and CB513, are used in the experiments. The results obtained using the proposed features and the NEUROSVM classifier are better than those of most existing single-sequence prediction methods. Most importantly, the results using NSVMps, which are obtained using lower-dimensional features, are comparable to those of other existing methods. NSVMps and NSVM are finally tested on target proteins of the ninth Critical Assessment of protein Structure Prediction experiment (CASP9). A larger dataset is used to compare the performance of the proposed methods with that of two recent single-sequence prediction methods. We also investigate the impact of the presence of different amino acid residues (in protein sequences) that are responsible for the formation of different secondary structures.
Tosun, Tuğçe; Gür, Ezgi; Balcı, Fuat
2016-01-19
Animals can shape their timed behaviors based on experienced probabilistic relations in a nearly optimal fashion. On the other hand, it is not clear if they adopt these timed decisions by making computations based on previously learnt task parameters (time intervals, locations, and probabilities) or if they gradually develop their decisions based on trial and error. To address this question, we tested mice in the timed-switching task, which required them to anticipate when (after a short or long delay) and at which of the two delay locations a reward would be presented. The probability of short trials differed between test groups in two experiments. Critically, we first trained mice on relevant task parameters by signaling the active trial with a discriminative stimulus and delivered the corresponding reward after the associated delay without any response requirement (without inducing switching behavior). During the test phase, both options were presented simultaneously to characterize the emergence and temporal characteristics of the switching behavior. Mice exhibited timed-switching behavior starting from the first few test trials, and their performance remained stable throughout testing in the majority of the conditions. Furthermore, as the probability of the short trial increased, mice waited longer before switching from the short to long location (experiment 1). These behavioral adjustments were in directions predicted by reward maximization. These results suggest that rather than gradually adjusting their time-dependent choice behavior, mice abruptly adopted temporal decision strategies by directly integrating their previous knowledge of task parameters into their timed behavior, supporting the model-based representational account of temporal risk assessment.
The high order dispersion analysis based on first-passage-time probability in financial markets
NASA Astrophysics Data System (ADS)
Liu, Chenggong; Shang, Pengjian; Feng, Guochen
2017-04-01
The study of first-passage-time (FPT) events in financial time series has attracted broad research interest recently, as it can provide reference points for risk management and investment. In this paper, a new measure, high order dispersion (HOD), is developed based on FPT probability to explore financial time series. The tick-by-tick data of three Chinese stock markets and three American stock markets are investigated. We successfully classify the financial markets by analyzing the scaling properties of the FPT probabilities of the six stock markets and employing the HOD method to compare the differences in the FPT decay curves. By applying the HOD method, it can be concluded that long-range correlation, a fat-tailed broad probability density function, and their coupling with nonlinearity mainly lead to the multifractality of financial time series. Furthermore, we use the fluctuation function of multifractal detrended fluctuation analysis (MF-DFA) to distinguish the markets and obtain results consistent with the HOD method, whereas the HOD method is additionally capable of separating stock markets within the same region. We believe that such explorations are relevant to a better understanding of financial market mechanisms.
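A minimal illustration of FPT statistics on a toy process, with simple random walks standing in for tick-level cumulative returns (the HOD measure itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def first_passage_times(n_walks=5000, n_steps=1000, barrier=5):
    """First time each +/-1 random walk reaches the barrier level;
    walks that never hit within n_steps are dropped."""
    steps = rng.integers(0, 2, size=(n_walks, n_steps), dtype=np.int8) * 2 - 1
    paths = np.cumsum(steps, axis=1, dtype=np.int32)
    hit = paths >= barrier
    ever = hit.any(axis=1)
    fpt = hit.argmax(axis=1) + 1          # step index of the first crossing
    return fpt[ever]

fpt = first_passage_times()
# The FPT distribution of an unbiased walk has a t^(-3/2) power-law tail:
# probability mass decays slowly, the hallmark of the fat tails that FPT
# analysis probes in real market data.
median_fpt = np.median(fpt)
```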
McGinn, Thomas; Jervis, Ramiro; Wisnivesky, Juan; Keitz, Sheri
2008-01-01
Background Clinical prediction rules (CPRs) are tools that clinicians can use to predict the most likely diagnosis, prognosis, or response to treatment in a patient based on individual characteristics. CPRs attempt to standardize, simplify, and increase the accuracy of clinicians' diagnostic and prognostic assessments. The teaching tips series is designed to give teachers advice and materials they can use to attain specific educational objectives. Educational Objectives In this article, we present 3 teaching tips aimed at helping clinical learners use clinical prediction rules and more accurately assess pretest probability in everyday practice. The first tip is designed to demonstrate variability in physician estimation of pretest probability. The second tip demonstrates how the estimate of pretest probability influences the interpretation of diagnostic tests and patient management. The third tip exposes learners to various examples and different types of CPRs and how to apply them in practice. Pilot Testing We field tested all 3 tips with 16 learners, a mix of interns and senior residents. Teacher preparatory time was approximately 2 hours. The field test utilized a board and a data projector; 3 handouts were prepared. The tips were felt to be clear and the educational objectives reached. Potential teaching pitfalls were identified. Conclusion Teaching with these tips will help physicians appreciate the importance of applying evidence to their everyday decisions. In 2 or 3 short teaching sessions, clinicians can become familiar with the use of CPRs in applying evidence consistently in everyday practice. PMID:18491194
Confidence Probability versus Detection Probability
Axelrod, M
2005-08-18
In a discovery sampling activity the auditor seeks to vet an inventory by measuring (or inspecting) a random sample of items from the inventory. When the auditor finds every sample item in compliance, he must then make a confidence statement about the whole inventory. For example, the auditor might say: "We believe that this inventory of 100 items contains no more than 5 defectives with 95% confidence." Note this is a retrospective statement in that it asserts something about the inventory after the sample was selected and measured. Contrast this to the prospective statement: "We will detect the existence of more than 5 defective items in this inventory with 95% probability." The former uses confidence probability while the latter uses detection probability. For a given sample size, the two probabilities need not be equal; indeed, they could differ significantly. Both probabilities depend critically on the auditor's prior belief about the number of defectives in the inventory and on how he defines non-compliance. In other words, the answer strongly depends on how the question is framed.
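Detection probability for the prospective statement follows from the hypergeometric distribution; a small sketch computing the sample size needed to detect a given number of defectives (the choice of 6 defectives operationalizes "more than 5" and is an illustrative assumption):

```python
from math import comb

def detection_probability(N, n, d):
    """P(a simple random sample of n from N items containing d defectives
    includes at least one defective) -- hypergeometric complement."""
    return 1.0 - comb(N - d, n) / comb(N, n)

def min_sample_size(N, d, target=0.95):
    """Smallest n whose detection probability reaches the target."""
    for n in range(1, N + 1):
        if detection_probability(N, n, d) >= target:
            return n

# The prospective statement in the text: detect "more than 5 defectives"
# in an inventory of 100 with 95% probability. If 6 defectives are present,
# the auditor needs to sample this many items:
n_needed = min_sample_size(100, 6, 0.95)   # -> 39
```

Sampling 39 of 100 items is far more than intuition usually suggests, which is exactly the gap between confidence and detection statements the abstract highlights.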
Probability of ventricular fibrillation: allometric model based on the ST deviation
2011-01-01
Background Allometry, in general biology, measures the relative growth of a part in relation to the whole living organism. Using reported clinical data, we apply this concept to evaluate the probability of ventricular fibrillation based on electrocardiographic ST-segment deviation values. Methods Data collected in previous reports were used to fit an allometric model in order to estimate ventricular fibrillation probability. Patients presenting with death, myocardial infarction or unstable angina were included to calculate this probability as VFp = δ + β(ST), in log-log representation, for three different ST deviations. The coefficients δ and β were obtained as the best fit to the clinical data extended over observational periods of 1, 6, 12 and 48 months from the occurrence of the first reported chest pain accompanied by ST deviation. Results The fitting procedure produced the following overall coefficients: average β = 0.46 (maximum 0.62, minimum 0.42); average δ = 1.28 (maximum 1.79, minimum 0.92). For a 2 mm ST deviation, the predicted ventricular fibrillation probability ranged from about 13% at 1 month up to 86% at 4 years after the original cardiac event. Conclusions These results, at least preliminarily, appear acceptable but still call for a full clinical test. The model seems promising, especially if other parameters were taken into account, such as blood cardiac enzyme concentrations, ischemic or infarcted epicardial areas, or ejection fraction. Considering these results and the few references found in the literature, it is concluded that the allometric model shows good practical predictive value to aid medical decisions. PMID:21226961
A multivariate copula-based framework for dealing with hazard scenarios and failure probabilities
NASA Astrophysics Data System (ADS)
Salvadori, G.; Durante, F.; De Michele, C.; Bernardi, M.; Petrella, L.
2016-05-01
This paper is of a methodological nature and deals with the foundations of risk assessment. Several international guidelines have recently recommended selecting appropriate/relevant Hazard Scenarios in order to tame the consequences of (extreme) natural phenomena. In particular, the scenarios should be multivariate, i.e., they should take into account the fact that several variables, generally not independent, may be of interest. In this work, it is shown how a Hazard Scenario can be identified in terms of (i) a specific geometry and (ii) a suitable probability level. Several scenarios, as well as a Structural approach, are presented, and due comparisons are carried out. In addition, it is shown how the Hazard Scenario approach illustrated here is well suited to cope with the notion of Failure Probability, a tool traditionally used for design and risk assessment in engineering practice. All the results outlined throughout the work are based on Copula Theory, which turns out to be a fundamental theoretical apparatus for multivariate risk assessment: formulas for the calculation of the probability of Hazard Scenarios in the general multidimensional case (d ≥ 2) are derived, and useful analytical relationships among the probabilities of occurrence of Hazard Scenarios are presented. In addition, the Extreme Value and Archimedean special cases are dealt with, relationships between dependence ordering and scenario levels are studied, and a counter-example concerning Tail Dependence is shown. Suitable indications for the practical application of the techniques outlined in the work are given, and two case studies illustrate the procedures discussed in the paper.
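A small sketch of the copula machinery: with a Gumbel copula (an Archimedean, extreme-value family of the kind discussed above), the probabilities of "OR" and "AND" hazard scenarios follow directly from C(u, v). The marginal levels and theta below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gumbel_copula(u, v, theta=2.0):
    """Gumbel copula C(u, v): Archimedean and extreme-value, with upper
    tail dependence (theta >= 1; theta = 1 is independence)."""
    return np.exp(-(((-np.log(u)) ** theta
                     + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Hypothetical marginals: u = P(X <= x), v = P(Y <= y) for two dependent
# hazard variables (say rainfall and water level), both at the 0.99 level.
u = v = 0.99

p_or = 1.0 - gumbel_copula(u, v)               # at least one exceedance
p_and = 1.0 - u - v + gumbel_copula(u, v)      # joint exceedance
p_and_indep = (1.0 - u) * (1.0 - v)            # what independence would give
```

Here the dependence multiplies the naive joint-exceedance probability by roughly a factor of 59, which is exactly why the guidelines insist on multivariate scenarios rather than treating margins independently.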
Residential electricity load decomposition method based on maximum a posteriori probability
NASA Astrophysics Data System (ADS)
Shan, Guangpu; Zhou, Heng; Liu, Song; Liu, Peng
2017-05-01
To address the problems of high computational complexity and limited accuracy in load decomposition, a load decomposition method based on maximum a posteriori probability is proposed, in which the steady-state current of each electrical appliance is chosen as the load characteristic; according to Bayes' formula, the electricity consumption information of each appliance can then be recovered at any given time. Experimental results show that the method can identify the running state of each appliance and achieves a higher decomposition accuracy. In addition, the required data can be collected by common smart meters directly available on the current market, reducing the cost of the hardware input.
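A minimal sketch of MAP load decomposition under the stated setup (steady-state currents as the load characteristic, Bayes' formula over on/off states); the appliance currents, priors, and noise model are invented for illustration:

```python
import numpy as np
from itertools import product

# Invented appliance steady-state currents (A) and prior on-probabilities;
# the measurement is total current = sum of on-appliances + Gaussian noise.
appliance_currents = np.array([0.5, 2.0, 6.5])    # e.g. lamp, fridge, heater
prior_on = np.array([0.5, 0.8, 0.2])
noise_sigma = 0.3

def map_decompose(total_current):
    """MAP on/off state: argmax_s p(total | s) p(s) over all 2^k states."""
    best_state, best_logpost = None, -np.inf
    for state in product((0, 1), repeat=len(appliance_currents)):
        s = np.array(state)
        predicted = s @ appliance_currents
        log_lik = -0.5 * ((total_current - predicted) / noise_sigma) ** 2
        log_prior = np.sum(np.where(s == 1, np.log(prior_on),
                                    np.log(1.0 - prior_on)))
        if log_lik + log_prior > best_logpost:
            best_state, best_logpost = state, log_lik + log_prior
    return best_state

state = map_decompose(2.4)   # lamp (0.5 A) + fridge (2.0 A) best explain 2.4 A
```

Exhaustive enumeration is exponential in the number of appliances, which is the computational-complexity concern the abstract alludes to; practical systems prune or factorize this search.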
Forestry inventory based on multistage sampling with probability proportional to size
NASA Technical Reports Server (NTRS)
Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.
1983-01-01
A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. The LANDSAT data, Panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
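Sampling with probability proportional to size (PPS) can be sketched as follows, with invented stand areas and stand volumes; the Hansen-Hurwitz estimator recovers the total volume from the weighted draws:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented stand areas (ha) and true timber volumes (m^3) for six units.
areas = np.array([120.0, 45.0, 300.0, 80.0, 15.0, 240.0])
volumes = np.array([4800.0, 1500.0, 13500.0, 2900.0, 400.0, 10100.0])

p = areas / areas.sum()        # PPS: selection probability proportional to size
draws = rng.choice(len(areas), size=2000, p=p)   # with-replacement PPS sample

# Hansen-Hurwitz estimator of the total volume: mean of volume/probability.
hh_estimate = np.mean(volumes[draws] / p[draws])
true_total = volumes.sum()     # 33,200 m^3
```

Because volume correlates strongly with area, dividing each observed volume by its (size-proportional) selection probability yields a low-variance unbiased estimate of the total, which is the rationale for PPS designs in forest inventory.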
NASA Astrophysics Data System (ADS)
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya
2017-10-01
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate number of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
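The FFP problem is easy to reproduce with a plain variable-time-step KMC (Gillespie-type) loop; with rates spanning six orders of magnitude, almost every step is spent on the fastest process. The rates are invented, and the throttling algorithm itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

def kmc_step(cum_rates, total):
    """One variable-time-step KMC step: choose a process with probability
    rate_i/total, advance time by an Exp(total) waiting time."""
    process = np.searchsorted(cum_rates, rng.uniform() * total)
    dt = -np.log(rng.uniform()) / total
    return process, dt

# Rates spanning six orders of magnitude: process 0 is a fast frivolous
# process (FFP); processes 1 and 2 carry the slow, rate-limiting chemistry.
rates = np.array([1e6, 1e2, 1.0])
cum_rates, total = np.cumsum(rates), rates.sum()
counts = np.zeros(3, dtype=int)
t = 0.0
for _ in range(100_000):
    p, dt = kmc_step(cum_rates, total)
    counts[p] += 1
    t += dt

ffp_fraction = counts[0] / counts.sum()   # ~0.9999: the stiffness problem
```

Throttling schemes such as SQERTSS damp the fast rates (within quasi-equilibrium constraints) so that the slow, rate-limiting events appear within feasible step budgets.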
3D model retrieval using probability density-based shape descriptors.
Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis
2009-06-01
We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories.
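A reduced sketch of the density-based idea: describe each "shape" by a KDE of feature samples evaluated on a fixed grid, and compare descriptors by an L2 distance. The real descriptors are multivariate surface features; the scalar feature here is an invented stand-in:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
grid = np.linspace(-4.0, 6.0, 200)
dx = grid[1] - grid[0]

def descriptor(feature_samples):
    """Density-based descriptor: KDE of local feature samples on a grid,
    renormalized so the descriptor integrates to one over the grid."""
    d = gaussian_kde(feature_samples)(grid)
    return d / (d.sum() * dx)

# Invented scalar stand-in for a local surface feature (e.g. curvature):
shape_a  = descriptor(rng.normal(0.0, 1.0, 1000))   # shape A
shape_a2 = descriptor(rng.normal(0.0, 1.0, 1000))   # A again, re-sampled mesh
shape_b  = descriptor(rng.normal(2.0, 1.0, 1000))   # a different shape

def l2(p, q):
    return np.sqrt(np.sum((p - q) ** 2) * dx)

dist_same, dist_diff = l2(shape_a, shape_a2), l2(shape_a, shape_b)
```

Re-sampling the same underlying feature distribution perturbs the descriptor only slightly, illustrating the insensitivity to sampling and mesh resolution claimed in the abstract.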
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2010-12-01
A cost-effective and service-differentiated provisioning strategy is very desirable to service providers so that they can offer users satisfactory services, while optimizing network resource allocation. Providing differentiated protection services to connections for surviving link failure has been extensively studied in recent years. However, the differentiated protection services for workflow-based applications, which consist of many interdependent tasks, have scarcely been studied. This paper investigates the problem of providing differentiated services for workflow-based applications in optical grid. In this paper, we develop three differentiated protection services provisioning strategies which can provide security level guarantee and network-resource optimization for workflow-based applications. The simulation demonstrates that these heuristic algorithms provide protection cost-effectively while satisfying the applications' failure probability requirements.
A generative probability model of joint label fusion for multi-atlas based brain segmentation.
Wu, Guorong; Wang, Qian; Zhang, Daoqiang; Nie, Feiping; Huang, Heng; Shen, Dinggang
2014-08-01
Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible misalignment that occurs when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on simple patch similarity, and thus do not necessarily provide an optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, with the goal of labeling each point in the target image by the most representative atlas patches that also have the largest unanimity in labeling the underlying point correctly. Specifically, a sparsity constraint is imposed upon the label fusion weights in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risk of including misleading atlas patches. Labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches correctly predicting the labels, by analyzing the correlation of their morphological error patterns as well as the labeling consensus among atlases. The patch dependencies are further recursively updated based on the latest labeling results to correct possible labeling errors, which falls within the Expectation-Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on whole-brain parcellation and hippocampus segmentation. Promising labeling results have been achieved in comparison with conventional patch-based labeling methods.
A classification scheme for edge-localized modes based on their probability distributions
Shabbir, A.; Hornung, G.; Verdoolaege, G.; Collaboration: EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB
2016-11-15
We present here an automated classification scheme which is particularly well suited to scenarios where the parameters have significant uncertainties or are stochastic quantities. To this end, the parameters are modeled with probability distributions in a metric space and classification is conducted using the notion of nearest neighbors. The presented framework is then applied to the classification of type I and type III edge-localized modes (ELMs) from a set of carbon-wall plasmas at JET. This provides a fast, standardized classification of ELM types which is expected to significantly reduce the effort of ELM experts in identifying ELM types. Further, the classification scheme is general and can be applied to various other plasma phenomena as well.
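The scheme above classifies events by nearest neighbors between probability distributions in a metric space. A simplified stand-in for that idea is sketched below, using a Hellinger distance between discrete histograms and 1-NN assignment; the reference distributions, labels, and numbers are hypothetical, not taken from the JET dataset or the paper's actual geodesic metric:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def nn_classify(sample, references, labels):
    """Assign `sample` the label of its nearest reference distribution."""
    dists = [hellinger(sample, r) for r in references]
    return labels[int(np.argmin(dists))]

# Hypothetical reference distributions for two event classes
refs = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.3, 0.6])]
labels = ["type I", "type III"]
print(nn_classify(np.array([0.6, 0.3, 0.1]), refs, labels))  # -> type I
```

Because the classifier compares whole distributions rather than point estimates, parameter uncertainty is carried through to the decision, which is the property the paper exploits.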
Finding significantly connected voxels based on histograms of connection strengths
NASA Astrophysics Data System (ADS)
Kasenburg, Niklas; Pedersen, Morten Vester; Darkner, Sune
2016-03-01
We explore a new approach for structural connectivity based segmentations of subcortical brain regions. Connectivity based segmentations are usually based on fibre connections from a seed region to predefined target regions. We present a method for finding significantly connected voxels based on the distribution of connection strengths. Paths from seed voxels to all voxels in a target region are obtained from a shortest-path tractography. For each seed voxel we approximate the distribution with a histogram of path scores. We hypothesise that the majority of estimated connections are false-positives and that their connection strength is distributed differently from true-positive connections. Therefore, an empirical null-distribution is defined for each target region as the average normalized histogram over all voxels in the seed region. Single histograms are then tested against the corresponding null-distribution and significance is determined using the false discovery rate (FDR). Segmentations are based on significantly connected voxels and their FDR. In this work we focus on the thalamus and the target regions were chosen by dividing the cortex into a prefrontal/temporal zone, motor zone, somatosensory zone and a parieto-occipital zone. The obtained segmentations consistently show a sparse number of significantly connected voxels that are located near the surface of the anterior thalamus over a population of 38 subjects.
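The significance step above (testing each seed voxel's histogram against an empirical null and controlling the false discovery rate) can be illustrated with a generic Benjamini-Hochberg procedure; the p-values below are hypothetical, and the paper's actual test statistic is not reproduced:

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg procedure: boolean mask of rejected hypotheses."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m      # step-up thresholds q*i/m
    passed = p[order] <= thresholds
    k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                        # reject the k smallest p-values
    return mask

# Hypothetical per-voxel p-values from comparing histograms to the null
print(bh_fdr([0.001, 0.009, 0.04, 0.2, 0.6]))    # first two voxels survive FDR
```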
Buffa, F M; Davidson, S E; Hunter, R D; Nahum, A E; West, C M
2001-08-01
To assess whether incorporation of measurements of surviving fraction at 2 Gy (SF(2)) and colony-forming efficiency (CFE) into a tumor control probability (tcp) model increases their prognostic significance. Measurements of SF(2) and CFE were available from a study on carcinoma of the cervix treated with radiation alone. These measurements, as well as tumor volume, dose, and treatment time, were incorporated into a Poisson tcp model (tcp(alpha,rho)). Regression analysis was performed to assess the prognostic power of tcp(alpha,rho) vs. the use of either tcp models with biologic parameters fixed to best-fit estimates (but incorporating individual dose, volume, and treatment time) or the use of SF(2) and CFE measurements alone. In a univariate regression analysis of 44 patients, tcp(alpha,rho) was a better prognostic factor for both local control and survival (p < 0.001 and p = 0.049, respectively) than SF(2) alone (p = 0.009 for local control, p = 0.29 for survival) or CFE alone (p = 0.015 for local control, p = 0.38 for survival). In multivariate analysis, tcp(alpha,rho) emerged as the most important prognostic factor for local control (p < 0.001, relative risk of 2.81). After allowing for tcp(alpha,rho), CFE was still a significant independent prognostic factor for local control, whereas SF(2) was not. The sensitivities of tcp(alpha,rho) and SF(2) as predictive tests for local control were 87% and 65%, respectively. Specificities were 70% and 77%, respectively. A Poisson tcp model incorporating individual SF(2), CFE, dose, tumor volume, and treatment time was found to be the best independent prognostic factor for local control and survival in cervical carcinoma patients.
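For orientation, a common textbook form of the Poisson tcp model assumes the number of clonogens surviving a course of 2 Gy fractions is N = ρ·V·SF2^(D/2), giving tcp = exp(−N). The sketch below uses this simplified form with made-up parameter values; it is not the study's fitted tcp(alpha,rho) model, which also incorporates treatment time:

```python
import math

def poisson_tcp(rho, volume_cm3, sf2, total_dose_gy):
    """Poisson tumor control probability assuming delivery in 2 Gy fractions:
    surviving clonogens N = rho * V * SF2**(D/2), tcp = exp(-N)."""
    surviving = rho * volume_cm3 * sf2 ** (total_dose_gy / 2.0)
    return math.exp(-surviving)

# Made-up example: 1e7 clonogens/cm^3, 50 cm^3 tumor, SF2 = 0.45, 64 Gy total
print(round(poisson_tcp(1e7, 50.0, 0.45, 64.0), 3))  # -> 0.996
```

The exponential sensitivity to SF2 is why folding individual radiosensitivity measurements into tcp can sharpen prognosis relative to SF2 alone.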
Design of Probabilistic Boolean Networks Based on Network Structure and Steady-State Probabilities.
Kobayashi, Koichi; Hiraishi, Kunihiko
2016-06-06
In this brief, we consider the problem of finding a probabilistic Boolean network (PBN) based on a network structure and desired steady-state properties. In systems biology and synthetic biology, such problems are important as an inverse problem. Using a matrix-based representation of PBNs, a solution method for this problem is proposed. The problem of finding a BN has been studied so far. In the problem of finding a PBN, we must calculate not only the Boolean functions, but also the probabilities of selecting a Boolean function and the number of candidates of the Boolean functions. Hence, the problem of finding a PBN is more difficult than that of finding a BN. The effectiveness of the proposed method is presented by numerical examples.
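In a matrix-based representation, a PBN's transition matrix is the probability-weighted mixture of the transition matrices of its candidate Boolean networks, and the steady-state distribution follows from that matrix. A minimal two-state sketch (toy matrices and selection probabilities, not the paper's inverse-design procedure):

```python
import numpy as np

# Two candidate deterministic transition matrices (one per Boolean function),
# selected with probabilities 0.7 / 0.3 -- hypothetical toy values.
A1 = np.array([[0.0, 1.0],
               [0.0, 1.0]])   # both states map to state 1
A2 = np.array([[1.0, 0.0],
               [1.0, 0.0]])   # both states map to state 0
P = 0.7 * A1 + 0.3 * A2       # PBN transition matrix (row-stochastic)

pi = np.ones(2) / 2
for _ in range(100):          # power iteration for the stationary distribution
    pi = pi @ P
print(pi.round(3))            # -> [0.3 0.7]
```

The inverse problem treated in the paper goes the other way: given a desired stationary distribution and network structure, recover the Boolean functions and their selection probabilities.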
Tissue Probability-Based Attenuation Correction for Brain PET/MR by Using SPM8
NASA Astrophysics Data System (ADS)
Teuho, J.; Linden, J.; Johansson, J.; Tuisku, J.; Tuokkola, T.; Teräs, M.
2016-10-01
Bone attenuation remains a methodological challenge in hybrid PET/MR, as bone is hard to visualize via magnetic resonance imaging (MRI). Therefore, novel methods for taking into account bone attenuation in MR-based attenuation correction (MRAC) are needed. In this study, we propose a tissue-probability-based attenuation correction (TPB-AC), which employs the commonly available neurological toolbox SPM8, to derive a subject-specific μ-map by segmentation of T1-weighted MR images. The procedures to derive a μ-map representing soft tissue, air and bone from the New Segment function in SPM8 and MATLAB are described. Visual and quantitative comparisons against CT-based attenuation correction (CTAC) data were performed using two μ-values (0.135 cm⁻¹ and 0.145 cm⁻¹) for bone. Results show improvement of the visual quality and quantitative accuracy of positron emission tomography (PET) images when the TPB-AC μ-map is used in PET/MR image reconstruction. Underestimation in PET images was decreased by an average of 5 ± 2 percent in the whole brain across all patients. In addition, the method performed well when compared to CTAC, with maximum differences (mean ± standard deviation) of −3 ± 2 percent and 2 ± 4 percent in two regions out of 28. Finally, the method is simple and computationally efficient, offering a promising platform for further development. A subject-specific MR-based μ-map can thus be derived solely from the tissue probability maps produced by the New Segment function of SPM8.
NASA Astrophysics Data System (ADS)
Mõttus, M.; Stenberg, P.
2007-12-01
Remote sensing of vegetation and modeling of canopy microclimate require information on the fractions of incident radiation reflected, transmitted and absorbed by a plant canopy. The photon recollision probability p makes it easy to calculate the amount of radiation absorbed by a vegetation canopy and to predict the spectral behavior of canopy scattering, i.e. the sum of canopy reflectance and transmittance. However, to divide the scattered radiation into reflected and transmitted fluxes, additional models are needed. To overcome this problem, we present a simple formula based on the photon recollision probability p to estimate the fraction of radiation scattered upwards by a canopy. The new semi-empirical method is tested with Monte Carlo simulations. A comparison with the analytical solution of the two-stream equation of radiative transfer in vegetation canopies is also provided. Our results indicate that the method is accurate for low to moderate leaf area index (LAI) values, and provides a reasonable approximation even at LAI=8. Finally, we present a new method to compute p using numerical radiative transfer models.
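For context, the spectral-invariant relation underlying the photon recollision probability expresses total canopy scattering (reflectance plus transmittance) in terms of the leaf single-scattering albedo ω_L as ω_c = ω_L(1−p)/(1−p·ω_L). A minimal numerical sketch with illustrative values only; the paper's new formula for the upward-scattered fraction is not reproduced here:

```python
def canopy_scattering(omega_leaf, p):
    """Total canopy scattering (reflectance + transmittance) from the leaf
    single-scattering albedo and the photon recollision probability p."""
    return omega_leaf * (1.0 - p) / (1.0 - p * omega_leaf)

# Illustrative values: omega_leaf = 0.9, p = 0.6
print(round(canopy_scattering(0.9, 0.6), 4))  # -> 0.7826
```

As p grows (denser canopies, higher LAI), more scattered photons interact with the canopy again, so total scattering drops below the leaf albedo.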
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) plays an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is effective at assessing the quality of models of hard targets. These results demonstrate that this new probability-density-distribution-based method is effective for protein single-model quality assessment and useful for protein structure prediction. Qprob is freely available as a web server at http://calla.rnet.missouri.edu/qprob/.
NASA Astrophysics Data System (ADS)
Baldauf, Tim; Heinzig, André; Trommer, Jens; Mikolajick, Thomas; Weber, Walter Michael
2017-02-01
Mechanical stress is an established and important tool of the semiconductor industry for improving the performance of modern transistors. It is well understood for the enhancement of carrier mobility, but rather unexplored for the control of the tunneling probability in injection-dominated research devices based on tunneling phenomena, such as tunnel FETs, resonant tunnel FETs and reconfigurable Schottky FETs. In this work, the effect of stress on the tunneling probability and overall transistor characteristics is studied by three-dimensional device simulations, using the example of reconfigurable silicon nanowire Schottky barrier transistors with two independently gated Schottky junctions. To this end, four different stress sources are investigated. The effects of mechanical stress on the average effective tunneling mass and on the multi-valley band structure are considered by applying deformation potential theory. The transfer characteristics of strained transistors in n- and p-configuration and the corresponding charge carrier tunneling are analyzed with respect to the current ratio between electron and hole conduction. For the implementation of these devices in complementary circuits, the mandatory current ratio of unity can be achieved by appropriate mechanical stress, either by nanowire oxidation or the application of a stressed top layer.
A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs.
Liu, Anfeng; Liu, Xiao; Long, Jun
2016-03-30
Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform that enables a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In a TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network and can substantially reduce the number of marking tuples and improve network lifetime. More importantly, a high trust node is selected to store marking tuples, which can avoid the problem of marking information being lost. Experimental results show that the total number of marking tuples can be reduced in a TAPMS scheme, thus improving network lifetime. At the same time, since the marking tuples are stored in high trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%.
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one-Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are firstly classified attending to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known different criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, variety of wind speed values and wind power curtailment.
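The model-selection step described above can be sketched generically: evaluate a Weibull mixture density and compare candidate fits by AIC and BIC. The log-likelihood values below are hypothetical, not fitted to any wind power data:

```python
import math

def weibull_pdf(x, shape, scale):
    """Two-parameter Weibull probability density."""
    return (shape / scale) * (x / scale) ** (shape - 1) * math.exp(-((x / scale) ** shape))

def mixture_pdf(x, weights, shapes, scales):
    """Density of a finite Weibull mixture."""
    return sum(w * weibull_pdf(x, k, s) for w, k, s in zip(weights, shapes, scales))

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

def bic(loglik, n_params, n_obs):
    return n_params * math.log(n_obs) - 2 * loglik

# Hypothetical fits on n = 1000 observations: a two-component mixture
# (5 free parameters) versus a single Weibull (2 parameters)
print(aic(-1290.0, 5) < aic(-1310.0, 2),
      bic(-1290.0, 5, 1000) < bic(-1310.0, 2, 1000))  # -> True True
```

Both criteria penalize the extra mixture parameters, so a multi-Weibull model is preferred only when it buys a sufficiently large gain in log-likelihood.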
Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv
2012-01-01
This paper presents the probability-density-based gradient projection (GP) of the null space of the Jacobian for a 25 degree of freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and maximize the similarity between predicted inverse kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace. The number of increments in each direction (x, y, and z) was varied from 1 to 20. Performance of the method was evaluated by finding the root mean squared (RMS) error between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets for subjects included in and excluded from the density function. The performance of the GP method for subjects included in and excluded from the density function was evaluated to test the robustness of the method. Accuracy of the GP method varied with the incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error for included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.
Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images
Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek
2017-08-24
This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of the sonar image, such as an unstable acoustic source, heavy speckle noise, low image resolution, and single-channel imagery. However, using consecutive sonar images, if the status of an object, i.e., its existence and identity (or name), is continuously evaluated by a stochastic method, the result of the recognition method is available for calculating the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark with increased detectability by an imaging sonar, exploiting the characteristics of acoustic waves, such as instability and reflection depending on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented.
Zhang, Haitao; Chen, Zewei; Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard the mobility model of an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm are verified.
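The core computation, row-normalizing a single-step transition matrix mined from sequential rules and raising it to the power n, can be sketched as follows (the counts are hypothetical):

```python
import numpy as np

# Hypothetical single-step transition counts mined from sequential rules
counts = np.array([[2.0, 6.0, 2.0],
                   [1.0, 1.0, 8.0],
                   [5.0, 4.0, 1.0]])
P = counts / counts.sum(axis=1, keepdims=True)   # row-normalized probabilities
P3 = np.linalg.matrix_power(P, 3)                # 3-step transition probabilities

# Probabilities of reaching each location in exactly 3 steps from location 0
print(P3[0].round(3))
```

This corresponds to the paper's "rough prediction"; the "accurate prediction" additionally accumulates probabilities over detailed paths rather than only matrix powers.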
NASA Astrophysics Data System (ADS)
Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi
2017-08-01
The formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, such as probabilistic thinking. Probabilistic thinking is needed by students to make decisions. Elementary school students are required to develop probabilistic thinking as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking has been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected on the basis of a test of mathematical ability for their high math ability. The subjects were given probability tasks covering sample space, the probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion, and the credibility of the data was established through time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level; that is, the boy's level of probabilistic thinking was higher than the girl's. These results could help curriculum developers in formulating probability learning goals for elementary school students, and teachers could teach probability with regard to gender differences.
2009-03-01
AFRL-RX-WP-TP-2009-4123: MICROSTRUCTURE-SENSITIVE EXTREME VALUE PROBABILITIES FOR HIGH CYCLE FATIGUE OF Ni-BASE SUPERALLOY IN100 (PREPRINT). …statistical framework to investigate the microstructure-sensitive fatigue response of the PM Ni-base superalloy IN100. To accomplish this task, we…
Reliability, failure probability, and strength of resin-based materials for CAD/CAM restorations.
Lim, Kiatlin; Yap, Adrian U-Jin; Agarwalla, Shruti Vidhawan; Tan, Keson Beng-Choon; Rosa, Vinicius
2016-01-01
This study investigated the Weibull parameters and 5% fracture probability of direct, indirect, and CAD/CAM composites. Disc-shaped (12 mm diameter x 1 mm thick) specimens were prepared for a direct composite [Z100 (ZO), 3M-ESPE], an indirect laboratory composite [Ceramage (CM), Shofu], and two CAD/CAM composites [Lava Ultimate (LU), 3M ESPE; Vita Enamic (VE), Vita Zahnfabrik] restorations (n=30 for each group). The specimens were polished and stored in distilled water for 24 hours at 37°C. Weibull parameters (m = Weibull modulus, σ0 = characteristic strength) and the flexural strength at 5% fracture probability (σ5%) were determined using a piston-on-three-balls device at 1 MPa/s in distilled water. Statistical analyses of biaxial flexural strength were performed by one-way ANOVA with Tukey's post hoc test (α=0.05) and by Pearson's correlation test. Ranking of m was: VE (19.5), LU (14.5), CM (11.7), and ZO (9.6). Ranking of σ0 (MPa) was: LU (218.1), ZO (210.4), CM (209.0), and VE (126.5). σ5% (MPa) was 177.9 for LU, 163.2 for CM, 154.7 for ZO, and 108.7 for VE. There was no significant difference in m for ZO, CM, and LU. VE presented the highest m value, significantly higher than that of ZO. For σ0 and σ5%, ZO, CM, and LU were similar but higher than VE. The strength characteristics of CAD/CAM composites vary according to their composition and microstructure. VE presented the lowest strength and the highest Weibull modulus among the materials.
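Given the reported Weibull parameters, the stress at a chosen fracture probability follows from the two-parameter Weibull model, F(σ) = 1 − exp[−(σ/σ0)^m], so σ_f = σ0·(−ln(1−f))^(1/m). A quick check against the values above:

```python
import math

def stress_at_failure_prob(sigma0, m, f=0.05):
    """Invert F(sigma) = 1 - exp(-(sigma/sigma0)**m) at failure probability f."""
    return sigma0 * (-math.log(1.0 - f)) ** (1.0 / m)

# Lava Ultimate, using the values reported above (m = 14.5, sigma0 = 218.1 MPa)
print(round(stress_at_failure_prob(218.1, 14.5), 1))  # -> 177.7, close to the reported 177.9
```

The small discrepancy is consistent with rounding of m and σ0 in the abstract; the same formula reproduces VE's σ5% (about 108.6 MPa vs the reported 108.7).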
Finite element model updating of concrete structures based on imprecise probability
NASA Astrophysics Data System (ADS)
Biswal, S.; Ramaswamy, A.
2017-09-01
Imprecise-probability-based methods are developed in this study for parameter estimation in finite element model updating of concrete structures when the measurements are imprecisely defined. Bayesian analysis using the Metropolis-Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered: (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurements only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model into the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.
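The Bayesian updating step rests on Metropolis-Hastings sampling. A minimal 1-D random-walk sketch is shown below with a toy standard-normal posterior; it illustrates only the sampler, not the paper's imprecise-probability generalization:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings sampler for a 1-D log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        # Accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy posterior: standard normal, so the post-burn-in mean should be near 0
draws = metropolis_hastings(lambda t: -0.5 * t * t, x0=3.0)
mean = sum(draws[1000:]) / len(draws[1000:])
print(round(mean, 1))
```

In model updating, `log_post` would combine the prior over the finite element parameters with the likelihood of the measured responses; the imprecise variants propagate interval-valued inputs through this same machinery.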
Fall risk probability estimation based on supervised feature learning using public fall datasets.
Koshmak, Gregory A; Linden, Maria; Loutfi, Amy
2016-08-01
Risk of falling is considered among the major threats for the elderly population and has therefore started to play an important role in modern healthcare. With the recent development of sensor technology, the number of studies dedicated to reliable fall detection systems has increased drastically. However, there is still a lack of a universal approach for evaluating the developed algorithms. In the following study we make an attempt to find publicly available fall datasets and analyze similarities among them using supervised learning. After performing a similarity assessment based on multidimensional scaling, we indicate the most representative feature vector corresponding to each specific dataset. This vector, obtained from real-life data, is subsequently deployed to estimate fall risk probabilities for a statistical fall detection model. Finally, we conclude with some observations regarding the similarity assessment results and provide suggestions towards an efficient approach for the evaluation of fall detection studies.
Fisicaro, E; Braibanti, A; Sambasiva Rao, R; Compari, C; Ghiozzi, A; Nageswara Rao, G
1998-04-01
An algorithm is proposed for the estimation of binding parameters for the interaction of biologically important macromolecules with smaller ones from electrometric titration data. The mathematical model is based on the representation of equilibria in terms of probability concepts of statistical molecular thermodynamics. The refinement of equilibrium concentrations of the components and the estimation of binding parameters (log site constant and cooperativity factor) are performed using singular value decomposition, a chemometric technique which overcomes the general obstacles due to near singularity. The present software is validated with a number of biochemical systems with varying numbers of sites and cooperativity factors. The effect of random errors of realistic magnitude in the experimental data is studied using simulated primary data for some typical systems. The safe area within which approximate binding parameters ensure convergence is reported for the non-self-starting optimization algorithms.
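Why SVD-style truncation overcomes near singularity can be shown on a tiny symmetric 2x2 system, where the eigen-decomposition coincides with the SVD. This is a generic numerical illustration under assumed values, not the paper's refinement procedure:

```python
import math

def sym_eig_2x2(a, b, c):
    # eigen-decomposition of the symmetric matrix [[a, b], [b, c]]
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    pairs = []
    for lam in (tr / 2.0 + disc, tr / 2.0 - disc):
        if abs(b) > 1e-15:
            vx, vy = b, lam - a
        else:
            vx, vy = (1.0, 0.0) if abs(a - lam) <= abs(c - lam) else (0.0, 1.0)
        n = math.hypot(vx, vy)
        pairs.append((lam, (vx / n, vy / n)))
    return pairs

def truncated_solve(a, b, c, rhs, rel_tol=1e-8):
    # pseudoinverse solve that drops near-zero singular values; this is
    # what keeps a nearly singular system from blowing up the solution
    pairs = sym_eig_2x2(a, b, c)
    smax = max(abs(lam) for lam, _ in pairs)
    x = [0.0, 0.0]
    for lam, (vx, vy) in pairs:
        if abs(lam) > rel_tol * smax:
            coeff = (vx * rhs[0] + vy * rhs[1]) / lam
            x[0] += coeff * vx
            x[1] += coeff * vy
    return x
```

For the exactly singular matrix [[1, 1], [1, 1]] with right-hand side (2, 2), the truncated solve returns the minimum-norm solution (1, 1) instead of failing.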
Probability of an Abnormal Screening PSA Result Based on Age, Race, and PSA Threshold
Espaldon, Roxanne; Kirby, Katharine A.; Fung, Kathy Z.; Hoffman, Richard M.; Powell, Adam A.; Freedland, Stephen J.; Walter, Louise C.
2014-01-01
Objective To determine the distribution of screening PSA values in older men and how different PSA thresholds affect the proportion of white, black, and Latino men who would have an abnormal screening result across advancing age groups. Methods We used linked national VA and Medicare data to determine the value of the first screening PSA test (ng/mL) of 327,284 men age 65+ who underwent PSA screening in the VA healthcare system in 2003. We calculated the proportion of men with an abnormal PSA result based on age, race, and common PSA thresholds. Results Among men age 65+, 8.4% had a PSA >4.0 ng/mL. The percentage of men with a PSA >4.0 ng/mL increased with age and was highest in black men (13.8%) versus white (8.0%) or Latino men (10.0%) (P<0.001). Combining age and race, the probability of having a PSA >4.0 ng/mL ranged from 5.1% of Latino men age 65–69 to 27.4% of black men age 85+. Raising the PSA threshold from >4.0 ng/mL to >10.0 ng/mL reclassified the greatest percentage of black men age 85+ (18.3% absolute change) and the lowest percentage of Latino men age 65–69 (4.8% absolute change) as being under the biopsy threshold (P<0.001). Conclusions Age, race, and PSA threshold together affect the pre-test probability of an abnormal screening PSA result. Based on screening PSA distributions, stopping screening among men whose PSA is <3 ng/mL means that over 80% of white and Latino men age 70+ would stop further screening, and increasing the biopsy threshold to >10 ng/mL has the greatest effect on reducing the number of older black men who will face biopsy decisions after screening. PMID:24439009
Estimation of the failure probability during EGS stimulation based on borehole data
NASA Astrophysics Data System (ADS)
Meller, C.; Kohl, Th.; Gaucher, E.
2012-04-01
In recent times the search for alternative sources of energy has been fostered by the scarcity of fossil fuels. With its ability to permanently provide electricity or heat with little emission of CO2, geothermal energy will have an important share in the energy mix of the future. Within Europe, scientists have identified many locations with conditions suitable for Enhanced Geothermal System (EGS) projects. In order to provide sufficiently high reservoir permeability, EGS require borehole stimulation prior to the installation of power plants (Gérard et al., 2006). Induced seismicity during water injection into EGS reservoirs is a factor that currently can be neither predicted nor controlled. Often, people living near EGS projects are frightened by the smaller earthquakes occurring during stimulation or injection. As this fear can lead to widespread disapproval of geothermal power plants, it is desirable to find a way to estimate the probability of fractures shearing when water is injected at a given pressure into a geothermal reservoir. This knowledge enables prediction of the mechanical behavior of a reservoir in response to a change in pore pressure conditions. In the present study, an approach for estimating the shearing probability based on statistical analyses of fracture distribution, orientation and clustering, together with their geological properties, is proposed. Based on geophysical logs of five wells in Soultz-sous-Forêts, France, and with the help of statistical tools, the Mohr criterion, and the geological and mineralogical properties of the host rock and the fracture fillings, correlations between the wells are analyzed. This is achieved with the self-written MATLAB code Fracdens, which enables us to statistically analyze the log files in different ways. With the application of a pore pressure change, the evolution of the critical pressure on the fractures can be determined. A special focus is on the clay fillings of the fractures and how they reduce
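The critical pressure idea above follows from the Mohr-Coulomb criterion: a fracture slips once the pore pressure reduces the effective normal stress enough that shear stress exceeds frictional resistance. A minimal 2-D sketch with assumed stresses, friction coefficient, and fracture angles (not the Soultz data or the Fracdens code):

```python
import math

def normal_shear_stress(sigma1, sigma3, theta_deg):
    # 2-D Mohr circle; theta is the angle between the fracture normal
    # and the sigma1 direction
    t = math.radians(theta_deg)
    sn = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(2 * t)
    tau = 0.5 * (sigma1 - sigma3) * math.sin(2 * t)
    return sn, abs(tau)

def critical_pore_pressure(sigma1, sigma3, theta_deg, mu=0.6, cohesion=0.0):
    # Mohr-Coulomb: slip when tau >= cohesion + mu * (sigma_n - p)
    sn, tau = normal_shear_stress(sigma1, sigma3, theta_deg)
    return sn - (tau - cohesion) / mu

def shearing_fraction(fracture_angles, sigma1, sigma3, pore_pressure, mu=0.6):
    # fraction of logged fractures brought to failure at a given injection pressure
    crit = [critical_pore_pressure(sigma1, sigma3, a, mu) for a in fracture_angles]
    return sum(1 for c in crit if pore_pressure >= c) / len(crit)
```

Applied to a logged fracture population, the fraction of fractures whose critical pressure is exceeded gives a simple estimate of the shearing probability at a given injection pressure.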
Rationalizing Hybrid Earthquake Probabilities
NASA Astrophysics Data System (ADS)
Gomberg, J.; Reasenberg, P.; Beeler, N.; Cocco, M.; Belardinelli, M.
2003-12-01
An approach to including stress transfer and frictional effects in estimates of the probability of failure of a single fault affected by a nearby earthquake has been suggested in Stein et al. (1997). This `hybrid' approach combines conditional probabilities, which depend on the time elapsed since the last earthquake on the affected fault, with Poissonian probabilities that account for friction and depend only on the time since the perturbing earthquake. The latter are based on the seismicity rate change model developed by Dieterich (1994) to explain the temporal behavior of aftershock sequences in terms of rate-state frictional processes. The model assumes an infinite population of nucleation sites that are near failure at the time of the perturbing earthquake. In the hybrid approach, assuming the Dieterich model can lead to significant transient increases in failure probability. We explore some of the implications of applying the Dieterich model to a single fault and its impact on the hybrid probabilities. We present two interpretations that we believe can rationalize the use of the hybrid approach. In the first, a statistical distribution representing uncertainties in elapsed and/or mean recurrence time on the fault serves as a proxy for Dieterich's population of nucleation sites. In the second, we imagine a population of nucleation patches distributed over the fault with a distribution of maturities. In both cases we find that the probability depends on the time since the last earthquake. In particular, the size of the transient probability increase may only be significant for faults already close to failure. Neglecting the maturity of a fault may lead to overestimated rate and probability increases.
Emg Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface
NASA Astrophysics Data System (ADS)
Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai
To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., root mean square (RMS) and mean absolute value (MAV), are widely used as convenient and accurate inputs for a recognition system. Their classification performance is comparable to that of advanced, computationally expensive time-scale methods, i.e., the wavelet transform. However, the signal-to-noise ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signals, i.e., Gaussian or Laplacian. The PDF of EMG signals associated with upper-limb motions is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature divided by its fluctuation, than RMS. Because RMS and MAV discriminate equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
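The SNR comparison can be reproduced on synthetic Laplacian "EMG" windows; for a Laplacian density, theory predicts the MAV estimator fluctuates relatively less than RMS. The sampler, window sizes, and scale below are assumptions, not the paper's recordings:

```python
import math
import random

def laplace_sample(rng, b=1.0):
    # inverse-CDF sampling from a zero-mean Laplacian density with scale b
    u = rng.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def snr(values):
    # the paper's definition: mean feature divided by its fluctuation
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    return m / sd

rng = random.Random(42)
windows = [[laplace_sample(rng) for _ in range(200)] for _ in range(1000)]
rms_snr = snr([math.sqrt(sum(x * x for x in w) / len(w)) for w in windows])
mav_snr = snr([sum(abs(x) for x in w) / len(w) for w in windows])
```

On Laplacian data the MAV feature should show the higher SNR of the two, consistent with the abstract's recommendation.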
NASA Astrophysics Data System (ADS)
Zhou, Chao; Su, Zhongqing; Cheng, Li
2011-12-01
The imaging technique based on guided waves has been a research focus in the field of damage detection over the years, aimed at intuitively highlighting structural damage in two- or three-dimensional images. The accuracy and efficiency of this technique substantially rely on the means of defining the field values at image pixels. In this study, a novel probability-based diagnostic imaging (PDI) approach was developed. Hybrid signal features (including temporal information, intensity of signal energy and signal correlation) were extracted from ultrasonic Lamb wave signals and integrated to retrofit the traditional way of defining field values. To acquire hybrid signal features, an active sensor network in line with pulse-echo and pitch-catch configurations was designed, supplemented with a novel concept of 'virtual sensing'. A hybrid image fusion scheme was developed to enhance the tolerance of the approach to measurement noise/uncertainties and erroneous perceptions from individual sensors. As applications, the approach was employed to identify representative damage scenarios including L-shape through-thickness crack (orientation-specific damage), polygonal damage (multi-edge damage) and multi-damage in structural plates. Results have corroborated that the developed PDI approach based on the use of hybrid signal features is capable of visualizing structural damage quantitatively, regardless of damage shape and number, by highlighting its individual edges in an easily interpretable binary image.
Yuan, Ping; Chen, Tie-Hui; Chen, Zhong-Wu; Lin, Xiu-Quan
2014-01-01
To calculate the probability of one person's lifetime death being caused by a malignant tumor and provide a theoretical basis for cancer prevention, the probability of death caused by a tumor was calculated by a probability additive formula based on an abridged life table. All age-specific mortality data were from the third retrospective investigation of death causes in China. The probability of one person's death being caused by a malignant tumor was 18.7%, as calculated by the probability additive formula. In the same way, the lifetime death probabilities caused by lung cancer, gastric cancer, liver cancer, esophageal cancer, and colorectal and anal cancer were 4.47%, 3.62%, 3.25%, 2.25%, and 1.11%, respectively. Malignant tumors are still the main cause of death in a person's lifetime, and the most common causes of cancer death were lung, gastric, liver, esophageal, and colorectal and anal cancers. Targeted cancer prevention and treatment strategies should be worked out to improve people's health and prolong life in China. The probability additive formula is a more scientific and objective method for calculating the probability of one person's lifetime death than the cumulative death probability.
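A life-table calculation of this kind can be sketched in a few lines: overall lifetime death probability multiplies survival across age groups, and a cause-specific probability adds, over age groups, the chance of surviving to the group and then dying of the cause. The two-group toy probabilities are assumed, not the Chinese life-table values:

```python
def lifetime_death_probability(q_all):
    # q_all[i]: conditional probability of dying in age group i
    surv = 1.0
    for q in q_all:
        surv *= (1.0 - q)
    return 1.0 - surv

def cause_specific_lifetime_probability(q_all, q_cause):
    # additive formula: sum over age groups of
    # P(alive at start of group) * P(dying of the cause in the group)
    p, surv = 0.0, 1.0
    for qa, qc in zip(q_all, q_cause):
        p += surv * qc
        surv *= (1.0 - qa)
    return p
```

With q_all = [0.1, 0.2] and a cause contributing [0.05, 0.1], the cause-specific lifetime probability is 0.05 + 0.9 x 0.1 = 0.14, while the all-cause lifetime death probability is 1 - 0.9 x 0.8 = 0.28.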
Sandoval, Santiago; Bertrand-Krajewski, Jean-Luc
2016-06-01
Total suspended solid (TSS) measurements in urban drainage systems are required for several reasons. Aiming to assess uncertainties in the mean TSS concentration due to the influence of sampling intake vertical position and vertical concentration gradients in a sewer pipe, two methods are proposed: a simplified method based on a theoretical vertical concentration profile (SM) and a time series grouping method (TSM). SM is based on flow rate and water depth time series. TSM requires additional TSS time series as input data. All time series are from the Chassieu urban catchment in Lyon, France (time series from 2007 with 2-min time step, 89 rainfall events). The probability of measuring a TSS value lower than the mean TSS along the vertical cross section (TSS underestimation) is about 0.88 with SM and about 0.64 with TSM. TSM shows more realistic TSS underestimation values (about 39 %) than SM (about 269 %). Interquartile ranges (IQR) over the probability values indicate that SM is more uncertain (IQR = 0.08) than TSM (IQR = 0.02). Differences between the two methods are mainly due to simplifications in SM (absence of TSS measurements). SM assumes a significant asymmetry of the TSS concentration profile along the vertical axis in the cross section. This is compatible with the distribution of TSS measurements found in the TSM approach. The methods provide insights towards an indicator of the measurement performance and representativeness for a TSS sampling protocol.
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing identification ability. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
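The core of a binomial spectrum-match score is the chance that k or more of n fragment peaks match by accident, given a per-peak random match probability. This generic sketch (with an assumed random-match probability, and none of ProVerB's intensity weighting) illustrates the idea:

```python
import math

def binom_sf(k, n, p):
    # survival function P(X >= k) for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

def match_score(matched, total_fragments, p_random=0.05):
    # -log10 of the probability that >= `matched` peaks match by chance;
    # higher scores mean a match less likely to be accidental
    prob = max(binom_sf(matched, total_fragments, p_random), 1e-300)
    return -math.log10(prob)
```

More matched peaks out of the same number of predicted fragments give a sharply higher score, which is what lets such a model rank candidate peptides.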
Estimates of EPSP amplitude based on changes in motoneuron discharge rate and probability.
Powers, Randall K; Türker, K S
2010-10-01
When motor units are discharging tonically, transient excitatory synaptic inputs produce an increase in the probability of spike occurrence and also increase the instantaneous discharge rate. Several researchers have proposed that these induced changes in discharge rate and probability can be used to estimate the amplitude of the underlying excitatory post-synaptic potential (EPSP). We tested two different methods of estimating EPSP amplitude by comparing the amplitude of simulated EPSPs with their effects on the discharge of rat hypoglossal motoneurons recorded in an in vitro brainstem slice preparation. The first estimation method (simplified-trajectory method) is based on the assumptions that the membrane potential trajectory between spikes can be approximated by a 10 mV post-spike hyperpolarization followed by a linear rise to the next spike, and that EPSPs sum linearly with this trajectory. We hypothesized that this estimation method would not be accurate due to interspike variations in membrane conductance and firing threshold that are not included in the model, and that an alternative method based on estimating the effective distance to threshold would provide more accurate estimates of EPSP amplitude. This second method (distance-to-threshold method) uses interspike interval statistics to estimate the effective distance to threshold throughout the interspike interval and incorporates this distance-to-threshold trajectory into a threshold-crossing model. We found that the first method systematically overestimated the amplitude of small (<5 mV) EPSPs and underestimated the amplitude of large (>5 mV) EPSPs. For large EPSPs, the degree of underestimation increased with increasing background discharge rate. Estimates based on the second method were more accurate for small EPSPs than those based on the first model, but estimation errors were still large for large EPSPs. These errors were likely due to two factors: (1) the distance to threshold can only be
A generic probability based algorithm to derive regional patterns of crops in time and space
NASA Astrophysics Data System (ADS)
Wattenbach, Martin; Oijen, Marcel v.; Leip, Adrian; Hutchings, Nick; Balkovic, Juraj; Smith, Pete
2013-04-01
Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influencing soil erosion, and substantially contributing to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to the site conditions, the economic boundary settings, and the preferences of individual farmers. However, at a given point in time the pattern of crops in a landscape is determined not only by environmental and socioeconomic conditions but also by the compatibility with the crops grown in previous years on the current field and its surrounding cropping area. Crop compatibility is driven by factors like pests and diseases, crop-driven changes in soil structure, and the timing of cultivation steps. Given these effects of crops on the biogeochemical cycle and their interdependence with the mentioned boundary conditions, there is a demand in the regional and global modelling community to account for these regional patterns. Here we present a Bayesian crop distribution generator algorithm that calculates the combined and conditional probability for a crop to appear in time and space using sparse and disparate information. The input information used to define the most probable crop per year and grid cell is based on combined probabilities derived from a crop transition matrix representing good agricultural practice, crop-specific soil suitability derived from the European soil database, and statistical information about harvested area from the Eurostat database. The reported Eurostat crop area also provides the target proportion to be matched by the algorithm at the level of administrative units (Nomenclature des Unités Territoriales Statistiques - NUTS). The algorithm is applied to the EU27 to derive regional spatial and
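Combining a rotation (transition) matrix with soil suitability into a conditional crop probability can be sketched as a product of the two weights, normalized per grid cell. The two-crop transition matrix and uniform suitabilities are assumed toy inputs, not the Eurostat or soil-database values:

```python
def next_crop_probabilities(prev_crop, transition, suitability):
    # combined, conditional probability: P(crop | prev) proportional to
    # T[prev][crop] * S[crop]
    weights = {c: transition[prev_crop].get(c, 0.0) * suitability.get(c, 0.0)
               for c in suitability}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

def rotate(start, years, transition, suitability):
    # pick the most probable crop per year for one grid cell
    seq = [start]
    for _ in range(years - 1):
        probs = next_crop_probabilities(seq[-1], transition, suitability)
        seq.append(max(probs, key=probs.get))
    return seq
```

With a matrix that discourages growing the same crop twice, the generator naturally produces an alternating rotation, which is the "good agricultural practice" behaviour the transition matrix encodes.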
Kukla, G.; Gavin, J.
1994-05-01
This report was prepared at the Lamont-Doherty Geological Observatory of Columbia University at Palisades, New York, under subcontract to Pacific Northwest Laboratory; it is part of a larger project of global climate studies which supports site characterization work required for the selection of a potential high-level nuclear waste repository and forms part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work under the PASS Program is currently focusing on the proposed site at Yucca Mountain, Nevada, and is under the overall direction of the Yucca Mountain Project Office, US Department of Energy, Las Vegas, Nevada. The final results of the PNL project will provide input to global atmospheric models designed to test specific climate scenarios which will be used in the site-specific modeling work of others. The primary purpose of the data bases compiled and of the astronomic predictive models is to aid in the estimation of the probabilities of future climate states. The results will be used by two other teams working on the global climate study under contract to PNL. They are located at the University of Maine in Orono, Maine, and the Applied Research Corporation in College Station, Texas. This report presents the results of the third year's work on the global climate change models and the data bases describing past climates.
An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor.
Xu, He; Ding, Ye; Li, Peng; Wang, Ruchuan; Li, Yizhu
2017-08-05
The Global Positioning System (GPS) is widely used in outdoor environmental positioning. However, GPS cannot support indoor positioning because there is no usable signal in an indoor environment. Nowadays, there are many situations which require indoor positioning, such as searching for a book in a library, looking for luggage in an airport, emergency navigation for fire alarms, robot location, etc. Many technologies, such as ultrasonics, sensors, Bluetooth, WiFi, magnetic fields, Radio Frequency Identification (RFID), etc., are used to perform indoor positioning. Compared with other technologies, RFID used in indoor positioning is more cost- and energy-efficient. The traditional RFID indoor positioning algorithm LANDMARC utilizes a Received Signal Strength (RSS) indicator to track objects. However, the RSS value is easily affected by environmental noise and other interference. In this paper, our purpose is to reduce the location fluctuation and error caused by multipath and environmental interference in LANDMARC. We propose a novel indoor positioning algorithm based on Bayesian probability and K-Nearest Neighbor (BKNN). The experimental results show that the Gaussian filter can filter out abnormal RSS values. The proposed BKNN algorithm has the smallest location error compared with the Gaussian-based algorithm, LANDMARC and an improved KNN algorithm. The average error in location estimation is about 15 cm using our method.
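The two building blocks named above, outlier filtering of RSS readings and a KNN position estimate weighted by RSS distance, can be sketched as follows. The reference-tag layout and RSS numbers are assumed for illustration; this is a LANDMARC-style KNN stand-in, not the paper's full Bayesian formulation:

```python
import math

def gaussian_filter(rss_values, k=1.0):
    # keep only readings within k standard deviations of the mean, then
    # average; a simple stand-in for Gaussian filtering of abnormal RSS
    m = sum(rss_values) / len(rss_values)
    sd = math.sqrt(sum((r - m) ** 2 for r in rss_values) / len(rss_values))
    kept = [r for r in rss_values if abs(r - m) <= k * sd] or rss_values
    return sum(kept) / len(kept)

def knn_position(target_rss, reference_tags, k=3):
    # reference_tags: (x, y, rss_vector) per reference tag; estimate the
    # target position as the inverse-distance-weighted mean of the k
    # reference tags whose RSS vectors are closest to the target's
    ranked = sorted(
        (math.dist(target_rss, vec), x, y) for x, y, vec in reference_tags
    )[:k]
    weights = [1.0 / (d + 1e-9) for d, _, _ in ranked]
    total = sum(weights)
    x_est = sum(w * x for w, (_, x, _) in zip(weights, ranked)) / total
    y_est = sum(w * y for w, (_, _, y) in zip(weights, ranked)) / total
    return x_est, y_est
```

A reading far from the cluster of repeated RSS measurements is dropped before averaging, and the position estimate is then interpolated among the best-matching reference tags.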
SPOTIN, Adel; EGHTEDAR, Sanaz TAGHIZADEH; SHAHBAZI, Abbas; SALEHPOUR, Asghar; SARAFRAZ, Seddigheh; SHARIATZADEH, Seyyed Ali; MAHAMI-OSKOUEI, Mahmoud
2016-01-01
Background: The aim of this study was to identify Trichomonas vaginalis strains/haplotypes based on their probable variations in asymptomatic patients referred to Tabriz health centers, northwestern Iran. Methods: Samples were taken from 50 women suspected of T. vaginalis infection in northwestern Iran. The obtained samples were smeared and cultured. Fifty DNA samples were extracted, amplified, and identified by nested polymerase chain reaction and PCR-RFLP of the actin gene using two endonuclease enzymes, MseI and RsaI. For confirmation, the amplicons of the actin gene were directly sequenced in order to identify the strains/haplotypes. Results: PCR-RFLP patterns, sequencing and phylogenetic analyses definitively revealed the presence of the G (n=22; 73.4%) and E (n=8; 26.6%) strains. Multiple-alignment findings for genotype G showed five haplotypes and two amino acid substitutions in codons 192 and 211, although no remarkable unique haplotype was found in genotype E. Conclusion: The accurate identification of T. vaginalis strains based on discrimination of their unknown haplotypes, particularly those that affect protein translation, should be considered in assessing parasite status, drug resistance, mixed infection with HIV, and the monitoring of asymptomatic trichomoniasis in the region. PMID:28127362
Anderson, Chad V; Fuglevand, Andrew J
2008-07-01
Functional electrical stimulation (FES) involves artificial activation of muscles with implanted electrodes to restore motor function in paralyzed individuals. The range of motor behaviors that can be generated by FES, however, is limited to a small set of preprogrammed movements such as hand grasp and release. A broader range of movements has not been implemented because of the substantial difficulty associated with identifying the patterns of muscle stimulation needed to elicit specified movements. To overcome this limitation in controlling FES systems, we used probabilistic methods to estimate the levels of muscle activity in the human arm during a wide range of free movements based on kinematic information of the upper limb. Conditional probability distributions were generated based on hand kinematics and associated surface electromyographic (EMG) signals from 12 arm muscles recorded during a training task involving random movements of the arm in one subject. These distributions were then used to predict in four other subjects the patterns of muscle activity associated with eight different movement tasks. On average, about 40% of the variance in the actual EMG signals could be accounted for in the predicted EMG signals. These results suggest that probabilistic methods ultimately might be used to predict the patterns of muscle stimulation needed to produce a wide array of desired movements in paralyzed individuals with FES.
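The conditional-probability mapping from kinematics to muscle activity described above can be sketched, in its simplest form, as the mean EMG level conditioned on a discretized kinematic variable. The bin count, ranges, and toy training pairs are assumptions, far simpler than the paper's multivariate distributions:

```python
def bin_index(k, n_bins, lo, hi):
    # discretize a kinematic value into one of n_bins equal-width bins
    return max(0, min(int((k - lo) / (hi - lo) * n_bins), n_bins - 1))

def fit_conditional_means(kin, emg, n_bins=5, lo=0.0, hi=1.0):
    # mean EMG per kinematic bin: a crude stand-in for estimating
    # P(EMG | kinematics) from training data
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for k, e in zip(kin, emg):
        b = bin_index(k, n_bins, lo, hi)
        sums[b] += e
        counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

def predict_emg(model, k, lo=0.0, hi=1.0):
    # predict the expected EMG level for a new kinematic value
    return model[bin_index(k, len(model), lo, hi)]
```

Trained on hand-kinematics/EMG pairs, the lookup returns the expected muscle activity for unseen kinematic states, the same predict-from-kinematics direction the FES application needs.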
An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor
Ding, Ye; Li, Peng; Wang, Ruchuan; Li, Yizhu
2017-01-01
The Global Positioning System (GPS) is widely used in outdoor environmental positioning. However, GPS cannot support indoor positioning because there is no usable signal in an indoor environment. Nowadays, there are many situations which require indoor positioning, such as searching for a book in a library, looking for luggage in an airport, emergency navigation for fire alarms, robot location, etc. Many technologies, such as ultrasonics, sensors, Bluetooth, WiFi, magnetic fields, Radio Frequency Identification (RFID), etc., are used to perform indoor positioning. Compared with other technologies, RFID used in indoor positioning is more cost- and energy-efficient. The traditional RFID indoor positioning algorithm LANDMARC utilizes a Received Signal Strength (RSS) indicator to track objects. However, the RSS value is easily affected by environmental noise and other interference. In this paper, our purpose is to reduce the location fluctuation and error caused by multipath and environmental interference in LANDMARC. We propose a novel indoor positioning algorithm based on Bayesian probability and K-Nearest Neighbor (BKNN). The experimental results show that the Gaussian filter can filter out abnormal RSS values. The proposed BKNN algorithm has the smallest location error compared with the Gaussian-based algorithm, LANDMARC and an improved KNN algorithm. The average error in location estimation is about 15 cm using our method. PMID:28783073
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variabilities when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions and defined in the context of data quality assessment: a global probabilistic deviation metric and a source probabilistic outlyingness metric. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of the normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the source PDFs. The metrics have been evaluated and demonstrated correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
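The Jensen-Shannon distance underlying those metrics is computable in a few lines for discrete PDFs. The per-source "outlyingness" below is simplified to the distance from each source to the pooled average distribution, an assumption standing in for the paper's latent central distribution and simplex projection:

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence in bits (0 log 0 terms are skipped)
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_distance(p, q):
    # Jensen-Shannon distance: square root of the JS divergence,
    # bounded in [0, 1] when logs are base 2
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def source_outlyingness(sources):
    # distance of each source PDF to the pooled (average) distribution;
    # a simplified proxy for the latent-central-distribution metric
    n = len(sources)
    central = [sum(col) / n for col in zip(*sources)]
    return [js_distance(p, central) for p in sources]
```

Identical sources score zero, and two disjoint distributions reach the maximum distance of 1, which is what makes the metric a bounded degree of dissimilarity.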
Some considerations on the definition of risk based on concepts of systems theory and probability.
Andretta, Massimo
2014-07-01
The concept of risk has been applied in many fields of modern science and technology. Despite its successes in many applicative fields, there is still no well-established vision or universally accepted definition of the principles and fundamental concepts of the risk assessment discipline. As emphasized recently, the risk fields suffer from a lack of clarity about their scientific bases, which would define, in a unique theoretical framework, the general concepts used in the different areas of application. The aim of this article is to suggest another perspective on the definition of risk that could be applied to and, in a certain sense, generalize some of the previously known definitions (at least in the fields of technical and scientific applications). Drawing on my experience of risk assessment in different applicative situations (particularly risk estimation for major industrial accidents, and health and ecological risk assessment for contaminated sites), I revise some general and foundational concepts of risk analysis in as consistent a manner as possible from the axiomatic/deductive point of view. My proposal is based on the fundamental concepts of systems theory and of probability. In this way, I try to frame, in a single, broad, and general theoretical context, some fundamental concepts and principles applicable in many different fields of risk assessment. I hope that this article will contribute to the revitalization and stimulation of useful discussions and new insights into the key issues and theoretical foundations of the risk assessment disciplines.
Experience-Based Probabilities Modulate Expectations in a Gender-Coded Artificial Language
Öttl, Anton; Behne, Dawn M.
2016-01-01
The current study combines artificial language learning with visual world eyetracking to investigate acquisition of representations associating spoken words and visual referents using morphologically complex pseudowords. Pseudowords were constructed to consistently encode referential gender by means of suffixation for a set of imaginary figures that could be either male or female. During training, the frequency of exposure to pseudowords and their imaginary figure referents were manipulated such that a given word and its referent would be more likely to occur in either the masculine form or the feminine form, or both forms would be equally likely. Results show that these experience-based probabilities affect the formation of new representations to the extent that participants were faster at recognizing a referent whose gender was consistent with the induced expectation than a referent whose gender was inconsistent with this expectation. Disambiguating gender information available from the suffix did not mask the induced expectations. Eyetracking data provide additional evidence that such expectations surface during online lexical processing. Taken together, these findings indicate that experience-based information is accessible during the earliest stages of processing, and are consistent with the view that language comprehension depends on the activation of perceptual memory traces. PMID:27602009
Gully, J.R.; Baird, R.B.; Markle, P.J.; Bottomley, J.P.
2000-01-01
A methodology is described that incorporates the intra- and intertest variability and the biological effect of bioassay data in evaluating the toxicity of single and multiple tests for regulatory decision-making purposes. The single- and multiple-test regulatory decision probabilities were determined from t values (n − 1 degrees of freedom, one-tailed) derived from the estimated biological effect and the associated standard error at the critical sample concentration. Single-test regulatory decision probabilities below the selected minimum regulatory decision probability identify individual tests as noncompliant. A multiple-test regulatory decision probability is determined by combining the regulatory decision probabilities of a series of single tests. A multiple-test regulatory decision probability below the multiple-test minimum identifies groups of tests in which the magnitude and persistence of the toxicity is sufficient to be considered noncompliant or to require enforcement action. Regulatory decision probabilities derived from the t distribution were compared with results based on standard and bioequivalence hypothesis tests, using single- and multiple-concentration toxicity test data from an actual national pollutant discharge elimination system permit. The probability-based approach incorporated the precision of the effect estimate into regulatory decisions at a fixed level of effect. Also, probability-based interpretation of toxicity tests provides an incentive to laboratories to produce, and permit holders to use, high-quality, precise data, particularly when multiple tests are used in regulatory decisions. These results are contrasted with standard and bioequivalence hypothesis tests, in which the intratest precision is a determining factor in setting the biological effect used for regulatory decisions.
Lorz, C; Fürst, C; Galic, Z; Matijasic, D; Podrazky, V; Potocic, N; Simoncic, P; Strauch, M; Vacik, H; Makeschin, F
2010-12-01
We assessed the probability of three major natural hazards--windthrow, drought, and forest fire--for Central and South-Eastern European forests; these hazards are major threats to the provision of forest goods and ecosystem services. In addition, we analyzed their spatial distribution and the implications for a future-oriented management of forested landscapes. For estimating the probability of windthrow, we used rooting depth and average wind speed. Probabilities of drought and fire were calculated from the climatic and total water balance during the growing season. As an approximation to climate change scenarios, we used a simplified approach with a general increase of pET by 20%. Monitoring data from the pan-European forest crown condition program and observed burnt areas and hot spots from the European Forest Fire Information System were used to test the plausibility of the probability maps. Regions with high probabilities of natural hazards are identified, and management strategies to minimize these probabilities are discussed. We suggest future research should focus on (i) estimating probabilities using process-based models (including sensitivity analysis), (ii) defining probability in terms of economic loss, (iii) including biotic hazards, (iv) using more detailed data sets on natural hazards, forest inventories and climate change scenarios, and (v) developing a framework of adaptive risk management.
Singh, Nagendra Pratap; Srivastava, Rajeev
2016-06-01
Retinal blood vessel segmentation is a prominent task for the diagnosis of various retinal pathologies such as hypertension, diabetes, glaucoma, etc. In this paper, a novel matched filter approach with the Gumbel probability distribution function as its kernel is introduced to improve the performance of retinal blood vessel segmentation. Before applying the proposed matched filter, the input retinal images are pre-processed. During the pre-processing stage, principal component analysis (PCA) based gray-scale conversion followed by contrast-limited adaptive histogram equalization (CLAHE) is applied for better enhancement of the retinal image. After that, exhaustive experiments were conducted to select appropriate parameter values for the design of the new matched filter. The post-processing steps after applying the proposed matched filter include entropy-based optimal thresholding and length filtering to obtain the segmented image. For evaluating the performance of the proposed approach, the quantitative performance measures average accuracy, average true positive rate (ATPR), and average false positive rate (AFPR) are calculated. The respective values of these measures are 0.9522, 0.7594, 0.0292 for the DRIVE data set and 0.9270, 0.7939, 0.0624 for the STARE data set. To justify the effectiveness of the proposed approach, the receiver operating characteristic (ROC) curve is plotted and the average area under the curve (AUC) is calculated. The average AUC for the DRIVE and STARE data sets is 0.9287 and 0.9140, respectively. The obtained experimental results confirm that the proposed approach performs better than other prominent Gaussian-distribution- and Cauchy-PDF-based matched filter approaches. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
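The matched-filter idea described above can be sketched as follows: sample the Gumbel PDF along a cross-section and subtract the mean so the kernel has zero DC response. The kernel length and the Gumbel parameters mu and beta here are illustrative placeholders, not the values tuned experimentally in the study.

```python
import math

def gumbel_pdf(x, mu, beta):
    """Gumbel probability density function with location mu and scale beta."""
    z = (x - mu) / beta
    return math.exp(-(z + math.exp(-z))) / beta

def matched_filter_kernel(length=15, mu=0.0, beta=2.0):
    """Sample the Gumbel PDF on a symmetric integer grid and subtract
    the mean so the kernel has zero response to constant background,
    as matched-filter kernels for vessel cross-sections typically require."""
    xs = [i - length // 2 for i in range(length)]
    weights = [gumbel_pdf(x, mu, beta) for x in xs]
    mean_w = sum(weights) / len(weights)
    return [w - mean_w for w in weights]

kernel = matched_filter_kernel()
```

In a full pipeline this one-dimensional kernel would be rotated over a set of orientations and convolved with the pre-processed green-channel image, with the maximum response retained per pixel.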
NASA Astrophysics Data System (ADS)
Nakamura, Kazuhiro; Yamamoto, Masatoshi; Takagi, Kazuyoshi; Takagi, Naofumi
In this paper, a fast and memory-efficient VLSI architecture for output probability computations of continuous Hidden Markov Models (HMMs) is presented. These computations are the most time-consuming part of HMM-based recognition systems. High-speed VLSI architectures with small registers and low-power dissipation are required for the development of mobile embedded systems with capable human interfaces. We demonstrate store-based block parallel processing (StoreBPP) for output probability computations and present a VLSI architecture that supports it. When the number of HMM states is adequate for accurate recognition, compared with conventional stream-based block parallel processing (StreamBPP) architectures, the proposed architecture requires fewer registers and processing elements and less processing time. The processing elements used in the StreamBPP architecture are identical to those used in the StoreBPP architecture. From a VLSI architectural viewpoint, a comparison shows the efficiency of the proposed architecture through efficient use of registers for storing input feature vectors and intermediate results during computation.
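The output probability that dominates the runtime of such recognizers is, for a continuous HMM state, a Gaussian-mixture likelihood. A minimal software sketch of that computation (diagonal covariances, log-sum-exp for stability) is shown below; the hardware architecture in the paper parallelizes exactly this kind of per-state evaluation, but the code is an illustrative reference model, not the VLSI design.

```python
import math

def log_gaussian_diag(o, mean, var):
    """log N(o; mean, diag(var)) for one diagonal-covariance component."""
    return sum(-0.5 * (math.log(2.0 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(o, mean, var))

def log_output_probability(o, weights, means, variances):
    """log b_j(o) = log sum_m w_m N(o; mu_m, Sigma_m), computed with
    the log-sum-exp trick to avoid underflow."""
    logs = [math.log(w) + log_gaussian_diag(o, m, v)
            for w, m, v in zip(weights, means, variances)]
    top = max(logs)
    return top + math.log(sum(math.exp(l - top) for l in logs))
```

A recognizer evaluates this quantity for every state and every input feature vector, which is why register-efficient blocking of the feature vectors (as in the proposed StoreBPP scheme) matters.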
NASA Astrophysics Data System (ADS)
Geiger, D.; Schrezenmeier, I.; Roos, M.; Neckernuss, T.; Lehn, M.; Marti, O.
2017-05-01
We present a method to detect adhesive forces of nano particles by analysis of the distribution of measured lateral forces. The measurement protocol is suitable for all types of atomic force microscopes with a lateral force channel. Lateral forces are measured, in constant normal force contact mode, by scanning substrates decorated with nano beads. Using probability theory, geometry-based measurement errors are compensated for and the real adhesion force is determined within a given confidence interval. The theoretical model can be adapted for particles with arbitrary shape and distribution of adhesion forces. It is applied to the adhesion problem of spherical particles with a Gaussian distribution of adhesion forces. We analyze the measured force distribution qualitatively and quantitatively. The theory predicts a systematic underestimation of the mean value of any particle adhesion measurement done by lateral pushing. Real measurement data of 50 nm diameter silica nano beads on a silicon substrate are used to test the theoretical model for plausibility by means of information theory.
An imprecise probability approach for squeal instability analysis based on evidence theory
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-01-01
An imprecise probability approach based on evidence theory is proposed for squeal instability analysis of uncertain disc brakes in this paper. First, the squeal instability of the finite element (FE) model of a disc brake is investigated and its dominant unstable eigenvalue is detected by running two typical numerical simulations, i.e., complex eigenvalue analysis (CEA) and transient dynamical analysis. Next, the uncertainty mainly caused by contact and friction is taken into account and some key parameters of the brake are described as uncertain parameters. These uncertain parameters usually involve imprecise data such as incomplete or conflicting information. Finally, a squeal instability analysis model considering imprecise uncertainty is established by integrating evidence theory, Taylor expansion, subinterval analysis and a surrogate model. In the proposed analysis model, the uncertain parameters with imprecise data are treated as evidence variables, and the belief measure and plausibility measure are employed to evaluate system squeal instability. The effectiveness of the proposed approach is demonstrated by numerical examples, and some interesting observations and conclusions are summarized from the analyses and discussions. The proposed approach is generally limited to squeal problems without too many investigated parameters. It can be considered a potential method for squeal instability analysis, acting as a first step towards reducing the squeal noise of uncertain brakes with imprecise information.
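The belief and plausibility measures used above have a compact definition in Dempster-Shafer evidence theory: for a basic probability assignment m over focal elements, Bel(A) sums the mass of elements contained in A and Pl(A) sums the mass of elements intersecting A. The sketch below illustrates the bracketing [Bel, Pl] with a hypothetical body of evidence; the outcome labels and mass values are invented for illustration, not taken from the paper.

```python
def belief(A, bpa):
    """Bel(A): total mass of focal elements entirely contained in A."""
    return sum(m for B, m in bpa.items() if B <= A)

def plausibility(A, bpa):
    """Pl(A): total mass of focal elements that intersect A."""
    return sum(m for B, m in bpa.items() if B & A)

# Hypothetical basic probability assignment over brake outcomes.
bpa = {
    frozenset({"unstable"}): 0.3,
    frozenset({"marginal", "unstable"}): 0.4,
    frozenset({"stable", "marginal", "unstable"}): 0.3,
}
A = frozenset({"unstable"})
bel, pl = belief(A, bpa), plausibility(A, bpa)  # [Bel, Pl] brackets P(A)
```

The interval [Bel(A), Pl(A)] is exactly the imprecise-probability statement the approach delivers: with conflicting or incomplete evidence, instability cannot be assigned a single probability, only bounds.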
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km3 that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.
Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2
MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.
1999-11-01
This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model--exponential or Weibull--is fit.
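The shifted-exponential tail fit described above is simple enough to sketch: the maximum-likelihood scale is the mean exceedance over the threshold, and a Poisson model of threshold crossings converts the fitted tail into an annual-maximum distribution. The peak values, threshold, and exceedance rate below are hypothetical, and the annual-maximum construction is one standard choice, not necessarily the exact one coded in FITS.

```python
import math

def fit_shifted_exponential(data, x_low):
    """Maximum-likelihood scale of an exponential tail above x_low:
    F(x) = 1 - exp(-(x - x_low) / scale) for x >= x_low."""
    exceedances = [x - x_low for x in data if x > x_low]
    return sum(exceedances) / len(exceedances)

def annual_max_cdf(x, x_low, scale, events_per_year):
    """CDF of the annual maximum under a Poisson model of threshold
    exceedances: the annual maximum is below x when no exceedance
    during the year goes above x."""
    if x < x_low:
        return 0.0
    tail = math.exp(-(x - x_low) / scale)
    return math.exp(-events_per_year * tail)

# Hypothetical response peaks; only values above the threshold enter the fit.
peaks = [1.0, 2.0, 3.0, 11.0, 13.0, 15.0]
scale = fit_shifted_exponential(peaks, x_low=10.0)
```

Choosing the threshold trades bias against variance: a higher χ_low isolates the tail behaviour of interest but leaves fewer exceedances to estimate the scale from.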
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since lithium-ion batteries are assembled in large numbers into packs and are complex electrochemical devices, their monitoring and safety are key issues for the application of battery technology. An accurate estimation of battery remaining capacity is crucial for optimization of vehicle control, for preventing the battery from over-charging and over-discharging, and for ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an n-th order RC equivalent circuit model is employed in combination with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
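The building block of such estimators, a first-order RC equivalent circuit with coulomb-counting SOC, can be sketched as follows. The open-circuit-voltage curve, cell parameters, and current profile here are invented placeholders; a real estimator would fit them to the cell and wrap this model in an adaptive filter rather than run it open-loop.

```python
import math

def simulate_cell(current, dt, capacity_ah, r0, r1, c1, soc0=1.0):
    """First-order RC equivalent-circuit model with coulomb-counting
    SOC. Discharge current is positive, in amperes; dt is in seconds."""
    def ocv(soc):
        return 3.0 + 1.2 * soc  # hypothetical linear open-circuit voltage

    soc, v_rc, terminal = soc0, 0.0, []
    tau = r1 * c1
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)        # coulomb counting
        decay = math.exp(-dt / tau)
        v_rc = v_rc * decay + i * r1 * (1.0 - decay)  # RC polarization voltage
        terminal.append(ocv(soc) - i * r0 - v_rc)     # terminal voltage
    return terminal, soc

v, soc = simulate_cell([2.0] * 60, dt=1.0, capacity_ah=2.0,
                       r0=0.05, r1=0.02, c1=1000.0)
```

Comparing the predicted terminal voltage against the measured one is what lets an adaptive estimator correct the coulomb-counted SOC, which otherwise drifts with current-sensor bias.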
Dalli, Deniz; Wilm, Andreas; Mainz, Indra; Steger, Gerhard
2006-07-01
Alignment of RNA has a wide range of applications, for example in phylogeny inference, consensus structure prediction and homology searches. Yet aligning structural or non-coding RNAs (ncRNAs) correctly is notoriously difficult as these RNA sequences may evolve by compensatory mutations, which maintain base pairing but destroy sequence homology. Ideally, alignment programs would take RNA structure into account. The Sankoff algorithm for the simultaneous solution of RNA structure prediction and RNA sequence alignment was proposed 20 years ago but suffers from its exponential complexity. A number of programs implement lightweight versions of the Sankoff algorithm by restricting its application to a limited type of structure and/or only pairwise alignment. Thus, despite recent advances, the proper alignment of multiple structural RNA sequences remains a problem. Here we present StrAl, a heuristic method for alignment of ncRNA that reduces sequence-structure alignment to a two-dimensional problem similar to standard multiple sequence alignment. The scoring function takes into account sequence similarity as well as up- and downstream pairing probability. To test the robustness of the algorithm and the performance of the program, we scored alignments produced by StrAl against a large set of published reference alignments. The quality of alignments predicted by StrAl is far better than that obtained by standard sequence alignment programs, especially when sequence homologies drop below approximately 65%; nevertheless StrAl's runtime is comparable to that of ClustalW.
Micro-object motion tracking based on the probability hypothesis density particle tracker.
Shi, Chunmei; Zhao, Lingling; Wang, Junjie; Zhang, Chiping; Su, Xiaohong; Ma, Peijun
2016-04-01
Tracking micro-objects in noisy microscopy image sequences is important for the analysis of dynamic processes in biological objects. In this paper, an automated tracking framework is proposed to extract the trajectories of micro-objects. This framework uses a probability hypothesis density particle filtering (PF-PHD) tracker to implement recursive state estimation and trajectory association. In order to increase the efficiency of this approach, an elliptical target model is presented to describe the micro-objects using shape parameters instead of point-like targets, which may cause inaccurate tracking. A novel likelihood function, not only covering the spatiotemporal distance but also dealing with a geometric shape function based on the Mahalanobis norm, is proposed to improve the accuracy of particle weights in the update process of the PF-PHD tracker. Using this framework, a large number of tracks is obtained. The experiments are performed on simulated data of microtubule movements and real mouse stem cells. We compare the PF-PHD tracker with the nearest neighbor method and the multiple hypothesis tracking method. Our PF-PHD tracker can simultaneously track hundreds of micro-objects in the microscopy image sequence.
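The Mahalanobis-norm likelihood at the heart of the particle-weight update can be sketched in two dimensions, where the covariance inverts in closed form. This is a simplified stand-in for the paper's combined spatial/shape term, intended only to show how the norm turns into a particle weight.

```python
import math

def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance for a 2-D measurement, with the
    2x2 covariance inverted in closed form."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    # inverse covariance is [[d, -b], [-c, a]] / det
    return (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det

def likelihood(x, mean, cov):
    """Gaussian measurement likelihood usable as a particle weight:
    particles whose predicted state is close to the measurement in
    the Mahalanobis sense receive exponentially larger weight."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return math.exp(-0.5 * mahalanobis2(x, mean, cov)) / (2.0 * math.pi * math.sqrt(det))
```

In the full tracker the covariance would encode the elliptical shape parameters of each micro-object, so that both position and orientation mismatches lower the weight.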
SAR amplitude probability density function estimation based on a generalized Gaussian model.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B
2006-06-01
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena.
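The method of log-cumulants rests on a simple empirical step: estimate the first- and second-order cumulants of ln(x) from the amplitude samples, then solve the model-specific MoLC equations for the distribution parameters. The estimation step can be sketched as follows; the sample values are illustrative, not real SAR data, and the parameter-solving step (which depends on the chosen GG model) is omitted.

```python
import math

def log_cumulants(samples):
    """Empirical first- and second-order log-cumulants: k1 is the mean
    of ln(x) and k2 the variance of ln(x). Model parameters are then
    obtained by equating these to the model's theoretical log-cumulants."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    k1 = sum(logs) / n
    k2 = sum((l - k1) ** 2 for l in logs) / n
    return k1, k2

# Illustrative positive amplitude samples (not real SAR data).
k1, k2 = log_cumulants([0.5, 1.2, 2.0, 0.8, 1.5])
```

Working in the log domain is what makes the Mellin-transform machinery natural: products of random variables become sums, so log-cumulants play the role that ordinary cumulants play under the Fourier transform.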
A new probability distribution model of turbulent irradiance based on Born perturbation theory
NASA Astrophysics Data System (ADS)
Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo
2010-10-01
The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distribution) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which denies the viewpoint that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero; the new model is therefore considered to reflect Born perturbation theory exactly. Simulated results prove the accuracy of this new model.
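Of the three classical models discussed, the negative-exponential distribution is the simplest to state: p(I) = exp(-I/⟨I⟩)/⟨I⟩, the strong-turbulence limit, for which the scintillation index ⟨I²⟩/⟨I⟩² − 1 equals exactly 1. The sketch below writes down this PDF and checks the moment identities numerically with a crude quadrature; it illustrates the model only, not the new Maclaurin-spread distribution proposed in the paper.

```python
import math

def neg_exp_pdf(i, mean_i):
    """Negative-exponential irradiance PDF (strong-turbulence limit)."""
    return math.exp(-i / mean_i) / mean_i

def moment(n, mean_i, hi=50.0, steps=20000):
    """Crude right-Riemann n-th moment of the PDF, for sanity checks."""
    h = hi / steps
    return sum((k * h) ** n * neg_exp_pdf(k * h, mean_i) * h
               for k in range(1, steps + 1))

# Scintillation index <I^2>/<I>^2 - 1; analytically it equals 1.
si = moment(2, 1.0) / moment(1, 1.0) ** 2 - 1.0
```

The fixed scintillation index of 1 is precisely why the negative-exponential model cannot describe weak turbulence, where measured indices are well below 1.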
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ≈ 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
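The BPT distribution coincides with the inverse Gaussian distribution with mean μ and shape parameter λ = μ/α², whose asymptotic hazard rate is λ/(2μ²); for α = 0.5 that is 2/μ, consistent with the statement above. A sketch of the density, CDF, and hazard using the standard inverse-Gaussian formulas:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with mean mu
    and aperiodicity alpha; the shape parameter is lam = mu / alpha**2."""
    lam = mu / alpha ** 2
    return math.sqrt(lam / (2.0 * math.pi * t ** 3)) * \
        math.exp(-lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t))

def bpt_cdf(t, mu, alpha):
    """Standard inverse-Gaussian CDF."""
    lam = mu / alpha ** 2
    a = math.sqrt(lam / t)
    return norm_cdf(a * (t / mu - 1.0)) + \
        math.exp(2.0 * lam / mu) * norm_cdf(-a * (t / mu + 1.0))

def hazard(t, mu, alpha):
    """Instantaneous failure rate of survivors, f(t) / (1 - F(t))."""
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))
```

With μ = 1 and α = 0.5, the hazard exceeds the mean rate of 1 well before the mean recurrence time and settles near 2/μ thereafter, which is the quasi-constant conditional probability that makes the model attractive for long-term forecasting.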
NASA Astrophysics Data System (ADS)
Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra
2014-06-01
In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements contributing to the stability of the tunnel structure are identified in order to examine various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces an approach by which decision-makers can find the overall reliability of a tunnel support system before selecting the final scheme of the lining system. Engineering reliability, a branch of statistics and probability, is applied to this field, and much effort has been made to use it in tunneling by investigating the reliability of the lining support system for the tunnel structure. Reliability analysis for evaluating tunnel support performance is thus the central idea of this research. Decomposition approaches are used to produce the system block diagram and to determine the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final values of reliability are obtained for different design scenarios. Assuming a linear correlation between safety factors and reliability parameters, the values of the isolated reliabilities are determined for the different structural components of the tunnel support system. To determine the individual safety factors, finite element modeling is employed for the different structural subsystems and the results of the numerical analyses are obtained.
NASA Technical Reports Server (NTRS)
Poage, J. L.
1975-01-01
A sequential nonparametric pattern classification procedure is presented. The method presented is an estimated version of the Wald sequential probability ratio test (SPRT). This method utilizes density function estimates, and the density estimate used is discussed, including a proof of convergence in probability of the estimate to the true density function. The classification procedure proposed makes use of the theory of order statistics, and estimates of the probabilities of misclassification are given. The procedure was tested on discriminating between two classes of Gaussian samples and on discriminating between two kinds of electroencephalogram (EEG) responses.
The Significance of Acid/Base Properties in Drug Discovery
Manallack, David T.; Prankerd, Richard J.; Yuriev, Elizabeth; Oprea, Tudor I.; Chalmers, David K.
2013-01-01
While drug discovery scientists take heed of various guidelines concerning drug-like character, the influence of acid/base properties often remains under-scrutinised. Ionisation constants (pKa values) are fundamental to the variability of the biopharmaceutical characteristics of drugs and to underlying parameters such as logD and solubility. pKa values affect physicochemical properties such as aqueous solubility, which in turn influences drug formulation approaches. More importantly, absorption, distribution, metabolism, excretion and toxicity (ADMET) are profoundly affected by the charge state of compounds under varying pH conditions. Consideration of pKa values in conjunction with other molecular properties is of great significance and has the potential to be used to further improve the efficiency of drug discovery. Given the recent low annual output of new drugs from pharmaceutical companies, this review will provide a timely reminder of an important molecular property that influences clinical success. PMID:23099561
Experimental Probability in Elementary School
ERIC Educational Resources Information Center
Andrew, Lane
2009-01-01
Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.
NASA Astrophysics Data System (ADS)
Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-02-01
A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A wide variety of methods has been developed, based either on template-model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), due to the fact that the analytical relation mapping the photometric parameters on to the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, used also to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest and the standard K-Nearest Neighbors models.
Shin, Seung Jun; Wu, Yichao
2014-07-01
This is a discussion of the papers: "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory" by Jochen Kruppa, Yufeng Liu, Gérard Biau, Michael Kohler, Inke R. König, James D. Malley, and Andreas Ziegler; and "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications" by Jochen Kruppa, Yufeng Liu, Hans-Christian Diener, Theresa Holste, Christian Weimar, Inke R. König, and Andreas Ziegler.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.
Evidence-Based Medicine as a Tool for Undergraduate Probability and Statistics Education.
Masel, J; Humphrey, P T; Blackburn, B; Levine, J A
2015-01-01
Most students have difficulty reasoning about chance events, and misconceptions regarding probability can persist or even strengthen following traditional instruction. Many biostatistics classes sidestep this problem by prioritizing exploratory data analysis over probability. However, probability itself, in addition to statistics, is essential both to the biology curriculum and to informed decision making in daily life. One area in which probability is particularly important is medicine. Given the preponderance of pre-health students, in addition to more general interest in medicine, we capitalized on students' intrinsic motivation in this area to teach both probability and statistics. We use the randomized controlled trial as the centerpiece of the course, because it exemplifies the most salient features of the scientific method, and the application of critical thinking to medicine. The other two pillars of the course are biomedical applications of Bayes' theorem and science and society content. Backward design from these three overarching aims was used to select appropriate probability and statistics content, with a focus on eliciting and countering previously documented misconceptions in their medical context. Pretest/posttest assessments using the Quantitative Reasoning Quotient and Attitudes Toward Statistics instruments are positive, bucking several negative trends previously reported in statistics education.
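The biomedical application of Bayes' theorem mentioned above is the classic positive-predictive-value calculation: P(disease | positive test) from prevalence, sensitivity, and specificity. A minimal sketch, with illustrative numbers that are not taken from the paper:

```python
def posterior_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) by Bayes' theorem:
    posterior = P(+|D) P(D) / [P(+|D) P(D) + P(+|not D) P(not D)]."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Even a fairly accurate test for a rare condition yields a
# modest positive predictive value, the canonical base-rate lesson.
p = posterior_positive(prevalence=0.01, sensitivity=0.9, specificity=0.95)
```

The counterintuitive smallness of the result is exactly the kind of base-rate misconception such a course sets out to elicit and correct.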
Sample size planning for phase II trials based on success probabilities for phase III.
Götte, Heiko; Schüler, Armin; Kirchner, Marietta; Kieser, Meinhard
2015-01-01
In recent years, high failure rates in phase III trials have been observed. One of the main reasons is overoptimistic assumptions in the planning of phase III resulting from limited phase II information and/or unawareness of realistic success probabilities. We present an approach for planning a phase II trial in a time-to-event setting that considers the whole phase II/III clinical development programme. We derive stopping boundaries after phase II that minimise the number of events under side conditions for the conditional probabilities of a correct go/no-go decision after phase II as well as the conditional success probabilities for phase III. In addition, we give general recommendations for the choice of phase II sample size. Our simulations show that the unconditional probabilities of a go/no-go decision as well as the unconditional success probabilities for phase III are influenced by the number of events observed in phase II. However, choosing more than 150 events in phase II seems unnecessary, as the impact on these probabilities then becomes quite small. We recommend considering aspects like the number of compounds in phase II and the resources available when determining the sample size. The lower the number of compounds and the lower the resources available for phase III, the higher the investment in phase II should be.
Grossling, Bernardo F.
1975-01-01
Exploratory drilling is still in incipient or youthful stages in those areas of the world where the bulk of the potential petroleum resources is yet to be discovered. Methods of assessing resources from projections based on historical production and reserve data are limited to mature areas. For most of the world's petroleum-prospective areas, a more speculative situation calls for a critical review of resource-assessment methodology. The language of mathematical statistics is required to define more rigorously the appraisal of petroleum resources. Basically, two approaches have been used to appraise the amounts of undiscovered mineral resources in a geologic province: (1) projection models, which use statistical data on the past outcome of exploration and development in the province; and (2) estimation models of the overall resources of the province, which use certain known parameters of the province together with the outcome of exploration and development in analogous provinces. These two approaches often lead to widely different estimates. Some of the controversy that arises results from a confusion of the probabilistic significance of the quantities yielded by each of the two approaches. Also, inherent limitations of analytic projection models, such as those using the logistic and Gompertz functions, have often been ignored. The resource-assessment problem should be recast in terms that provide for consideration of the probability of existence of the resource and of the probability of discovery of a deposit. Then the two above-mentioned models occupy the two ends of the probability range. The new approach accounts for (1) what can be expected with reasonably high certainty by mere projections of what has been accomplished in the past; (2) the inherent biases of decision-makers and resource estimators; (3) upper bounds that can be set up as goals for exploration; and (4) the uncertainties in geologic conditions in a search for minerals. Actual outcomes can then
United States streamflow probabilities based on forecasted La Nina, winter-spring 2000
Dettinger, M.D.; Cayan, D.R.; Redmond, K.T.
1999-01-01
Although for the last 5 months the Tahiti-Darwin Southern Oscillation Index (SOI) has hovered close to normal, the “equatorial” SOI has remained in the La Niña category and predictions are calling for La Niña conditions this winter. In view of these predictions of continuing La Niña, and as a direct extension of previous studies of the relations between El Niño-Southern Oscillation (ENSO) conditions and streamflow in the United States (e.g., Redmond and Koch, 1991; Cayan and Webb, 1992; Redmond and Cayan, 1994; Dettinger et al., 1998; Garen, 1998; Cayan et al., 1999; Dettinger et al., in press), the probabilities that United States streamflows from December 1999 through July 2000 will be in the upper and lower thirds (terciles) of the historical records are estimated here. The processes that link ENSO to North American streamflow are discussed in detail in these diagnostic studies. Our justification for generating this forecast is threefold: (1) Cayan et al. (1999) recently have shown that ENSO influences on streamflow variations and extremes are proportionately larger than the corresponding precipitation teleconnections. (2) Redmond and Cayan (1994) and Dettinger et al. (in press) also have shown that the low-frequency evolution of ENSO conditions supports long-lead correlations between ENSO and streamflow in many rivers of the conterminous United States. (3) In many rivers, significant (weeks-to-months) delays between precipitation and the release to streams of snowmelt or ground-water discharge can support even longer term forecasts of streamflow than is possible for precipitation. The relatively slow, orderly evolution of El Niño-Southern Oscillation episodes, the accentuated dependence of streamflow upon ENSO, and the long lags between precipitation and flow encourage us to provide the following analysis as a simple prediction of this year’s river flows.
Model assisted probability of detection for a guided waves based SHM technique
NASA Astrophysics Data System (ADS)
Memmolo, V.; Ricci, F.; Maio, L.; Boffa, N. D.; Monaco, E.
2016-04-01
Guided wave (GW) Structural Health Monitoring (SHM) makes it possible to assess the health of aerostructures thanks to its great sensitivity to the appearance of delaminations and/or debondings. Due to the several complexities affecting wave propagation in composites, an efficient GW SHM system requires effective quantification associated with a rigorous statistical evaluation procedure. The Probability of Detection (POD) approach is a commonly accepted measurement method to quantify NDI results, and it can be effectively extended to an SHM context. However, it requires a very complex setup arrangement and many coupons. When a rigorous correlation with measurements is adopted, Model Assisted POD (MAPOD) is an efficient alternative to classic methods. This paper is concerned with the identification of small emerging delaminations in composite structural components. An ultrasonic GW tomography focused on impact damage detection in composite plate-like structures, recently developed by the authors, is investigated, providing the basis for a more complex MAPOD analysis. Experimental tests carried out on a typical wing composite structure demonstrated the effectiveness of the modeling approach in detecting damage with the tomographic algorithm. Environmental disturbances, which affect signal waveforms and consequently damage detection, are considered by simulating mathematical noise in the modeling stage. A statistical method is used for an effective decision-making procedure. A Damage Index approach is implemented as the metric to interpret the signals collected from a distributed sensor network, and a subsequent graphic interpolation is carried out to reconstruct the damage appearance. A model validation and first reliability assessment results are provided, in view of system performance quantification and its optimization as well.
NASA Astrophysics Data System (ADS)
Ogata, Y.
2016-12-01
Although the probability of a major earthquake in an ordinary state is very small, the probability is increased in the presence of anomalies as potential precursors. This calls for quantitative studies of the statistics of anomalies against earthquakes using various relevant datasets. Such studies include delicate anomalies which can be revealed only after diagnostic analysis. For example, the author has made many diagnostic analyses of aftershock sequences using the ETAS model, and provides probability gains of the relative quiescence to induce an earthquake of similar or larger size. On 16 April 2016, an earthquake of M7.3 occurred in Kumamoto, Japan. Two days prior to this event, an earthquake of M6.5 occurred and was followed by aftershocks which were actually foreshocks of the M7.3 earthquake. In this case, independent anomalies and elements, to each of which a probability gain can be assigned, are classified as follows. #1. Short-term probability gain that the M6.5 aftershock sequence will be foreshocks of a larger earthquake; #2. Intermediate-term probability gain that the preceding M6.5 and M6.4 earthquakes will trigger an earthquake of similar or larger size in a neighboring area, and another probability gain that some aftershock anomalies will induce a large earthquake in the Kumamoto region; #3. Long-term probabilities of 30 years' rupture incidence on the nearest fault (Futagawa fault), on a broad fault system including central Kyushu, or in the Kumamoto region. Hence, depending on the above anomalies, the probability of M≥7.0 earthquake occurrence in the Kumamoto District based on Utsu's formula for multiple independent precursors varies from 0.04% to 19% per day, 0.3% to 62% per week, and 1.3% to 88% per month during the proximate period before the time of the M7.3 earthquake.
Benndorf, Matthias; Neubauer, Jakob; Langer, Mathias; Kotter, Elmar
2017-03-01
In the diagnostic process of primary bone tumors, patient age, tumor localization and to a lesser extent sex affect the differential diagnosis. We therefore aim to develop a pretest probability calculator for primary malignant bone tumors based on population data taking these variables into account. We access the SEER (Surveillance, Epidemiology and End Results Program of the National Cancer Institute, 2015 release) database and analyze data of all primary malignant bone tumors diagnosed between 1973 and 2012. We record age at diagnosis, tumor localization according to the International Classification of Diseases (ICD-O-3) and sex. We take relative probability of the single tumor entity as a surrogate parameter for unadjusted pretest probability. We build a probabilistic (naïve Bayes) classifier to calculate pretest probabilities adjusted for age, tumor localization and sex. We analyze data from 12,931 patients (647 chondroblastic osteosarcomas, 3659 chondrosarcomas, 1080 chordomas, 185 dedifferentiated chondrosarcomas, 2006 Ewing's sarcomas, 281 fibroblastic osteosarcomas, 129 fibrosarcomas, 291 fibrous malignant histiocytomas, 289 malignant giant cell tumors, 238 myxoid chondrosarcomas, 3730 osteosarcomas, 252 parosteal osteosarcomas, 144 telangiectatic osteosarcomas). We make our probability calculator accessible at http://ebm-radiology.com/bayesbone/index.html . We provide exhaustive tables for age and localization data. Results from tenfold cross-validation show that in 79.8 % of cases the pretest probability is correctly raised. Our approach employs population data to calculate relative pretest probabilities for primary malignant bone tumors. The calculator is not diagnostic in nature. However, resulting probabilities might serve as an initial evaluation of probabilities of tumors on the differential diagnosis list.
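With a single predictor such as an age band, a naïve Bayes posterior reduces to normalising the joint tumor-by-age counts; a toy sketch of that adjustment (the counts below are invented for illustration and are not the SEER figures behind the published calculator):

```python
# Hypothetical (tumor, age-band) case counts -- invented for illustration.
counts = {
    ("osteosarcoma", "10-19"): 120, ("osteosarcoma", "60+"): 10,
    ("chondrosarcoma", "10-19"): 5, ("chondrosarcoma", "60+"): 90,
}

def pretest_probabilities(age_band):
    """Adjusted pretest probability P(tumor | age band), obtained by
    normalising the joint counts over tumors seen in that band."""
    total = sum(counts.values())
    joint = {tumor: n / total
             for (tumor, band), n in counts.items() if band == age_band}
    z = sum(joint.values())
    return {tumor: p / z for tumor, p in joint.items()}
```

With additional predictors (localization, sex), the naïve Bayes classifier multiplies per-feature likelihoods under a conditional-independence assumption before normalising in the same way.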
Phetkhajorn, Supawadi; Sirikaew, Siriwan; Rattanachuay, Pattamarat; Sukhumungoon, Pharanai
2014-11-01
The detection of enterotoxigenic Escherichia coli (ETEC) in food, especially raw meat, has rarely been documented in Thailand, although the presence of this bacterial pathogen is considered an important public health concern. The quantity of ETEC in 150 meat samples collected from fresh food markets in southern Thailand was determined using a most probable number (MPN)-PCR-based quantification approach. ETEC contamination of raw chicken, pork and beef samples was 42%, 25% and 12%, respectively (a significant difference between chicken and beef, p < 0.05). The maximum MPN/g values for enterotoxin gene est-positive ETEC from pork and elt-positive ETEC from chicken were both > 1,100 MPN/g, but the range of MPN/g values was greater for ETEC from chicken than from pork or beef. ETEC from raw chicken meat contained significantly more elt- than est-positives (p < 0.05). Thus, a significant proportion of raw meat, in particular chicken, sold in fresh food markets in southern Thailand harbors ETEC and poses a potential threat to consumer health.
Change of flood risk under climate change based on Discharge Probability Index in Japan
NASA Astrophysics Data System (ADS)
Nitta, T.; Yoshimura, K.; Kanae, S.; Oki, T.
2010-12-01
Water-related disasters under climate change have recently gained considerable interest, and there have been many studies addressing flood risk at the global scale (e.g. Milly et al., 2002; Hirabayashi et al., 2008). In order to build adaptive capacity, however, regional impact evaluation is needed. We thus focus on the flood risk over Japan in the present study. The output from the Regional Climate Model 20 (RCM20), which was developed by the Meteorological Research Institute, was used. The data were first compared with observations from the Automated Meteorological Data Acquisition System and ground weather stations, and the model biases were corrected using the ratio and difference of the 20-year mean values. The bias-corrected RCM20 atmospheric data were then used to force a land surface model and a river routing model (Yoshimura et al., 2007; Ngo-Duc, T. et al., 2007) to simulate river discharge during 1981-2000, 2031-2050, and 2081-2100. Simulated river discharge was converted to the Discharge Probability Index (DPI), which was proposed by Yoshimura et al. based on a statistical approach. The bias and uncertainty of the models are already taken into account in the concept of DPI, so that DPI serves as a good indicator of flood risk. We estimated the statistical parameters for DPI using the river discharge for 1981-2000 with the assumption that the parameters stay the same in the different climate periods. We then evaluated the occurrence of flood events corresponding to DPI categories in each 20-year period and averaged them over 9 regions. The results indicate that low DPI flood events (return period of 2 years) will become more frequent in 2031-2050 and high DPI flood events (return period of 200 years) will become more frequent in 2081-2100 compared with the period 1981-2000, even though average precipitation will be larger during 2031-2050 than during 2081-2100 in most regions. This reflects the increased extreme precipitation during 2081-2100.
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized least-squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization.
Heightened odds of large earthquakes near Istanbul: An interaction-based probability calculation
Parsons; Toda; Stein; Barka; Dieterich
2000-04-28
We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 +/- 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 +/- 12% during the next decade.
Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation
Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.
2000-01-01
We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
Nichols, Alice I.; Preskorn, Sheldon H.
2015-01-01
Objective: The avoidance of adverse drug-drug interactions (DDIs) is a high priority in terms of both the US Food and Drug Administration (FDA) and the individual prescriber. With this perspective in mind, this article illustrates the process for assessing the risk of a drug (example here being desvenlafaxine) causing or being the victim of DDIs, in accordance with FDA guidance. Data Sources/Study Selection: DDI studies for the serotonin-norepinephrine reuptake inhibitor desvenlafaxine conducted by the sponsor and published since 2009 are used as examples of the systematic way that the FDA requires drug developers to assess whether their new drug is either capable of causing clinically meaningful DDIs or being the victim of such DDIs. In total, 8 open-label studies tested the effects of steady-state treatment with desvenlafaxine (50–400 mg/d) on the pharmacokinetics of cytochrome (CYP) 2D6 and/or CYP 3A4 substrate drugs, or the effect of CYP 3A4 inhibition on desvenlafaxine pharmacokinetics. The potential for DDIs mediated by the P-glycoprotein (P-gp) transporter was assessed in in vitro studies using Caco-2 monolayers. Data Extraction: Changes in area under the plasma concentration-time curve (AUC; CYP studies) and efflux (P-gp studies) were reviewed for potential DDIs in accordance with FDA criteria. Results: Desvenlafaxine coadministration had minimal effect on CYP 2D6 and/or 3A4 substrates per FDA criteria. Changes in AUC indicated either no interaction (90% confidence intervals for the ratio of AUC geometric least-squares means [GM] within 80%–125%) or weak inhibition (AUC GM ratio 125% to < 200%). Coadministration with ketoconazole resulted in a weak interaction with desvenlafaxine (AUC GM ratio of 143%). Desvenlafaxine was not a substrate (efflux ratio < 2) or inhibitor (50% inhibitory drug concentration values > 250 μM) of P-gp. Conclusions: A 2-step process based on FDA guidance can be used first to determine whether a pharmacokinetically mediated
Probability Theory: The Logic of Science
NASA Astrophysics Data System (ADS)
Jaynes, E. T.; Bretthorst, G. Larry
2003-04-01
Foreword; Preface; Part I. Principles and Elementary Applications: 1. Plausible reasoning; 2. The quantitative rules; 3. Elementary sampling theory; 4. Elementary hypothesis testing; 5. Queer uses for probability theory; 6. Elementary parameter estimation; 7. The central, Gaussian or normal distribution; 8. Sufficiency, ancillarity, and all that; 9. Repetitive experiments, probability and frequency; 10. Physics of 'random experiments'; Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle; 12. Ignorance priors and transformation groups; 13. Decision theory: historical background; 14. Simple applications of decision theory; 15. Paradoxes of probability theory; 16. Orthodox methods: historical background; 17. Principles and pathology of orthodox statistics; 18. The Ap distribution and rule of succession; 19. Physical measurements; 20. Model comparison; 21. Outliers and robustness; 22. Introduction to communication theory; References; Appendix A. Other approaches to probability theory; Appendix B. Mathematical formalities and style; Appendix C. Convolutions and cumulants.
Sample Size Determination for Estimation of Sensor Detection Probabilities Based on a Test Variable
2007-06-01
Subject terms: sample size, binomial proportion, confidence interval, coverage probability, experimental design. The literature review covers confidence-interval methods for the binomial proportion, including the Wald confidence interval and the Wilson score confidence interval.
Implicit Segmentation of a Stream of Syllables Based on Transitional Probabilities: An MEG Study
ERIC Educational Resources Information Center
Teinonen, Tuomas; Huotilainen, Minna
2012-01-01
Statistical segmentation of continuous speech, i.e., the ability to utilise transitional probabilities between syllables in order to detect word boundaries, is reflected in the brain's auditory event-related potentials (ERPs). The N1 and N400 ERP components are typically enhanced for word onsets compared to random syllables during active…
A peptide-spectrum scoring system based on ion alignment, intensity, and pair probabilities.
Risk, Brian A; Edwards, Nathan J; Giddings, Morgan C
2013-09-06
Peppy, the proteogenomic/proteomic search software, employs a novel method for assessing the match quality between an MS/MS spectrum and a theorized peptide sequence. The scoring system uses three score factors calculated with binomial probabilities: the probability that a fragment ion will randomly align with a peptide ion, the probability that the aligning ions will be selected from subsets of the most intense peaks, and the probability that the intensities of fragment ions identified as y-ions are greater than those of their counterpart b-ions. The scores produced by the method act as global confidence scores, which facilitate the accurate comparison of results and the estimation of false discovery rates. Peppy has been integrated into the meta-search engine PepArML to produce meaningful comparisons with Mascot, MSGF+, OMSSA, X!Tandem, k-Score and s-Score. For two of the four data sets examined with the PepArML analysis, Peppy exceeded the accuracy performance of the other scoring systems. Peppy is available for download at http://geneffects.com/peppy .
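The first of the three score factors, the probability that fragment ions align with peptide ions by chance, can be illustrated with a binomial tail probability. A sketch (the per-ion match probability `p` and the -log10 transform are illustrative simplifications, not the exact published scoring):

```python
from math import comb, log10

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of n
    theoretical peptide ions align with spectrum peaks at random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def alignment_score(n, k, p):
    """Score factor as -log10 of the random-alignment probability, so that
    less probable (better) matches receive higher scores."""
    return -log10(binomial_tail(n, k, p))
```

The other two factors (intensity-subset membership and y-ion versus b-ion intensity) follow the same binomial pattern, and the three are combined into a single global confidence score.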
2012-09-01
The model incorporates macroeconomic and policy-level information. In the first step, the conditional probabilities of staying in or leaving the Navy are estimated. The approach accommodates time-dependent information, cohort information, and censoring problems in the data, as well as incorporating macroeconomic and policy-level information through individual-level covariates.
Lühr, Armin; Löck, Steffen; Jakobi, Annika; Stützer, Kristin; Bandurska-Luque, Anna; Vogelius, Ivan Richter; Enghardt, Wolfgang; Baumann, Michael; Krause, Mechthild
2017-07-01
Objectives of this work are (1) to derive a general clinically relevant approach to model tumor control probability (TCP) for spatially variable risk of failure and (2) to demonstrate its applicability by estimating TCP for patients planned for photon and proton irradiation. The approach divides the target volume into sub-volumes according to retrospectively observed spatial failure patterns. The product of all sub-volume TCPi values reproduces the observed TCP for the total tumor. The derived formalism provides for each target sub-volume i the tumor control dose (D50,i) and slope (γ50,i) parameters at 50% TCPi. For a simultaneous integrated boost (SIB) prescription for 45 advanced head and neck cancer patients, TCP values for photon and proton irradiation were calculated and compared. The target volume was divided into gross tumor volume (GTV), surrounding clinical target volume (CTV), and elective CTV (CTVE). The risk of a local failure in each of these sub-volumes was taken from the literature. Convenient expressions for D50,i and γ50,i were provided for the Poisson and the logistic model. Comparable TCP estimates were obtained for photon and proton plans of the 45 patients using the sub-volume model, despite notably higher dose levels (on average +4.9%) in the low-risk CTVE for photon irradiation. In contrast, assuming a homogeneous dose response in the entire target volume resulted in TCP estimates contradicting clinical experience (the highest failure rate in the low-risk CTVE) and differing substantially between photon and proton irradiation. The presented method is of practical value for three reasons: It (a) is based on empirical clinical outcome data; (b) can be applied to non-uniform dose prescriptions as well as different tumor entities and dose-response models; and (c) is provided in a convenient compact form. The approach may be utilized to target spatial patterns of local failures observed in patient cohorts by prescribing different doses to
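The sub-volume formalism multiplies independent sub-volume TCPs. A compact sketch using the logistic dose-response model named in the abstract (the dose, D50,i, and γ50,i values below are placeholders, not the fitted head-and-neck parameters):

```python
def tcp_logistic(dose, d50, gamma50):
    """Logistic dose-response: TCP(D) = 1 / (1 + (D50/D)**(4*gamma50)),
    which gives TCP = 0.5 at D = D50 with normalised slope gamma50."""
    return 1.0 / (1.0 + (d50 / dose) ** (4.0 * gamma50))

def tcp_total(subvolumes):
    """Total TCP as the product of sub-volume TCP_i values, as in the
    sub-volume formalism described in the abstract."""
    tcp = 1.0
    for dose, d50, gamma50 in subvolumes:
        tcp *= tcp_logistic(dose, d50, gamma50)
    return tcp

# Hypothetical SIB prescription: (dose, D50_i, gamma50_i) for GTV, CTV, CTVE.
plan = [(70.0, 62.0, 2.0), (60.0, 48.0, 1.5), (54.0, 35.0, 1.0)]
```

Because the total is a product, a low TCP in any sub-volume (e.g. an under-dosed high-risk GTV) dominates the estimate, which is why a homogeneous dose-response assumption can contradict observed failure patterns.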
NASA Astrophysics Data System (ADS)
Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.
2016-02-01
Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general a single possible path is not of interest but only the probabilities that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the possibility to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real-time, view specific parameter settings or simulation models and move between different spatial or temporal regions without delay. In addition our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.
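The binning scheme described above amounts to propagating a probability vector through a per-cycle bin-to-bin transition matrix; a minimal sketch (the two-bin toy domain and names are illustrative):

```python
import numpy as np

def propagate(prob, transition):
    """One assimilation cycle: the probability mass in each bin is pushed
    through the row-stochastic bin-to-bin transition matrix, merging all
    particles that share a bin into a single weighted particle."""
    return transition.T @ prob

# Two-bin toy domain: from bin 0 a traced particle stays or moves with
# equal probability; bin 1 is absorbing (each row sums to 1).
T = np.array([[0.5, 0.5],
              [0.0, 1.0]])
p0 = np.array([1.0, 0.0])   # seed all probability mass in bin 0
p1 = propagate(p0, T)       # [0.5, 0.5] after one cycle
```

Because each cycle needs only the binned mass from the previous one, the cost per cycle stays constant instead of growing exponentially with the number of assimilation cycles.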
Krofcheck, Daniel J; Hurteau, Matthew D; Scheller, Robert M; Loudermilk, E Louise
2017-09-23
In frequent fire forests of the western US a legacy of fire suppression coupled with increases in fire weather severity have altered fire regimes and vegetation dynamics. When coupled with projected climate change, these conditions have the potential to lead to vegetation type change and altered carbon (C) dynamics. In the Sierra Nevada, fuels reduction approaches that include mechanical thinning followed by regular prescribed fire are one approach to restore the ability of the ecosystem to tolerate episodic fire and still sequester C. Yet, the spatial extent of the area requiring treatment makes widespread treatment implementation unlikely. We sought to determine if a priori knowledge of where uncharacteristic wildfire is most probable could be used to optimize the placement of fuels treatments in a Sierra Nevada watershed. We developed two treatment placement strategies: the naive strategy, based on treating all operationally available area and the optimized strategy, which only treated areas where crown-killing fires were most probable. We ran forecast simulations using projected climate data through 2100 to determine how the treatments differed in terms of C sequestration, fire severity, and C emissions relative to a no-management scenario. We found that in both the short (20 years) and long (100 years) term, both management scenarios increased C stability, reduced burn severity, and consequently emitted less C as a result of wildfires than no-management. Across all metrics, both scenarios performed the same, but the optimized treatment required significantly less C removal (naive = 0.42 Tg C, optimized = 0.25 Tg C) to achieve the same treatment efficacy. Given the extent of western forests in need of fire restoration, efficiently allocating treatments is a critical task if we are going to restore adaptive capacity in frequent-fire forests.
Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.
2013-01-01
A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97
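The core frequency computation, fitting a Pearson Type III distribution to the logarithms of the annual peaks, can be sketched as follows. This is a method-of-moments simplification using station skew only; the study itself uses the expected moments algorithm and weights station skew with the updated regional skew:

```python
import numpy as np
from scipy import stats

def aep_discharge(peaks, aep):
    """Discharge with annual exceedance probability `aep`, from a Pearson
    Type III fit to log10 annual peak discharges (log-Pearson III)."""
    logq = np.log10(np.asarray(peaks, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = float(stats.skew(logq, bias=False))   # station skew only
    k = stats.pearson3.ppf(1.0 - aep, skew)      # standardised quantile
    return 10.0 ** (mean + k * std)
```

For example, `aep_discharge(peaks, 0.01)` gives the 1-percent annual exceedance probability discharge, the "100-year flood" of the recurrence-interval terminology above.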
NASA Astrophysics Data System (ADS)
Koravos, George Ch.; Yadav, R. B. S.; Tsapanos, Theodoros M.
2015-09-01
The Pacific rim is one of the most tsunamigenic regions of the world, having experienced large catastrophic tsunamis in the past that resulted in huge losses of life and property. In this study, probabilities of occurrence of large tsunamis with tsunami intensity (Soloviev-Imamura intensity scale) I ≥ 1.5, I ≥ 2.0, I ≥ 2.5, I ≥ 3.0, I ≥ 3.5 and I ≥ 4.0 have been calculated over the next 100 years in ten main tsunamigenic zones of the Pacific rim area using a homogeneous and complete tsunami catalogue covering the time period from 684 to 2011. In order to evaluate tsunami potential, we applied the conditional probability method in each zone, considering that the inter-occurrence times between successive tsunamis generated in the past follow a lognormal distribution. Thus, we assessed the probability of the next generation of large tsunamis in each zone by considering the time of the last tsunami occurrence. The a-posteriori occurrence of the last large tsunami has also been assessed, assuming that the time of the last occurrence coincides with the time of the event prior to the last one. The estimated a-posteriori probabilities exhibit satisfactory results in most of the zones, revealing a promising technique and confirming the reliability of the tsunami data used. Furthermore, the tsunami potential in the different tsunamigenic zones is also expressed in terms of spatial maps of conditional probabilities for two levels of tsunami intensity, I ≥ 1.5 and I ≥ 2.5, during the next 10, 20, 50 and 100 years. The estimated results reveal that the conditional probabilities in the South America and Alaska-Aleutian zones for the larger tsunami intensity I ≥ 2.5 are in the range of 92-93%, much larger than that of the Japan zone (69%), for a time period of 100 years, suggesting that these are the most vulnerable tsunamigenic zones. The spatial maps provide a brief atlas of tsunami potential in the Pacific rim area.
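The conditional probability calculation described here has a compact closed form once the lognormal inter-occurrence distribution is fixed. A minimal sketch follows; the parameter values are illustrative, not the paper's fitted estimates.

```python
import math

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal inter-occurrence time (mu, sigma on the log scale)."""
    if t <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_probability(t_elapsed, horizon, mu, sigma):
    """P(next event within `horizon` years | `t_elapsed` years have already
    elapsed since the last event), for lognormal inter-occurrence times."""
    survival = 1.0 - lognorm_cdf(t_elapsed, mu, sigma)
    if survival <= 0.0:
        return 1.0
    gained = lognorm_cdf(t_elapsed + horizon, mu, sigma) - lognorm_cdf(t_elapsed, mu, sigma)
    return gained / survival

# Illustrative parameters: median inter-event time 50 yr (mu = ln 50),
# sigma = 0.8; 30 yr since the last event, 100-yr forecast horizon.
p = conditional_probability(30.0, 100.0, math.log(50.0), 0.8)
assert 0.0 < p < 1.0
```

Longer horizons necessarily give larger conditional probabilities, since the numerator grows while the denominator is fixed.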
Pure perceptual-based learning of second-, third-, and fourth-order sequential probabilities.
Remillard, Gilbert
2011-07-01
There is evidence that sequence learning in the traditional serial reaction time task (SRTT), where target location is the response dimension, and sequence learning in the perceptual SRTT, where target location is not the response dimension, are handled by different mechanisms. The ability of the latter mechanism to learn sequential contingencies that can be learned by the former mechanism was examined. Prior research has established that people can learn second-, third-, and fourth-order probabilities in the traditional SRTT. The present study reveals that people can learn such probabilities in the perceptual SRTT. This suggests that the two mechanisms may have similar architectures. A possible neural basis of the two mechanisms is discussed.
Sensitivity analysis of limit state functions for probability-based plastic design
NASA Technical Reports Server (NTRS)
Frangopol, D. M.
1984-01-01
The evaluation of the total probability of a plastic collapse failure P sub f for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds to this probability requires the use of second moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between upper and lower bounds of P sub f is now in its final stage of development. The sensitivity of the resulting bounds of P sub f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
NASA Astrophysics Data System (ADS)
Delattre, Sylvain; Graf, Siegfried; Luschgy, Harald; Pages, Gilles
2006-06-01
For a probability measure P on R^d and n in N, consider e_n = inf ∫ min_{a in α} V(||x − a||) dP(x), where the infimum is taken over all subsets α of R^d with card(α) ≤ n and V is a nondecreasing function. Under certain conditions on V, we derive the precise n-asymptotics of e_n for self-similar distributions P and we find the asymptotic performance of optimal quantizers using weighted empirical measures.
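For intuition, the quantization error e_n can be estimated numerically. The sketch below treats the simplest case, the uniform distribution on [0, 1] with V(x) = x^2, using Lloyd's algorithm (plain one-dimensional k-means) on a Monte Carlo sample; for this case the classical rate is e_n ≈ 1/(12 n²). This is an illustration of the quantity being studied, not of the paper's asymptotic techniques.

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.uniform(0.0, 1.0, 20_000)   # draws from P = U(0, 1)

def quant_error(n, iters=50):
    """Approximate the n-th quantization error e_n of U(0,1) for
    V(x) = x^2 using Lloyd's algorithm (1-D k-means)."""
    centers = np.linspace(0.05, 0.95, n)              # spread initial codebook
    for _ in range(iters):
        idx = np.abs(sample[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([sample[idx == j].mean() for j in range(n)])
    idx = np.abs(sample[:, None] - centers[None, :]).argmin(axis=1)
    return float(np.mean((sample - centers[idx]) ** 2))

e4, e8 = quant_error(4), quant_error(8)
assert e8 < e4                                   # error decreases as card grows
assert abs(e4 - 1.0 / (12 * 4 ** 2)) < 1.5e-3    # close to the 1/(12 n^2) rate
```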
ERIC Educational Resources Information Center
Ghitza, Udi E.; Epstein, David H.; Schmittner, John; Vahabzadeh, Massoud; Lin, Jia-Ling; Preston, Kenzie L.
2008-01-01
Although treatment outcome in prize-based contingency management has been shown to depend on reinforcement schedule, the optimal schedule is still unknown. Therefore, we conducted a retrospective analysis of data from a randomized clinical trial (Ghitza et al., 2007) to determine the effects of the probability of winning a prize (low vs. high) and…
The Minnesota Children's Pesticide Exposure Study is a probability-based sample of 102 children 3-13 years old who were monitored for commonly used pesticides. During the summer of 1997, first-morning-void urine samples (1-3 per child) were obtained for 88% of study children a...
ERIC Educational Resources Information Center
Kaplan, Danielle E.; Wu, Erin Chia-ling
2006-01-01
Our research suggests static and animated graphics can lead to more animated thinking and more correct problem solving in computer-based probability learning. Pilot software modules were developed for graduate online statistics courses and representation research. A study with novice graduate student statisticians compared problem solving in five…
ERIC Educational Resources Information Center
Huynh, Huynh
2006-01-01
By analyzing the Fisher information allotted to the correct response of a Rasch binary item, Huynh (1994) established the response probability criterion 0.67 (RP67) for standard settings based on bookmarks and item mapping. The purpose of this note is to help clarify the conceptual and psychometric framework of the RP criterion.
We conducted a probability-based sampling of Lake Superior in 2006 and compared the zooplankton biomass estimate with laser optical plankton counter (LOPC) predictions. The net survey consisted of 52 sites stratified across three depth zones (0-30, 30-150, >150 m). The LOPC tow...
Yu, Hancheng; Gao, Jianlin; Li, Aiting
2016-03-01
In this Letter, a probability-based non-local means filter is proposed for speckle reduction in optical coherence tomography (OCT). Originally developed for additive white Gaussian noise, the non-local means filter is not suitable for multiplicative speckle noise suppression. This Letter presents a two-stage non-local means algorithm using the uncorrupted probability of each pixel to effectively reduce speckle noise in OCT. Experiments on real OCT images demonstrate that the proposed filter is competitive with other state-of-the-art speckle removal techniques and able to accurately preserve edges and structural details with small computational cost.
Ghitza, Udi E; Epstein, David H; Schmittner, John; Vahabzadeh, Massoud; Lin, Jia-Ling; Preston, Kenzie L
2008-01-01
Although treatment outcome in prize-based contingency management has been shown to depend on reinforcement schedule, the optimal schedule is still unknown. Therefore, we conducted a retrospective analysis of data from a randomized clinical trial (Ghitza et al., 2007) to determine the effects of the probability of winning a prize (low vs. high) and the size of the prize won (small, large, or jumbo) on likelihood of abstinence until the next urine-collection day for heroin and cocaine users (N=116) in methadone maintenance. Higher probability of winning, but not the size of individual prizes, was associated with a greater percentage of cocaine-negative, but not opiate-negative, urines.
NASA Astrophysics Data System (ADS)
Rosa, A. N. F.; Wiatr, P.; Cavdar, C.; Carvalho, S. V.; Costa, J. C. W. A.; Wosinska, L.
2015-11-01
In an Elastic Optical Network (EON), spectrum fragmentation refers to the existence of non-aligned, small-sized blocks of free subcarrier slots in the optical spectrum. Several metrics have been proposed in order to quantify the level of spectrum fragmentation. Approximation methods may be used for estimating the average blocking probability and some fragmentation measures, but they are so far unable to accurately evaluate the influence of different sizes of connection requests and do not allow in-depth investigation of blocking events and their relation to fragmentation. The analytical study of the effect of fragmentation on request blocking probability is still under-explored. In this work, we introduce new definitions for blocking that differentiate between the reasons for the blocking events. We developed a framework based on Markov modeling to calculate steady-state probabilities for the different blocking events and to analyze fragmentation-related problems in elastic optical links under dynamic traffic conditions. This framework can also be used to evaluate different definitions of fragmentation in terms of their relation to the blocking probability. We investigate how different allocation request sizes contribute to fragmentation and blocking probability. Moreover, we show to what extent blocking events due to an insufficient amount of available resources become inevitable and, by comparing them with the blocking events due to a fragmented spectrum, we draw conclusions on the possible gains one can achieve by system defragmentation. We also show how efficient spectrum allocation policies really are in reducing the part of fragmentation that in particular leads to actual blocking events. Simulation experiments are carried out showing a good match with our analytical results for blocking probability in a small-scale scenario. Simulated blocking probabilities for the different blocking events are provided for a larger-scale elastic optical link.
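The paper's Markov framework covers multi-slot requests and fragmentation-specific blocking causes. As a minimal point of reference, the steady-state blocking probability of a single link carrying only single-slot requests reduces to the classical Erlang B loss formula; the sketch below is this textbook special case, not the paper's model.

```python
def erlang_b(servers, offered_load):
    """Blocking probability of an M/M/c/c loss system, computed with the
    numerically stable recursion B(0) = 1, B(c) = a*B(c-1) / (c + a*B(c-1))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# A 10-slot link offered 5 Erlangs of single-slot traffic (illustrative numbers).
p_block = erlang_b(10, 5.0)
assert 0.0 < p_block < 1.0
assert erlang_b(10, 8.0) > p_block   # heavier load blocks more often
```

With multi-slot requests, blocking also depends on whether the free slots are contiguous, which is exactly the fragmentation effect the paper's larger state space captures.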
NASA Astrophysics Data System (ADS)
Barrera, Manuel; Suarez-Llorens, Alfonso; Casas-Ruiz, Melquiades; Alonso, José J.; Vidal, Juan
2017-05-01
A generic theoretical methodology for the calculation of the efficiency of gamma spectrometry systems is introduced in this work. The procedure is valid for any type of source and detector and can be applied to determine the full energy peak and the total efficiency of any source-detector system. The methodology is based on the idea of an underlying probability of detection, which describes the physical model for the detection of the gamma radiation in the particular situation studied. This probability depends explicitly on the direction of the gamma radiation, and this dependence allows the development of more realistic and complex models than the traditional models based on point source integration. The probability function to be employed in practice must reproduce the relevant characteristics of the detection process occurring in the particular situation studied. Once the probability is defined, the efficiency calculations can in general be performed using numerical methods. The Monte Carlo integration procedure is especially useful for performing the calculations when complex probability functions are used. The methodology can be used for the direct determination of the efficiency and also for the calculation of corrections that require this determination of the efficiency, as is the case for coincidence summing, geometric or self-attenuation corrections. In particular, we have applied the procedure to obtain some of the classical self-attenuation correction factors usually employed to correct for the sample attenuation of cylindrical geometry sources. The methodology clarifies the theoretical basis and approximations associated with each factor, by making explicit the probability which is generally hidden and implicit in each model. It has been shown that most of these self-attenuation correction factors can be derived by using a common underlying probability, with this probability taking on a growing level of complexity as it reproduces more precisely
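A toy sketch of the direction-dependent idea: average an angle-dependent detection probability over directions sampled uniformly on the sphere. The detection probability below (a sharp acceptance cone) is invented for illustration; a real model would encode detector geometry and attenuation.

```python
import math, random

def geometric_efficiency(p_detect, n=200_000, seed=7):
    """Monte Carlo estimate of detection efficiency for an isotropic point
    source: average a direction-dependent detection probability over
    directions sampled uniformly on the unit sphere."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        cos_theta = rng.uniform(-1.0, 1.0)     # uniform directions on the sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        total += p_detect(math.acos(cos_theta), phi)
    return total / n

# Toy model: every photon emitted within a cone of half-angle 30 degrees
# toward the detector is detected, none otherwise.  The exact efficiency is
# the solid-angle fraction (1 - cos 30deg) / 2.
half_angle = math.radians(30.0)
eff = geometric_efficiency(lambda theta, phi: 1.0 if theta <= half_angle else 0.0)
assert abs(eff - (1.0 - math.cos(half_angle)) / 2.0) < 0.005
```

Replacing the indicator with a smooth, attenuation-weighted probability gives the kind of self-attenuation integrals the abstract describes, with no change to the sampling loop.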
Anthias, Chloe; Billen, Annelies; Arkwright, Rebecca; Szydlo, Richard M; Madrigal, J Alejandro; Shaw, Bronwen E
2016-05-01
Previous studies have demonstrated the importance of bone marrow (BM) harvest yield in determining transplant outcomes, but little is known regarding the donor and procedure variables associated with achievement of an optimal yield. We hypothesized that donor demographics and variables relating to the procedure were likely to impact the yield (total nucleated cells [TNCs]/kg recipient weight) and quality (TNCs/mL) of the harvest. To test our hypothesis, BM harvests of 110 consecutive unrelated donors were evaluated. The relationship between donor or procedure characteristics and the BM harvest yield was examined. The relationship between donor and recipient weight significantly influenced the harvest yield; only 14% of BM harvests from donors who weighed less than their recipient achieved a TNC count of more than 4 × 10(8)/kg, compared to 56% of harvests from donors heavier than their recipient (p = 0.001). Higher-volume harvests were significantly less likely to achieve an optimal yield than lower-volume harvests (32% vs. 78%; p = 0.007), and higher-volume harvests contained significantly fewer TNCs/mL, indicating peripheral blood contamination. BM harvest quality also varied significantly between collection centers, adding to recent concerns regarding the maintenance of BM harvest expertise within the transplant community. Since the relationship between donor and recipient weight has a critical influence on yield, we recommend prioritizing this secondary donor characteristic when selecting from multiple well-matched donors. Given the declining number of requests for BM harvests, it is crucial that systems are developed to train operators and ensure that expertise in this procedure is retained. © 2016 AABB.
NASA Astrophysics Data System (ADS)
Gao, Dongyue; Wu, Zhanjun; Yang, Lei; Zheng, Yuebin
2016-04-01
Multi-damage identification is an important and challenging task in research on guided-waves-based structural health monitoring. In this paper, a multi-damage identification method is presented using a guided-waves-based local probability-based diagnostic imaging (PDI) approach. The method includes a path damage judgment stage, a multi-damage judgment stage and a multi-damage imaging stage. First, damage imaging is performed by partition: the damage imaging regions are divided along the beside-damage signal paths. The difference in guided-wave propagation characteristics between cross-damage and beside-damage paths is established by theoretical analysis of the guided-wave signal features, and the time-of-flight difference of paths is used as a factor to distinguish between cross-damage and beside-damage paths. Then, a global PDI method (damage identification using all paths in the sensor network) is performed using the beside-damage path network. If the global PDI damage zone crosses a beside-damage path, it means that the discrete multi-damage model (such as a group of holes or cracks) has been misjudged as a continuum single-damage model (such as a single hole or crack) by the global PDI method. Subsequently, the damage imaging regions are separated by the beside-damage paths and local PDI (damage identification using the paths within each damage imaging region) is performed in each region. Finally, multi-damage identification results are obtained by superimposing the local damage imaging results and the marked cross-damage paths. The method is employed to inspect multiple damages in an aluminum plate with a surface-mounted piezoelectric ceramic sensor network. The results show that the guided-waves-based multi-damage identification method is capable of visualizing the presence, quantity and location of structural damage.
NASA Astrophysics Data System (ADS)
Kang, Liqiang; Guo, Liejin; Liu, Dayou
2008-04-01
The probability distribution of the lift-off velocity of saltating grains is a bridge linking microscopic and macroscopic research on aeolian sand transport. The lift-off parameters of saltating grains (i.e., the horizontal and vertical lift-off velocities, resultant lift-off velocity, and lift-off angle) in a wind tunnel are measured by using a Phase Doppler Particle Analyzer (PDPA). The experimental results show that the probability distribution of the horizontal lift-off velocity of saltating particles on a bed surface is a normal function, and that of the vertical lift-off velocity is an exponential function. The probability distribution of the resultant lift-off velocity of saltating grains can be expressed as a log-normal function, and that of the lift-off angle complies with an exponential function. A numerical model for the vertical distribution of aeolian mass flux based on the probability distribution of lift-off velocity is established. The simulation gives a sand mass flux distribution which is consistent with the field data of Namikas (Namikas, S.L., 2003. Field measurement and numerical modelling of aeolian mass flux distributions on a sandy beach. Sedimentology 50, 303-326). These findings are therefore helpful for further understanding the probability characteristics of lift-off grains in aeolian sand transport.
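To see how a measured lift-off distribution can feed a vertical flux profile, here is a hedged sketch that combines the exponential vertical lift-off velocity with drag-free ballistics (an oversimplification of the paper's model; the parameter values are illustrative).

```python
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def flux_above(z, mean_liftoff, n=100_000, seed=3):
    """Monte Carlo fraction of saltating grains whose ballistic hop reaches
    above height z, with vertical lift-off speed ~ Exponential(mean)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if (rng.expovariate(1.0 / mean_liftoff) ** 2) / (2 * G) > z)
    return hits / n

# With exponential lift-off speed and hop height v^2/(2g), the closed form is
# P(hop height > z) = exp(-sqrt(2 g z) / mean), so the simulation should match.
mean_v = 0.6   # mean vertical lift-off speed, m/s (illustrative)
z = 0.01       # height of interest, 1 cm
exact = math.exp(-math.sqrt(2 * G * z) / mean_v)
assert abs(flux_above(z, mean_v) - exact) < 0.01
```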
The Significance of Trust in School-Based Collaborative Leadership
ERIC Educational Resources Information Center
Coleman, Andrew
2012-01-01
The expectation that schools should work in partnership to promote the achievement of children has arguably been the defining feature of school policy over the last decade. This rise in school-to-school partnerships and increased emphasis on multi-agency-based interventions for vulnerable children have seen the emergence of a new form of school…
Lexicographic Probability, Conditional Probability, and Nonstandard Probability
2009-11-11
the following conditions: CP1. µ(U | U) = 1 if U ∈ F′. CP2. µ(V1 ∪ V2 | U) = µ(V1 | U) + µ(V2 | U) if V1 ∩ V2 = ∅, U ∈ F′, and V1, V2 ∈ F. CP3. µ(V | U) = µ(V | X) × µ(X | U) if V ⊆ X ⊆ U, U, X ∈ F′, V ∈ F. Note that it follows from CP1 and CP2 that µ(· | U) is a probability measure on (W, F) (and, in… CP2 hold. This is easily seen to determine µ. Moreover, µ vacuously satisfies CP3, since there do not exist distinct sets U and X in F′ such that U
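Conditions CP1-CP3 can be sanity-checked on a small finite space with the uniform counting-measure conditional probability µ(V | U) = |V ∩ U| / |U| (an illustrative choice, not from the paper):

```python
from itertools import chain, combinations

W = frozenset(range(4))                      # a small finite sample space

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def mu(V, U):
    """Uniform conditional probability mu(V | U) = |V ∩ U| / |U|."""
    return len(V & U) / len(U)

events = powerset(W)
nonempty = [U for U in events if U]
# CP1: mu(U | U) = 1
assert all(mu(U, U) == 1.0 for U in nonempty)
# CP2: additivity over disjoint V1, V2
assert all(abs(mu(V1 | V2, U) - (mu(V1, U) + mu(V2, U))) < 1e-12
           for U in nonempty for V1 in events for V2 in events
           if not (V1 & V2))
# CP3: chain rule for V ⊆ X ⊆ U
assert all(abs(mu(V, U) - mu(V, X) * mu(X, U)) < 1e-12
           for U in nonempty for X in nonempty for V in events
           if V <= X <= U)
```

Here CP3 holds exactly because µ(V | U) = |V| / |U| whenever V ⊆ U, so the two sides telescope.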
Competency-based curricular design to encourage significant learning.
Hurtubise, Larry; Roman, Brenda
2014-07-01
The most significant learning (SL) experiences produce long-lasting learning that meaningfully changes the learner's thinking, feeling, and/or behavior. The most significant teaching experiences involve strong connections with the learner and recognition that the learner felt changed by the teaching effort. L. Dee Fink, in Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses, defines six kinds of learning goals: Foundational Knowledge, Application, Integration, Human Dimension, Caring, and Learning to Learn. SL occurs when learning experiences promote interaction between the different kinds of goals; for example, acquiring knowledge alone is not enough, but when paired with a learning experience, such as an effective patient experience (as in Caring), significant (and lasting) learning occurs. To promote SL, backward design principles that start with clearly defined learning goals and the context of the learner's situation are particularly effective. Emphasis on defining assessment methods prior to developing teaching/learning activities is key: this ensures that assessment (where the learner should be at the end of the educational activity/process) drives instruction and that assessment and learning/instruction are tightly linked, so that assessment measures a defined outcome (competency) of the learner. Employing backward design and the AAMC's MedBiquitous standard vocabulary for medical education can help ensure that curricular design and redesign efforts effectively enhance educational program quality and efficacy, leading to improved patient care. Such methods can promote successful careers in health care for learners through the development of self-directed learning skills and active learning, in ways that help learners become fully committed to lifelong learning and continuous professional development.
Rae, Caroline D; Davidson, Joanne E; Maher, Anthony D; Rowlands, Benjamin D; Kashem, Mohammed A; Nasrallah, Fatima A; Rallapalli, Sundari K; Cook, James M; Balcar, Vladimir J
2014-04-01
Ethanol is a known neuromodulatory agent with reported actions at a range of neurotransmitter receptors. Here, we measured the effect of alcohol on metabolism of [3-¹³C]pyruvate in the adult Guinea pig brain cortical tissue slice and compared the outcomes to those from a library of ligands active in the GABAergic system as well as studying the metabolic fate of [1,2-¹³C]ethanol. Analyses of metabolic profile clusters suggest that the significant reductions in metabolism induced by ethanol (10, 30 and 60 mM) are via action at neurotransmitter receptors, particularly α4β3δ receptors, whereas very low concentrations of ethanol may produce metabolic responses owing to release of GABA via GABA transporter 1 (GAT1) and the subsequent interaction of this GABA with local α5- or α1-containing GABA(A)R. There was no measureable metabolism of [1,2-¹³C]ethanol with no significant incorporation of ¹³C from [1,2-¹³C]ethanol into any measured metabolite above natural abundance, although there were measurable effects on total metabolite sizes similar to those seen with unlabelled ethanol.
Community detection based on significance optimization in complex networks
NASA Astrophysics Data System (ADS)
Xiang, Ju; Wang, Zhi-Zhong; Li, Hui-Jia; Zhang, Yan; Li, Fang; Dong, Li-Ping; Li, Jian-Ming; Guo, Li-Juan
2017-05-01
Community structure is an important topological property that extensively exists in various complex networks. In the past decade, much attention has been paid to the design of community-detection methods, while analyzing the behaviors of the methods is also of interest in theoretical research and real applications. Here, we focus on an important measure for community structure, i.e. significance (2013 Sci. Rep. 3 2930). Specifically, we study the effect of various network parameters on this measure, analyze the critical behaviors in partition transition, and then deduce the formula of the critical points and the phase diagrams theoretically. The results show that the critical number of communities in partition transition increases dramatically with the difference between inter-community and intra-community link densities, and thus significance optimization displays higher resolution in community detection than many other methods, but it also may lead to the excessive splitting of communities. By employing the Louvain algorithm to optimize the significance, we confirm the theoretical results on artificial and real-world networks, and further perform a series of comparisons with some classical methods.
Miao, Zhichao; Westhof, Eric
2015-06-23
We describe a general binding score for predicting the nucleic acid binding probability in proteins. The score is directly derived from physicochemical and evolutionary features and integrates a residue neighboring network approach. Our process achieves stable and high accuracies on both DNA- and RNA-binding proteins and illustrates how the main driving forces for nucleic acid binding are common. Because of the effective integration of the synergetic effects of the network of neighboring residues and the fact that the prediction yields a hierarchical scoring on the protein surface, energy funnels for nucleic acid binding appear on protein surfaces, pointing to the dynamic process occurring in the binding of nucleic acids to proteins.
Significance of hair-dye base-induced sensory irritation.
Fujita, F; Azuma, T; Tajiri, M; Okamoto, H; Sano, M; Tominaga, M
2010-06-01
Oxidation hair-dyes, which are the principal hair-dyes, sometimes induce painful sensory irritation of the scalp caused by the combination of highly reactive substances, such as hydrogen peroxide and alkali agents. Although many cases of severe facial and scalp dermatitis have been reported following the use of hair-dyes, sensory irritation caused by contact of the hair-dye with the skin has not been clearly reported. In this study, we used a self-assessment questionnaire to measure the sensory irritation in various regions of the body caused by two model hair-dye bases that contained different amounts of alkali agents without dyes. Moreover, the occipital region was identified as an alternative to the scalp for testing the sensory irritation of hair-dye bases. We used this region to evaluate the relationship of the sensory irritation score for the model hair-dye bases with skin properties, such as trans-epidermal water loss (TEWL), stratum corneum water content, sebum amount, surface temperature, current perception threshold (CPT) and catalase activity in tape-stripped skin. The hair-dye-sensitive group showed higher TEWL, a lower sebum amount, a lower surface temperature and higher catalase activity than the insensitive group, a profile similar to that of damaged skin. These results suggest that sensory irritation caused by hair-dye could easily occur on a damaged, dry scalp, as previously reported for skin cosmetics.
NASA Astrophysics Data System (ADS)
Lazri, Mourad; Ameur, Soltane
2016-09-01
In this paper, an algorithm based on the probabilistic classification of rainfall intensities has been developed for rainfall estimation from the Meteosat Second Generation/Spinning Enhanced Visible and Infrared Imager (MSG-SEVIRI). The classification scheme uses various spectral parameters of SEVIRI that provide information about cloud-top temperature and optical and microphysical cloud properties. The presented method is developed and trained for the north of Algeria. The calibration of the method is carried out using, as a reference, rain classification fields derived from radar for the rainy season from November 2006 to March 2007. Rainfall rates are assigned to rain areas previously identified and classified according to the precipitation formation processes. The comparisons between satellite-derived precipitation estimates and validation data show that the developed scheme performs reasonably well. Indeed, the correlation coefficient reaches a significant level (r = 0.87). The values of POD, POFD and FAR are 80%, 13% and 25%, respectively. Also, for a rainfall estimation of about 614 mm, the RMSD, Bias, MAD and PD are 102.06 mm, 2.18 mm, 68.07 mm and 12.58, respectively.
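The reported verification scores come from a standard rain / no-rain contingency table. A short sketch with illustrative counts (chosen to land near the reported 80%, 13% and 25%; these are not the study's actual validation data):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Categorical verification scores for a rain / no-rain contingency table:
    POD  = hits / (hits + misses)                    (probability of detection)
    POFD = false alarms / (false alarms + correct negatives)
    FAR  = false alarms / (hits + false alarms)      (false alarm ratio)"""
    pod = hits / (hits + misses)
    pofd = false_alarms / (false_alarms + correct_negatives)
    far = false_alarms / (hits + false_alarms)
    return pod, pofd, far

pod, pofd, far = verification_scores(80, 20, 27, 180)  # illustrative counts
assert abs(pod - 0.80) < 1e-9
assert 0.12 < pofd < 0.14
assert 0.24 < far < 0.26
```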
A Probability-Base Alerting Logic for Aircraft on Parallel Approach
NASA Technical Reports Server (NTRS)
Carpenter, Brenda D.; Kuchar, James K.
1997-01-01
This document discusses the development and evaluation of an airborne collision alerting logic for aircraft on closely-spaced approaches to parallel runways. A novel methodology is used that links alerts to collision probabilities: alerting thresholds are set such that an alert is issued when the probability of a collision exceeds an acceptable hazard level. The logic was designed to limit the hazard level to that estimated for the Precision Runway Monitoring system: one accident in every one thousand blunders which trigger alerts. When the aircraft were constrained to be coaltitude, evaluations of a two-dimensional version of the alerting logic showed that the achieved hazard level is approximately one accident in every 250 blunders. Problematic scenarios have been identified and corrections to the logic can be made. The evaluations also showed that over eighty percent of all unnecessary alerts were issued during scenarios in which the miss distance would have been less than 1000 ft, indicating that the alerts may have been justified. Also, no unnecessary alerts were generated during normal approaches.
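The thresholding principle can be sketched in a few lines, assuming a Gaussian model for the predicted miss distance. The real logic propagates full trajectory uncertainty; the collision radius, miss-distance statistics and hazard level below are illustrative, with the 1-in-1000 target taken from the text.

```python
import math

def p_collision(miss_mean, miss_sd, collision_radius):
    """Probability that the predicted miss distance falls inside the collision
    radius, with the miss distance modeled as Gaussian (an assumption)."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return (phi((collision_radius - miss_mean) / miss_sd)
            - phi((-collision_radius - miss_mean) / miss_sd))

def should_alert(miss_mean, miss_sd, collision_radius=250.0,
                 acceptable_hazard=1e-3):
    """Probability-based alerting: alert when the estimated collision
    probability exceeds the acceptable hazard level."""
    return p_collision(miss_mean, miss_sd, collision_radius) > acceptable_hazard

assert should_alert(miss_mean=300.0, miss_sd=200.0)        # close pass: alert
assert not should_alert(miss_mean=3000.0, miss_sd=200.0)   # wide miss: no alert
```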
Avoidance based on shock intensity reduction with no change in shock probability.
Bersh, P J; Alloy, L B
1978-11-01
Rats were trained on a free-operant avoidance procedure in which shock intensity was controlled by interresponse time. Shocks were random at a density of about 10 shocks per minute. Shock probability was response independent. As long as interresponse times remained less than the limit in effect, any shocks received were at the lower of two intensities (0.75 mA). Whenever interresponse times exceeded the limit, any shocks received were at the higher intensity (1.6 mA). The initial limit of 15 seconds was decreased in 3-second steps to either 6 or 3 seconds. All animals lever pressed to avoid higher intensity shock. As the interresponse time limit was reduced, the response rate during the lower intensity shock and the proportion of brief interresponse times increased. Substantial warmup effects were evident, particularly at the shorter interresponse-time limits. Shock intensity reduction without change in shock probability was effective in the acquisition and maintenance of avoidance responding, as well as in differentiation of interresponse times. This research suggests limitations on the generality of a safety signal interpretation of avoidance conditioning.
Mesh-Based Entry Vehicle and Explosive Debris Re-Contact Probability Modeling
NASA Technical Reports Server (NTRS)
McPherson, Mark A.; Mendeck, Gavin F.
2011-01-01
The risk to a crewed vehicle arising from potential re-contact with fragments from an explosive breakup of any jettisoned spacecraft segments during entry has long been a quantity of interest. However, great difficulty lies in efficiently capturing the potential locations of each fragment and their collective threat to the vehicle. The method presented in this paper addresses this problem by using a stochastic approach that discretizes simulated debris pieces into volumetric cells and then assesses strike probabilities accordingly. Combining spatial debris density and the relative velocity between the debris and the entry vehicle, the strike probability can be calculated from the integral of the debris flux inside each cell over time. Using this technique it is possible to assess the risk to an entry vehicle along an entire trajectory as it separates from the jettisoned segment. By decoupling the fragment trajectories from that of the entry vehicle, multiple potential separation maneuvers can then be evaluated rapidly to provide an assessment of the best strategy for mitigating the re-contact risk.
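A minimal sketch of the cell-flux accumulation, treating strikes as a Poisson process whose rate in each cell is debris density times relative speed times presented area (the cell values and vehicle geometry are invented for illustration, not taken from the paper):

```python
import math

def strike_probability(cells, vehicle_area, dt):
    """Accumulate strike probability over the volumetric cells a vehicle
    traverses: expected strikes per cell = density * relative speed *
    presented area * dt, and P(strike) = 1 - exp(-total expected strikes)."""
    expected = sum(density * rel_speed * vehicle_area * dt
                   for density, rel_speed in cells)
    return 1.0 - math.exp(-expected)

# (debris density [1/m^3], relative speed [m/s]) for each traversed cell,
# a 10 m^2 presented area, 0.1 s time steps -- all illustrative.
cells = [(1e-6, 300.0), (5e-6, 250.0), (1e-6, 200.0)]
p = strike_probability(cells, vehicle_area=10.0, dt=0.1)
assert 0.0 < p < 1.0
assert strike_probability(cells, 20.0, 0.1) > p   # larger area, more risk
```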
Gonyon, Thomas; Carter, Phillip W; Phillips, Gerald; Owen, Heather; Patel, Dipa; Kotha, Priyanka; Green, John-Bruce D
2014-08-01
The information content of the calcium phosphate compatibility curves for adult parenteral nutrition (PN) solutions may benefit from a more sophisticated statistical treatment. Binary logistic regression analyses were evaluated as part of an alternate method for generating formulation compatibility curves. A commercial PN solution was challenged with a systematic array of calcium and phosphate concentrations. These formulations were then characterized for particulates by visual inspection, light obscuration, and filtration followed by optical microscopy. Logistic regression analyses of the data were compared with traditional treatments for generating compatibility curves. Assay-dependent differences were observed in the compatibility curves and associated probability contours; the microscopic method of precipitate detection generated the most robust results. Calcium and phosphate compatibility data generated from small-volume glass containers reasonably predicted the observed compatibility of clinically relevant flexible containers. The published methods for creating calcium and phosphate compatibility curves via connecting the highest passing or lowest failing calcium concentrations should be augmented or replaced by probability contours of the entire experimental design to determine zones of formulation incompatibilities. We recommend researchers evaluate their data with logistic regression analysis to help build a more comprehensive probabilistic database of compatibility information. © 2013 American Society for Parenteral and Enteral Nutrition.
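The logistic-regression treatment the authors recommend can be sketched as follows, with synthetic screening data standing in for real compatibility measurements (the precipitation rule, concentrations and units are invented for illustration; a real analysis would fit the observed pass/fail outcomes):

```python
import numpy as np

# Synthetic screen: formulations "fail" (precipitate) when the calcium-phosphate
# concentration product is high, plus noise.  Illustrative only.
rng = np.random.default_rng(0)
ca = rng.uniform(0.0, 30.0, 400)     # calcium, mEq/L (hypothetical)
po4 = rng.uniform(0.0, 30.0, 400)    # phosphate, mmol/L (hypothetical)
y = (ca * po4 + rng.normal(0.0, 40.0, 400) > 200.0).astype(float)

# Design matrix: intercept plus scaled Ca, PO4 and their product, which
# captures the hyperbolic shape of a solubility-limit curve.
feats = np.column_stack([ca, po4, ca * po4])
scale = feats.std(axis=0)
X = np.column_stack([np.ones(len(y)), feats / scale])

w = np.zeros(4)
for _ in range(5000):                # plain gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / len(y)

def p_precipitate(ca_c, po4_c):
    """Fitted probability of precipitation for one formulation."""
    f = np.array([1.0, *(np.array([ca_c, po4_c, ca_c * po4_c]) / scale)])
    return float(1.0 / (1.0 + np.exp(-f @ w)))

assert p_precipitate(25.0, 25.0) > 0.5 > p_precipitate(2.0, 2.0)
```

Tracing a fixed probability level (say 0.05) through the fitted surface yields exactly the probability contours the authors advocate in place of connect-the-dots compatibility curves.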
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Zhang, Jie
2017-02-01
A routing and wavelength assignment (RWA) algorithm against high-power jamming, based on software-defined optical networks (SDONs), is proposed. The SDON architecture is designed with power monitors at each node, which collect abnormal power information from each port and wavelength. From this abnormal power information, a metric, the weighted attack probability (WAP), is calculated. A WAP-based RWA algorithm (WAP-RWA) is proposed that considers the WAP values of each link and node along the selected lightpath. Numerical results show that the WAP-RWA algorithm achieves better blocking probability and resource utilization than the attack-aware dedicated path protection RWA (AA-DPP-RWA) algorithm, while providing comparable protection.
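A minimal sketch of how WAP values might enter path selection, assuming (our assumption, not the paper's) that each link's cost is simply inflated by its attack probability before a standard shortest-path search:

```python
import heapq

def wap_shortest_path(graph, wap, src, dst):
    """Dijkstra over link weights inflated by weighted attack probability.

    graph: {node: {neighbor: base_cost}}; wap: {(u, v): attack probability}.
    Weight = base_cost * (1 + WAP), so heavily attacked links are avoided.
    The combination rule is illustrative, not the paper's exact formula.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, cost in graph.get(u, {}).items():
            nd = d + cost * (1.0 + wap.get((u, v), 0.0))
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the lightpath from dst back to src
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1]
```

In a toy topology with two equal-cost routes, a high WAP on one link steers the lightpath onto the clean route.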
RDX-based nanocomposite microparticles for significantly reduced shock sensitivity.
Qiu, Hongwei; Stepanov, Victor; Di Stasio, Anthony R; Chou, Tsengming; Lee, Woo Y
2011-01-15
Cyclotrimethylenetrinitramine (RDX)-based nanocomposite microparticles were produced by a simple, yet novel spray drying method. The microparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and high performance liquid chromatography (HPLC), which shows that they consist of small RDX crystals (∼0.1-1 μm) uniformly and discretely dispersed in a binder. The microparticles were subsequently pressed to produce dense energetic materials which exhibited a markedly lower shock sensitivity. The low sensitivity was attributed to small crystal size as well as small void size (∼250 nm). The method developed in this work may be suitable for the preparation of a wide range of insensitive explosive compositions. Copyright © 2010 Elsevier B.V. All rights reserved.
Wojcik, Mariusz; Tachiya, M
2009-03-14
This paper deals with the exact extension of the original Onsager theory of the escape probability to the case of a finite recombination rate at a nonzero reaction radius. The empirical theories based on the Eigen model and the Braun model, which are applicable in the absence and presence of an external electric field, respectively, rest on the incorrect assumption that both the recombination and separation processes in geminate recombination follow exponential kinetics. The accuracies of the empirical theories are examined against the exact extension of the Onsager theory. The Eigen model gives an escape probability in the absence of an electric field that differs by a factor of 3 from the exact one. We have shown that this difference can be removed by operationally redefining the volume occupied by the dissociating partner before dissociation, which appears in the Eigen model as a parameter. The Braun model gives an escape probability in the presence of an electric field that differs significantly from the exact one over the whole range of electric fields. Appropriate modification of the original Braun model removes the discrepancy at zero or low electric fields, but it does not affect the discrepancy at high electric fields. In all the above theories it is assumed that recombination takes place only at the reaction radius. The escape probability in the case when recombination takes place over a range of distances is also calculated and compared with that in the case of recombination only at the reaction radius.
Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions
Li, Jun; Yim, Man-Sung; McNelis, David N.
2007-07-01
explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. A country's nuclear proliferation decisions are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important as the development of the technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by a mastery of technical details or overcoming financial constraints. Technology or finance is a necessary but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistically modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open source literature. (authors)
Significant pathways detection in osteoporosis based on the bibliometric network.
Sun, G J; Guo, T; Chen, Y; Xu, B; Guo, J H; Zhao, J N
2013-01-01
Osteoporosis is a significant public health issue worldwide. The underlying mechanism of osteoporosis is an imbalance between bone resorption and bone formation. However, the exact pathology is still unclear, and more related genes remain to be identified. Here, we aim to identify the differentially expressed genes (DEGs) between osteoporosis patients and controls. Biblio-MetReS, a tool to reconstruct gene and protein networks from automated literature analysis, was used to identify potential interactions among target genes. Relevant signaling pathways were also identified through pathway enrichment analysis. Our results showed that 56 differentially expressed genes were identified. Of them, STAT1, CXCL10, SOCS3, ADM, THBS1, SOD2, and ERG2 have been demonstrated to be involved in osteoporosis. Further, a bibliometric network was constructed between DEGs and other genes through Biblio-MetReS. The results showed that STAT1 could interact with CXCL10 through the Toll-like receptor signaling pathway and the chemokine signaling pathway, and that STAT1 interacted with SOCS3 through the JAK/STAT pathway.
Detection of significant pathways in osteoporosis based on graph clustering.
Xiao, Haijun; Shan, Liancheng; Zhu, Haiming; Xue, Feng
2012-12-01
Osteoporosis is the most common and serious skeletal disorder among the elderly, characterized by a low bone mineral density (BMD). Low bone mass in the elderly is highly dependent on their peak bone mass (PBM) as young adults. Circulating monocytes serve as early progenitors of osteoclasts and produce significant molecules for bone metabolism. An improved understanding of the biology and genetics of osteoclast differentiation at the pathway level is likely to be beneficial for the development of novel targeted approaches for osteoporosis. The objective of this study was to explore gene expression profiles comprehensively by grouping individual differentially expressed genes (DEGs) into gene sets and pathways using the graph clustering approach and Gene Ontology (GO) term enrichment analysis. The results indicated that the DEGs between high and low PBM samples were grouped into nine gene sets. The genes in clusters 1 and 8 (including GBP1, STAT1, CXCL10 and EIF2AK2) may be associated with osteoclast differentiation by the immune system response. The genes in clusters 2, 7 and 9 (including SOCS3, SOD2, ATF3, ADM, EGR2 and BCL2A1) may be associated with osteoclast differentiation by responses to various stimuli. This study provides a number of candidate genes that warrant further investigation, including DDX60, HERC5, RSAD2, SIGLEC1, CMPK2, MX1, SEPING1, EPSTI1, C9orf72, PHLDA2, PFKFB3, PLEKHG2, ANKRD28, IL1RN and RNF19B.
Implicit segmentation of a stream of syllables based on transitional probabilities: an MEG study.
Teinonen, Tuomas; Huotilainen, Minna
2012-02-01
Statistical segmentation of continuous speech, i.e., the ability to utilise transitional probabilities between syllables in order to detect word boundaries, is reflected in the brain's auditory event-related potentials (ERPs). The N1 and N400 ERP components are typically enhanced for word onsets compared to random syllables during active listening. We used magnetoencephalography (MEG) to record event-related fields (ERFs) simultaneously with ERPs to syllables in a continuous sequence consisting of ten repeating tri-syllabic pseudowords and unexpected syllables presented between these pseudowords. We found the responses to differ between the syllables within the pseudowords and between the expected and unexpected syllables, reflecting an implicit process extracting the statistical characteristics of the sequence and monitoring for unexpected syllables.
Zhang, Feihu; Buckl, Christian; Knoll, Alois
2014-01-01
This paper studies the problem of multiple vehicle cooperative localization with spatial registration in the formulation of the probability hypothesis density (PHD) filter. Assuming vehicles are equipped with proprioceptive and exteroceptive sensors (with biases) to cooperatively localize positions, a simultaneous solution for joint spatial registration and state estimation is proposed. For this, we rely on the sequential Monte Carlo implementation of the PHD filtering. Compared to other methods, the concept of multiple vehicle cooperative localization with spatial registration is first proposed under Random Finite Set Theory. In addition, the proposed solution also addresses the challenges for multiple vehicle cooperative localization, e.g., the communication bandwidth issue and data association uncertainty. The simulation result demonstrates its reliability and feasibility in large-scale environments. PMID:24406860
A Regression-based Approach to Assessing Stream Nitrogen Impairment Probabilities
NASA Astrophysics Data System (ADS)
McMahon, G.; Qian, S.; Roessler, C.
2002-05-01
A recently completed National Research Council study of the Total Maximum Daily Load (TMDL) program of the U.S. Environmental Protection Agency recommends an increased use of models to assess the conditions of waters for which nutrient load limits may need to be developed. Models can synthesize data to fill gaps associated with limited monitoring networks and estimate impairment probabilities for contaminants of interest. The U.S. Geological Survey, as part of the National Water-Quality Assessment Program and in cooperation with the North Carolina Division of Water Quality, has developed a nonlinear regression model to estimate impairment probabilities for all river segments, or reaches, in North Carolina's Neuse River Basin. In this study, a reach is considered impaired if the annual mean concentration of total nitrogen is greater than 1.5 milligrams per liter (mg/L), a concentration associated with stream eutrophication. A SPARROW (Spatially Referenced Regressions on Watershed attributes) total nitrogen model was calibrated using data from three large basins in eastern North Carolina, including the Neuse River. The model specifies that in-stream nitrogen flux is a nonlinear function of nitrogen sources, including point sources, atmospheric deposition, inputs from agricultural and developed land, and terrestrial and aquatic nutrient processing. Because data are managed in a geographic information system, the SPARROW model uses information that can be derived from the stream reach network about the spatial relations among nitrogen fluxes, sources, landscape characteristics, and stream characteristics. This presentation describes a process for estimating the proportion (and 90-percent confidence interval) of Neuse River reaches with a total nitrogen concentration less than 1.5 mg/L and discusses the incorporation of prediction errors into the analysis.
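The step from a model-predicted concentration to an impairment probability can be illustrated as follows, assuming a lognormal prediction error (a common assumption for models of this type; the abstract does not state the exact error model, so this is a sketch):

```python
import math

def impairment_probability(pred_mg_l, sigma_log):
    """P(total nitrogen > 1.5 mg/L) for a reach, given the model's
    predicted annual mean (mg/L) and the standard deviation of the
    log-scale prediction error. Names and threshold per the abstract;
    the lognormal error form is our assumption."""
    threshold = 1.5
    z = (math.log(threshold) - math.log(pred_mg_l)) / sigma_log
    # standard normal survival function via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

A reach predicted exactly at the 1.5 mg/L threshold has a 50% impairment probability; predictions well above or below the threshold push the probability toward 1 or 0.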
Jaspers, Ellen; Balsters, Joshua H; Kassraian Fard, Pegah; Mantini, Dante; Wenderoth, Nicole
2017-03-01
Over the last decade, structure-function relationships have begun to encompass networks of brain areas rather than individual structures. For example, corticostriatal circuits have been associated with sensorimotor, limbic, and cognitive information processing, and damage to these circuits has been shown to produce unique behavioral outcomes in Autism, Parkinson's Disease, Schizophrenia and healthy ageing. However, it remains an open question how abnormal or absent connectivity can be detected at the individual level. Here, we provide a method for clustering gross morphological structures into subregions with unique functional connectivity fingerprints, and generate network probability maps usable as a baseline to compare individual cases against. We used connectivity metrics derived from resting-state fMRI (N = 100), in conjunction with hierarchical clustering methods, to parcellate the striatum into functionally distinct clusters. We identified three highly reproducible striatal subregions, across both hemispheres and in an independent replication dataset (N = 100) (dice-similarity values 0.40-1.00). Each striatal seed region resulted in a highly reproducible distinct connectivity fingerprint: the putamen showed predominant connectivity with cortical and cerebellar sensorimotor and language processing areas; the ventromedial striatum cluster had a distinct limbic connectivity pattern; the caudate showed predominant connectivity with the thalamus, frontal and occipital areas, and the cerebellum. Our corticostriatal probability maps agree with existing connectivity data in humans and non-human primates, and showed a high degree of replication. We believe that these maps offer an efficient tool to further advance hypothesis driven research and provide important guidance when investigating deviant connectivity in neurological patient populations suffering from e.g., stroke or cerebral palsy. Hum Brain Mapp 38:1478-1491, 2017. © 2016 Wiley Periodicals, Inc.
Nanomaterial-Based Electrochemical Immunosensors for Clinically Significant Biomarkers
Ronkainen, Niina J.; Okon, Stanley L.
2014-01-01
Nanotechnology has played a crucial role in the development of biosensors over the past decade. The development, testing, optimization, and validation of new biosensors has become a highly interdisciplinary effort involving experts in chemistry, biology, physics, engineering, and medicine. The sensitivity, the specificity and the reproducibility of biosensors have improved tremendously as a result of incorporating nanomaterials in their design. In general, nanomaterials-based electrochemical immunosensors amplify the sensitivity by facilitating greater loading of the larger sensing surface with biorecognition molecules as well as improving the electrochemical properties of the transducer. The most common types of nanomaterials and their properties will be described. In addition, the utilization of nanomaterials in immunosensors for biomarker detection will be discussed since these biosensors have enormous potential for a myriad of clinical uses. Electrochemical immunosensors provide a specific and simple analytical alternative as evidenced by their brief analysis times, inexpensive instrumentation, lower assay cost as well as good portability and amenability to miniaturization. The role nanomaterials play in biosensors, their ability to improve detection capabilities in low concentration analytes yielding clinically useful data and their impact on other biosensor performance properties will be discussed. Finally, the most common types of electroanalytical detection methods will be briefly touched upon. PMID:28788700
Weiner, Kevin S; Barnett, Michael A; Witthoft, Nathan; Golarai, Golijeh; Stigliani, Anthony; Kay, Kendrick N; Gomez, Jesse; Natu, Vaidehi S; Amunts, Katrin; Zilles, Karl; Grill-Spector, Kalanit
2017-04-18
The parahippocampal place area (PPA) is a widely studied high-level visual region in the human brain involved in place and scene processing. The goal of the present study was to identify the most probable location of place-selective voxels in medial ventral temporal cortex. To achieve this goal, we first used cortex-based alignment (CBA) to create a probabilistic place-selective region of interest (ROI) from one group of 12 participants. We then tested how well this ROI could predict place selectivity in each hemisphere within a new group of 12 participants. Our results reveal that a probabilistic ROI (pROI) generated from one group of 12 participants accurately predicts the location and functional selectivity in individual brains from a new group of 12 participants, despite between subject variability in the exact location of place-selective voxels relative to the folding of parahippocampal cortex. Additionally, the prediction accuracy of our pROI is significantly higher than that achieved by volume-based Talairach alignment. Comparing the location of the pROI of the PPA relative to published data from over 500 participants, including data from the Human Connectome Project, shows a striking convergence of the predicted location of the PPA and the cortical location of voxels exhibiting the highest place selectivity across studies using various methods and stimuli. Specifically, the most predictive anatomical location of voxels exhibiting the highest place selectivity in medial ventral temporal cortex is the junction of the collateral and anterior lingual sulci. Methodologically, we make this pROI freely available (vpnl.stanford.edu/PlaceSelectivity), which provides a means to accurately identify a functional region from anatomical MRI data when fMRI data are not available (for example, in patient populations). Theoretically, we consider different anatomical and functional factors that may contribute to the consistent anatomical location of place selectivity
Peeters, Bart; Geerts, Inge; Van Mullem, Mia; Micalessi, Isabel; Saegeman, Veroniek; Moerman, Jan
2016-05-01
Many hospitals opt for early postnatal discharge of newborns with a potential risk of readmission for neonatal hyperbilirubinemia. Assays/algorithms with the possibility to improve prediction of significant neonatal hyperbilirubinemia are needed to optimize screening protocols and safe discharge of neonates. This study investigated the predictive value of umbilical cord blood (UCB) testing for significant hyperbilirubinemia. Neonatal UCB bilirubin, UCB direct antiglobulin test (DAT), and blood group were determined, as well as the maternal blood group and the red blood cell antibody status. Moreover, in newborns with clinically apparent jaundice after visual assessment, plasma total bilirubin (TB) was measured. Clinical factors positively associated with UCB bilirubin were ABO incompatibility, positive DAT, presence of maternal red cell antibodies, alarming visual assessment and significant hyperbilirubinemia in the first 6 days of life. UCB bilirubin performed clinically well with an area under the receiver-operating characteristic curve (AUC) of 0.82 (95 % CI 0.80-0.84). The combined UCB bilirubin, DAT, and blood group analysis outperformed results of these parameters considered separately to detect significant hyperbilirubinemia and correlated exponentially with hyperbilirubinemia post-test probability. Post-test probabilities for neonatal hyperbilirubinemia can be calculated using exponential functions defined by UCB bilirubin, DAT, and ABO compatibility results. What is Known: • The diagnostic value of the triad umbilical cord blood bilirubin measurement, direct antiglobulin testing and blood group analysis for neonatal hyperbilirubinemia remains unclear in literature. • Currently no guideline recommends screening for hyperbilirubinemia using umbilical cord blood. What is New: • Post-test probability for hyperbilirubinemia correlated exponentially with umbilical cord blood bilirubin in different risk groups defined by direct antiglobulin test and ABO blood group
Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul Sf; Zhu, Tingshao
2015-01-01
Traditional offline assessment of suicide probability is time consuming, and it is difficult to convince at-risk individuals to participate. Identifying individuals with high suicide probability through online social media has an advantage in its efficiency and its potential to reach out to hidden individuals, yet little research has focused on this specific field. The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying microblog users with high suicide probability in China through profile and linguistic features extracted from Internet-based data. Nine hundred and nine Chinese microblog users completed an Internet survey, and those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as one SD above the mean in each of the four subscale scores in the participant sample, were labeled as high-risk individuals, respectively. Profile and linguistic features were fed into the two machine learning algorithms (SLR and RF) to train models that aim to identify high-risk individuals in general suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross validation, in which both the training set and the test set were generated under the stratified random sampling rule from the whole sample. Three classic performance metrics (Precision, Recall, F1 measure) and a specifically defined metric, "Screening Efficiency", were adopted to evaluate model effectiveness. Classification performance was generally matched between SLR and RF. Given the best performance of the classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening Efficiency of most models varied from 1/4 to 1/2. Precision of the models was generally below 30%. Individuals in China with high suicide
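The evaluation metrics named in this abstract can be computed as below. The precision, recall, and F1 formulas are standard; `screening_efficiency` is our guess at the paper's specially defined metric (assumed here to be the fraction of users flagged for follow-up, which may differ from the authors' definition):

```python
def precision_recall_f1(y_true, y_pred):
    """Standard binary-classification metrics (1 = labeled high-risk)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

def screening_efficiency(y_pred):
    """Assumed definition: fraction of users the model flags for follow-up."""
    return sum(y_pred) / len(y_pred)
```

For example, with 3 true high-risk users of whom 2 are retrieved and 1 false alarm, precision, recall, and F1 all equal 2/3.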
Smith, Carlas S.; Stallinga, Sjoerd; Lidke, Keith A.; Rieger, Bernd; Grunwald, David
2015-01-01
Single-molecule detection in fluorescence nanoscopy has become a powerful tool in cell biology but can present vexing issues in image analysis, such as limited signal, unspecific background, empirically set thresholds, image filtering, and false-positive detection limiting overall detection efficiency. Here we present a framework in which expert knowledge and parameter tweaking are replaced with a probability-based hypothesis test. Our method delivers robust and threshold-free signal detection with a defined error estimate and improved detection of weaker signals. The probability value has consequences for downstream data analysis, such as weighing a series of detections and corresponding probabilities, Bayesian propagation of probability, or defining metrics in tracking applications. We show that the method outperforms all current approaches, yielding a detection efficiency of >70% and a false-positive detection rate of <5% under conditions down to 17 photons/pixel background and 180 photons/molecule signal, which is beneficial for any kind of photon-limited application. Examples include limited brightness and photostability, phototoxicity in live-cell single-molecule imaging, and use of new labels for nanoscopy. We present simulations, experimental data, and tracking of low-signal mRNAs in yeast cells. PMID:26424801
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically of the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. Such techniques are likely to introduce artificial characteristics into the estimated pdf that are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from the GNSS measurements from the TNPGN-Active (Turkish National Permanent
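A minimal Gaussian-kernel KDE, together with the moments mentioned in the abstract, might look like this (the bandwidth choice and function names are illustrative, not the authors'):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Non-parametric pdf estimate: an average of Gaussian kernels
    centered at the observed values (e.g. TEC estimates)."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf

def moments(samples):
    """Mean, (population) variance, and kurtosis m4/var^2 of raw values."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    kurt = (sum((s - mean) ** 4 for s in samples) / n) / var ** 2
    return mean, var, kurt
```

Unlike a fitted Gaussian or exponential, the KDE pdf adapts its shape to the data; numerically integrating it over a wide grid recovers total probability close to 1.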
A generic probability based model to derive regional patterns of crops in time and space
NASA Astrophysics Data System (ADS)
Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan
2015-04-01
Croplands are not only key to the human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influencing soil erosion, and contributing substantially to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to the site conditions, economic boundary conditions, and the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information on NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observation. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and a given year depends on (a) the suitability of the land for the cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and (b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g. a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated against the statistics reported by the joint EU/CAPRI database. The assessment is given on NUTS2 level using percent bias as a measure, with a threshold of 15% as the minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error prone as crops that occupy only minor parts of the unit. However, still roughly 40% show an absolute percent bias above the 15% threshold. This
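The disaggregation step could be sketched as below (the paper's model is implemented in R and PostGIS; this is a Python illustration): each grid cell draws a crop with probability proportional to the regional share times the local suitability, and the result is checked against the reported statistics with a percent-bias measure. The combination rule and names are our illustrative assumptions, not the paper's exact model.

```python
import random

def allocate_crops(cells, shares, suitability, seed=0):
    """Draw one crop per grid cell, with probability proportional to
    regional crop share x local suitability (assumed combination rule)."""
    rng = random.Random(seed)
    allocation = {}
    for cell in cells:
        weights = [(crop, shares[crop] * suitability[cell].get(crop, 0.0))
                   for crop in shares]
        total = sum(w for _, w in weights)
        r = rng.random() * total
        for crop, w in weights:
            r -= w
            if r <= 0.0:
                allocation[cell] = crop
                break
        else:
            # guard against floating-point leftover
            allocation[cell] = weights[-1][0]
    return allocation

def percent_bias(allocation, shares):
    """Per-crop percent bias of the allocated area share vs. statistics."""
    n = len(allocation)
    return {crop: 100.0 * (sum(1 for c in allocation.values() if c == crop) / n
                           - share) / share
            for crop, share in shares.items()}
```

With uniform suitability, the allocated shares converge to the reported regional shares, so the percent bias stays small for crops with a large share.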
Adgate, J L; Barr, D B; Clayton, C A; Eberly, L E; Freeman, N C; Lioy, P J; Needham, L L; Pellizzari, E D; Quackenboss, J J; Roy, A; Sexton, K
2001-01-01
The Minnesota Children's Pesticide Exposure Study is a probability-based sample of 102 children 3-13 years old who were monitored for commonly used pesticides. During the summer of 1997, first-morning-void urine samples (1-3 per child) were obtained for 88% of study children and analyzed for metabolites of insecticides and herbicides: carbamates and related compounds (1-NAP), atrazine (AM), malathion (MDA), and chlorpyrifos and related compounds (TCPy). TCPy was present in 93% of the samples, whereas 1-NAP, MDA, and AM were detected in 45%, 37%, and 2% of samples, respectively. Measured intrachild means ranged from 1.4 microg/L for MDA to 9.2 microg/L for TCPy, and there was considerable intrachild variability. For children providing three urine samples, geometric mean TCPy levels were greater than the detection limit in 98% of the samples, and nearly half the children had geometric mean 1-NAP and MDA levels greater than the detection limit. Interchild variability was significantly greater than intrachild variability for 1-NAP (p = 0.0037) and TCPy (p < 0.0001). The four metabolites measured were not correlated within urine samples, and children's metabolite levels did not vary systematically by sex, age, race, household income, or putative household pesticide use. On a log scale, mean TCPy levels were significantly higher in urban than in nonurban children (7.2 vs. 4.7 microg/L; p = 0.036). Weighted population mean concentrations were 3.9 [standard error (SE) = 0.7; 95% confidence interval (CI), 2.5, 5.3] microg/L for 1-NAP, 1.7 (SE = 0.3; 95% CI, 1.1, 2.3) microg/L for MDA, and 9.6 (SE = 0.9; 95% CI, 7.8, 11) microg/L for TCPy. The weighted population results estimate the overall mean and variability of metabolite levels for more than 84,000 children in the census tracts sampled. Levels of 1-NAP were lower than reported adult reference range concentrations, whereas TCPy concentrations were substantially higher. Concentrations of MDA were detected more frequently
Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik
2017-01-01
This paper proposes a probability-based algorithm to track the LED in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles’ front and rear LED lights. The receivers are high speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred at a high degree, which makes it impossible to detect the LED as well as extract the information embedded in these frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to a LED is calculated. Then, the position of LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted by considering the incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed. PMID:28208637
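The combination of intensity likelihood and optical-flow prior described in this abstract could be sketched as a simple Bayesian update. The Gaussian intensity model and the dark-background assumption below are ours, for illustration only:

```python
import math

def led_pixel_probability(intensity, prior_from_flow, led_mean, noise_std):
    """Posterior probability that a pixel belongs to the LED, combining a
    Gaussian intensity likelihood with a prior derived from optical flow
    and statistics of previous frames (hypothetical simplification)."""
    # likelihood under the LED hypothesis: intensity near the LED's
    # historical mean brightness
    led_like = math.exp(-0.5 * ((intensity - led_mean) / noise_std) ** 2)
    # likelihood under the background hypothesis: assume a dark background
    bg_like = math.exp(-0.5 * (intensity / noise_std) ** 2)
    num = led_like * prior_from_flow
    den = num + bg_like * (1.0 - prior_from_flow)
    return num / den
```

A bright pixel near the historical LED intensity yields a posterior near 1; a dark pixel yields a posterior near 0; when the intensity is equally likely under both hypotheses, the optical-flow prior decides.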
Towards smart prosthetic hand: Adaptive probability-based skeletal muscle fatigue model.
Kumar, Parmod; Sebastian, Anish; Potluri, Chandrasekhar; Urfer, Alex; Naidu, D; Schoen, Marco P
2010-01-01
Skeletal muscle force can be estimated using surface electromyographic (sEMG) signals. Usually, the surface location for the sensors is near the respective muscle motor unit points. Skeletal muscles generate a spatial EMG signal, which causes cross talk between different sEMG sensors. In this study, an array of three sEMG sensors is used to capture the information of muscle dynamics in terms of sEMG signals. The recorded sEMG signals are filtered using optimized nonlinear Half-Gaussian Bayesian filters, and the muscle force signal using a Chebyshev type-II filter; the filter optimization is accomplished using Genetic Algorithms. Three discrete-time state-space muscle fatigue models are obtained using system identification and modal transformation for three sets of sensors for a single motor unit. The outputs of these three muscle fatigue models are fused with a probabilistic Kullback Information Criterion (KIC) for model selection. The final fused output is estimated with an adaptive probability of KIC, which provides improved force estimates.
Filled pause refinement based on the pronunciation probability for lecture speech.
Long, Yan-Hua; Ye, Hong
2015-01-01
Although automatic speech recognition has become quite proficient at recognizing or transcribing well-prepared fluent speech, the transcription of speech that contains many disfluencies, such as spontaneous conversational and lecture speech, remains problematic. Filled pauses (FPs) are the most frequently occurring disfluencies in this type of speech. Recent studies have shown that FPs increase the error rates of state-of-the-art speech transcription, primarily because most FPs are not well annotated or provided in training data transcriptions, and because of the similarities in acoustic characteristics between FPs and some common non-content words. To enhance the speech transcription system, we propose a new automatic refinement approach to detect FPs in British English lecture speech transcription. This approach combines the pronunciation probabilities for each word in the dictionary and acoustic language model scores for FP refinement through a modified speech recognition forced-alignment framework. We evaluate the proposed approach on the Reith Lectures speech transcription task, in which only imperfect training transcriptions are available. Successful results are achieved for both the development and evaluation datasets. Acoustic models trained on different styles of speech genres have been investigated with respect to FP refinement. To further validate the effectiveness of the proposed approach, speech transcription performance has also been examined using systems built on training data transcriptions with and without FP refinement.
Bažant, Zdeněk P.; Le, Jia-Liang; Bazant, Martin Z.
2009-01-01
The failure probability of engineering structures such as aircraft, bridges, dams, nuclear structures, and ships, as well as microelectronic components and medical implants, must be kept extremely low, typically <10−6. The safety factors needed to ensure it have so far been assessed empirically. For perfectly ductile and perfectly brittle structures, the empirical approach is sufficient because the cumulative distribution function (cdf) of random material strength is known and fixed. However, such an approach is insufficient for structures consisting of quasibrittle materials, which are brittle materials with inhomogeneities that are not negligible compared with the structure size. The reason is that the strength cdf of quasibrittle structure varies from Gaussian to Weibullian as the structure size increases. In this article, a recently proposed theory for the strength cdf of quasibrittle structure is refined by deriving it from fracture mechanics of nanocracks propagating by small, activation-energy-controlled, random jumps through the atomic lattice. This refinement also provides a plausible physical justification of the power law for subcritical creep crack growth, hitherto considered empirical. The theory is further extended to predict the cdf of structural lifetime at constant load, which is shown to be size- and geometry-dependent. The size effects on structure strength and lifetime are shown to be related and the latter to be much stronger. The theory fits previously unexplained deviations of experimental strength and lifetime histograms from the Weibull distribution. Finally, a boundary layer method for numerical calculation of the cdf of structural strength and lifetime is outlined. PMID:19561294
Dictionary-based probability density function estimation for high-resolution SAR data
NASA Astrophysics Data System (ADS)
Krylov, Vladimir; Moser, Gabriele; Serpico, Sebastiano B.; Zerubia, Josiane
2009-02-01
In the context of remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of pixel intensities. In this work, we develop a parametric finite mixture model for the statistics of pixel intensities in high-resolution synthetic aperture radar (SAR) images. This method is an extension of a previously existing method for lower-resolution images. The method integrates the stochastic expectation maximization (SEM) scheme and the method of log-cumulants (MoLC) with an automatic technique to select, for each mixture component, an optimal parametric model taken from a predefined dictionary of parametric probability density functions (pdfs). The proposed dictionary consists of eight state-of-the-art SAR-specific pdfs: Nakagami, log-normal, generalized Gaussian Rayleigh, heavy-tailed Rayleigh, Weibull, K-root, Fisher and generalized Gamma. The designed scheme is endowed with a novel initialization procedure and an algorithm to automatically estimate the optimal number of mixture components. The experimental results with a set of several high-resolution COSMO-SkyMed images demonstrate the high accuracy of the designed algorithm, both from the viewpoint of a visual comparison of the histograms and from the viewpoint of quantitative accuracy measures such as the correlation coefficient (above 99.5%). The method proves to be effective on all the considered images, remaining accurate for multimodal and highly heterogeneous scenes.
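The dictionary-selection step can be sketched with plain maximum-likelihood fits over a reduced dictionary of scipy distributions; this is an assumption-laden stand-in, since the paper's actual scheme uses SEM and the method of log-cumulants, which are not reproduced here:

```python
import numpy as np
from scipy import stats

# Hypothetical reduced dictionary of amplitude pdfs (the paper uses eight
# SAR-specific families; only ones readily available in scipy appear here).
dictionary = {
    "nakagami": stats.nakagami,
    "lognorm": stats.lognorm,
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
}

def best_pdf(samples):
    """For each candidate family, fit by maximum likelihood (location fixed
    at 0 for amplitude data) and return the family with the highest total
    log-likelihood."""
    scores = {}
    for name, dist in dictionary.items():
        params = dist.fit(samples, floc=0)
        scores[name] = np.sum(dist.logpdf(samples, *params))
    return max(scores, key=scores.get), scores

# Synthetic amplitude data drawn from a Nakagami distribution.
samples = stats.nakagami.rvs(2.0, scale=1.0, size=2000,
                             random_state=np.random.default_rng(1))
name, scores = best_pdf(samples)
print(name)
```

In the paper this selection runs per mixture component inside the SEM loop, with parameters estimated via MoLC rather than direct ML fitting.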
Nongpiur, Monisha E; Haaland, Benjamin A; Perera, Shamira A; Friedman, David S; He, Mingguang; Sakata, Lisandro M; Baskaran, Mani; Aung, Tin
2014-01-01
To develop a score along with an estimated probability of disease for detecting angle closure based on anterior segment optical coherence tomography (AS OCT) imaging. Cross-sectional study. A total of 2047 subjects 50 years of age and older were recruited from a community polyclinic in Singapore. All subjects underwent standardized ocular examination including gonioscopy and imaging by AS OCT (Carl Zeiss Meditec). Customized software (Zhongshan Angle Assessment Program) was used to measure AS OCT parameters. Complete data were available for 1368 subjects. Data from the right eyes were used for analysis. A stepwise logistic regression model with Akaike information criterion was used to generate a score that then was converted to an estimated probability of the presence of gonioscopic angle closure, defined as the inability to visualize the posterior trabecular meshwork for at least 180 degrees on nonindentation gonioscopy. Of the 1368 subjects, 295 (21.6%) had gonioscopic angle closure. The angle closure score was calculated from the shifted linear combination of the AS OCT parameters. The score can be converted to an estimated probability of having angle closure using the logistic relationship: estimated probability = e^score / (1 + e^score), where e is the base of the natural exponential. The score performed well in a second independent sample of 178 angle-closure subjects and 301 normal controls, with an area under the receiver operating characteristic curve of 0.94. A score derived from a single AS OCT image, coupled with an estimated probability, provides an objective platform for detection of angle closure. Copyright © 2014 Elsevier Inc. All rights reserved.
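The score-to-probability conversion quoted in the abstract is the standard logistic transform; a minimal sketch (the example scores are illustrative, not values from the study):

```python
import math

def probability_from_score(score):
    """Convert a logistic-regression score to an estimated probability:
    p = e^score / (1 + e^score)."""
    return math.exp(score) / (1.0 + math.exp(score))

print(round(probability_from_score(0.0), 3))  # 0.5: score 0 -> 50% probability
print(round(probability_from_score(2.0), 3))  # 0.881
```

A score of 0 maps to a 50% estimated probability, and the mapping is monotone, so ranking patients by score is equivalent to ranking them by estimated probability of angle closure.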
Fujimoto, Shinichiro; Kondo, Takeshi; Yamamoto, Hideya; Yokoyama, Naoyuki; Tarutani, Yasuhiro; Takamura, Kazuhisa; Urabe, Yoji; Konno, Kumiko; Nishizaki, Yuji; Shinozaki, Tomohiro; Kihara, Yasuki; Daida, Hiroyuki; Isshiki, Takaaki; Takase, Shinichi
2015-09-01
Existing methods to calculate the pre-test probability of obstructive coronary artery disease (CAD) have been established using selected high-risk patients who were referred to conventional coronary angiography. The purpose of this study is to develop and validate a new method for the pre-test probability of obstructive CAD using patients who underwent coronary CT angiography (CTA), which could be applicable to a wider patient population. Using 4137 consecutive patients with suspected CAD who underwent coronary CTA at our institution, a multivariate logistic regression model including clinical factors as covariates calculated the pre-test probability (K-score) of obstructive CAD determined by coronary CTA. The K-score was compared with the Duke clinical score using the area under the curve (AUC) of the receiver-operating characteristic curve. External validation was performed on an independent sample of 319 patients. The final model included eight significant predictors: age, gender, coronary risk factors (hypertension, diabetes mellitus, dyslipidemia, smoking), history of cerebral infarction, and chest symptoms. The AUC of the K-score was significantly greater than that of the Duke clinical score for both the derivation (0.736 vs. 0.699) and validation (0.714 vs. 0.688) data sets. Among patients who underwent coronary CTA, the newly developed K-score had better pre-test prediction ability for obstructive CAD than the Duke clinical score in a Japanese population.
NASA Astrophysics Data System (ADS)
Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md
2015-12-01
Students' understanding of probability concepts has been investigated from various perspectives. Competency, on the other hand, is often measured separately in the form of a test structure. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW, volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle, where students provided their responses through an online quiz. The study shows that the majority of students perceived little understanding about conditional and independent events prior to learning about them but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is indication of some increase in learning and knowledge about some probability concepts at the end of the two weeks of lessons on probability.
García-Sosa, Alfonso T; Oja, Mare; Hetényi, Csaba; Maran, Uko
2012-08-27
The increasing knowledge of both structure and activity of compounds provides a good basis for enhancing the pharmacological characterization of chemical libraries. In addition, pharmacology can be seen as incorporating both advances from molecular biology as well as chemical sciences, with innovative insight provided from studying target-ligand data from a ligand molecular point of view. Predictions and profiling of libraries of drug candidates have previously focused mainly on certain cases of oral bioavailability. Inclusion of other administration routes and disease-specificity would improve the precision of drug profiling. In this work, recent data are extended, and a probability-based approach is introduced for quantitative and gradual classification of compounds into categories of drugs/nondrugs, as well as for disease- or organ-specificity. Using experimental data of over 1067 compounds and multivariate logistic regressions, the classification shows good performance in training and independent test cases. The regressions have high statistical significance in terms of the robustness of coefficients and 95% confidence intervals provided by a 1000-fold bootstrapping resampling. Besides their good predictive power, the classification functions remain chemically interpretable, containing only one to five variables in total, and the physicochemical terms involved can be easily calculated. The present approach is useful for an improved description and filtering of compound libraries. It can also be applied sequentially or in combinations of filters, as well as adapted to particular use cases. The scores and equations may be able to suggest possible routes for compound or library modification. The data is made available for reuse by others, and the equations are freely accessible at http://hermes.chem.ut.ee/~alfx/druglogit.html.
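The 1000-fold bootstrap resampling used to obtain 95% confidence intervals can be illustrated on a single summary statistic; the labels below are synthetic stand-ins, not the study's compound data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical binary labels: 1 = drug, 0 = nondrug (stand-in for a
# drug/nondrug classification over a compound library).
labels = rng.binomial(1, 0.6, size=1067)

# 1000-fold bootstrap: resample with replacement, recompute the statistic.
boot = np.array([
    rng.choice(labels, size=labels.size, replace=True).mean()
    for _ in range(1000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile CI
print(f"drug fraction = {labels.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

In the paper the same resampling idea is applied to the logistic-regression coefficients themselves, giving robustness estimates and 95% confidence intervals for each term of the classification functions.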
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi
Data for human sleep study may be affected by internal and external influences. The recorded sleep data contains complex and stochastic factors, which increase the difficulties for computerized sleep stage determination techniques to be applied in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density function of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is performed based on conditional probability. The results showed close agreement with visual inspection by the clinician. The developed system can meet customized requirements in hospitals and institutions.
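The conditional-probability stage assignment can be sketched as a naive-Bayes-style rule over per-stage parameter pdfs; the stages, the single parameter, and the Gaussian values below are hypothetical placeholders for the expert knowledge database:

```python
import math

# Toy expert "database": per-stage Gaussian pdf parameters (mean, sd) for one
# EEG-derived parameter (hypothetical; the real system learns pdfs from
# clinician-scored recordings).
stage_pdfs = {
    "wake": (30.0, 8.0),
    "REM":  (18.0, 6.0),
    "N3":   (8.0,  4.0),
}

def gaussian_pdf(x, mu, sd):
    return math.exp(-((x - mu) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def classify(x, priors=None):
    """Pick the stage maximising P(stage | x), proportional to
    p(x | stage) * P(stage)."""
    priors = priors or {s: 1 / len(stage_pdfs) for s in stage_pdfs}
    post = {s: gaussian_pdf(x, *p) * priors[s] for s, p in stage_pdfs.items()}
    z = sum(post.values())
    return max(post, key=post.get), {s: v / z for s, v in post.items()}

stage, posterior = classify(10.0)
print(stage)
```

The posterior dictionary also quantifies how confident the assignment is, which is useful when parameter values fall between two stages' typical ranges.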
A Framework for the Statistical Analysis of Probability of Mission Success Based on Bayesian Theory
2014-06-01
Mission Success Prediction Capability (MSPC) is a Model-Based Systems Engineering (MBSE) approach to mission planning, used for the analysis of complex systems of precision strike (air-to-surface) weapons. This report focuses on holistically analysing such complex systems, more specifically those of precision strike (air-to-surface) weapons.
Results from probability-based, simplified, off-shore Louisiana CSEM hydrocarbon reservoir modeling
NASA Astrophysics Data System (ADS)
Stalnaker, J. L.; Tinley, M.; Gueho, B.
2009-12-01
Perhaps the biggest impediment to the commercial application of controlled-source electromagnetic (CSEM) geophysics to marine hydrocarbon exploration is the inefficiency of modeling and data inversion. If an understanding of the typical (in a statistical sense) geometrical and electrical nature of a reservoir can be attained, then it is possible to derive therefrom a simplified yet accurate model of the electromagnetic interactions that produce a measured marine CSEM signal, leading ultimately to efficient modeling and inversion. We have compiled geometric and resistivity measurements from roughly 100 known, producing off-shore Louisiana Gulf of Mexico reservoirs. Recognizing that most reservoirs could be recreated roughly from a sectioned hemi-ellipsoid, we devised a unified, compact reservoir geometry description. Each reservoir was initially fit to the ellipsoid by eye, though we plan in the future to perform a more rigorous least-squares fit. We created, using kernel density estimation, initial probabilistic descriptions of reservoir parameter distributions, with the understanding that additional information would not fundamentally alter our results, but rather increase accuracy. From the probabilistic description, we designed an approximate model consisting of orthogonally oriented current segments distributed across the ellipsoid--enough to define the shape, yet few enough to be resolved during inversion. The moment and length of the currents are mapped to geometry and resistivity of the ellipsoid. The probability density functions (pdfs) derived from reservoir statistics serve as a workbench. We first use the pdfs in a Monte Carlo simulation designed to assess the detectability of off-shore Louisiana reservoirs using magnitude versus offset (MVO) anomalies. From the pdfs, many reservoir instances are generated (using rejection sampling) and each normalized MVO response is calculated. The response strength is summarized by numerically computing MVO power, and that
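The rejection-sampling step used to generate reservoir instances can be sketched in one dimension; the pdf below is a hypothetical stand-in for the kernel-density estimates described above, not data from the study:

```python
import math
import random

random.seed(7)

# Hypothetical 1-D reservoir-parameter pdf (unnormalised), e.g. a bimodal
# thickness distribution in metres, standing in for a KDE over real data.
def pdf(x):
    return 0.7 * math.exp(-((x - 20) ** 2) / 50) + \
           0.3 * math.exp(-((x - 60) ** 2) / 200)

X_MAX, F_MAX = 100.0, 1.0  # pdf(x) <= 1 on [0, 100]

def rejection_sample(n):
    """Draw n parameter values: propose uniformly on [0, X_MAX], accept a
    proposal x with probability pdf(x) / F_MAX."""
    out = []
    while len(out) < n:
        x = random.uniform(0, X_MAX)
        if random.uniform(0, F_MAX) < pdf(x):
            out.append(x)
    return out

samples = rejection_sample(5000)
mean_thickness = sum(samples) / len(samples)
print(round(mean_thickness, 1))
```

In the Monte Carlo detectability study, each accepted sample would parameterize one reservoir instance whose MVO response is then computed and summarized.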
2012-01-01
Background Osteoporotic hip fractures represent a major cause of disability, loss of quality of life and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture derived from BMD measurements. The combination of biomechanical models with clinical studies could better estimate bone strength and support specialists in their decisions. Methods A model to assess the probability of fracture, based on Damage and Fracture Mechanics, has been developed, evaluating the mechanical magnitudes involved in the fracture process from clinical BMD measurements. The model is intended to simulate the degenerative process in the skeleton, with the consequent loss of bone mass and hence the decrease of its mechanical resistance, which enables fracture under different traumatisms. Clinical studies were chosen, both under non-treatment conditions and under drug therapy, and fitted to specific patients according to their actual BMD measures. The predictive model is applied in a FE simulation of the proximal femur. The fracture zone is determined according to the loading scenario (sideways fall, impact, accidental loads, etc.), using the mechanical properties of bone obtained from the evolutionary model at the considered time. Results BMD evolution in untreated patients and in those under different treatments was analyzed. Evolutionary curves of fracture probability were obtained from the evolution of mechanical damage. The evolutionary curve of the untreated group of patients presented a marked increase in fracture probability, while the curves of patients under drug treatment showed variably decreased risks, depending on the therapy type. Conclusion The FE model allowed detailed maps of damage and fracture probability to be obtained, identifying high-risk local zones at the femoral neck and in the intertrochanteric and subtrochanteric areas, which are the typical locations of osteoporotic hip fractures. The
Viego, Valentina; Temporelli, Karina
2017-01-01
Background Hypertension, diabetes and hypercholesterolemia are the most frequently diagnosed chronic diseases in Argentina. They contribute largely to the burden of chronic disease and are strongly influenced by a small number of risk factors. These risk factors are all modifiable at the population and individual level and offer major prospects for prevention. We are interested in the socioeconomic determinants of the prevalence of these 3 diseases. Design and methods We estimate a 3-equation probit model, combined with 3 separate probit estimations and a probit-based Heckman correction to account for possible sample selection bias. Estimations were carried out using secondary self-reported data from the 2013 Risk Factor National Survey. Results We find a negative association between socioeconomic status (SES) and the prevalence of hypertension, hypercholesterolemia and diabetes; the main increases concentrate in the transition from low to high SES for hypertension and diabetes. For cholesterol, the major effect takes place when an individual crosses from low to middle SES and then vanishes. In Argentina, SES exhibits an independent effect on chronic diseases beyond that mediated by habits and body weight. Conclusions Public strategies to prevent chronic diseases must be specially targeted at women, the poorest households and the least educated individuals in order to be effective. Also, since the probability of having a condition related to excessive blood pressure or high levels of cholesterol or glucose in the blood does not increase proportionally with age, public campaigns promoting healthy diets, physical activity and medical checkups should focus on young individuals to facilitate prophylaxis. Significance for public health Latin American countries are going through an epidemiological transition in which infectious illnesses are being superseded by chronic diseases which, in turn, are related to lifestyles and socioeconomic factors. Specificities in the
The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach (visit our website for more information on the NCS: www.nationalchildrensstudy.gov). The full sample is designed to be representative of ...
Davies, Christopher E; Giles, Lynne C; Glonek, Gary Fv
2017-01-01
One purpose of a longitudinal study is to gain insight into how characteristics at earlier points in time can impact on subsequent outcomes. Typically, the outcome variable varies over time and the data for each individual can be used to form a discrete path of measurements, that is a trajectory. Group-based trajectory modelling methods seek to identify subgroups of individuals within a population with trajectories that are more similar to each other than to trajectories in distinct groups. An approach to modelling the influence of covariates measured at earlier time points in the group-based setting is to consider models wherein these covariates affect the group membership probabilities. Models in which prior covariates impact the trajectories directly are also possible but are not considered here. In the present study, we compared six different methods for estimating the effect of covariates on the group membership probabilities, which differ in how they account for the uncertainty in the group membership assignment. We found that when investigating the effect of one or several covariates on a group-based trajectory model, the full likelihood approach minimized the bias in the estimate of the covariate effect. In this '1-step' approach, the estimation of the effect of covariates and the trajectory model are carried out simultaneously. Of the '3-step' approaches, where the effect of the covariates is assessed subsequent to the estimation of the group-based trajectory model, only Vermunt's improved 3-step approach resulted in bias estimates similar in size to the full likelihood approach. The remaining methods considered resulted in considerably higher bias in the covariate effect estimates and should not be used. In addition to the bias empirically demonstrated for the probability regression approach, we have shown analytically that it is biased in general.
An EEG-Based Fuzzy Probability Model for Early Diagnosis of Alzheimer's Disease.
Chiang, Hsiu-Sen; Pao, Shun-Chi
2016-05-01
Alzheimer's disease is a degenerative brain disease that results in cardinal memory deterioration and significant cognitive impairments. The early treatment of Alzheimer's disease can significantly reduce deterioration. Early diagnosis is difficult, and early symptoms are frequently overlooked. While much of the literature focuses on disease detection, the use of electroencephalography (EEG) in Alzheimer's diagnosis has received relatively little attention. This study combines the fuzzy and associative Petri net methodologies to develop a model for the effective and objective detection of Alzheimer's disease. Differences in EEG patterns between normal subjects and Alzheimer patients are used to establish prediction criteria for Alzheimer's disease, potentially providing physicians with a reference for early diagnosis, allowing for early action to delay the disease progression.
Translating CFC-based piston ages into probability density functions of ground-water age in karst
Long, A.J.; Putnam, L.D.
2006-01-01
Temporal age distributions are equivalent to probability density functions (PDFs) of transit time. The type and shape of a PDF provides important information related to ground-water mixing at the well or spring and the complex nature of flow networks in karst aquifers. Chlorofluorocarbon (CFC) concentrations measured for samples from 12 locations in the karstic Madison aquifer were used to evaluate the suitability of various PDF types for this aquifer. Parameters of PDFs could not be estimated within acceptable confidence intervals for any of the individual sites. Therefore, metrics derived from CFC-based apparent ages were used to evaluate results of PDF modeling in a more general approach. The ranges of these metrics were established as criteria against which families of PDFs could be evaluated for their applicability to different parts of the aquifer. Seven PDF types, including five unimodal and two bimodal models, were evaluated. Model results indicate that unimodal models may be applicable to areas close to conduits that have younger piston (i.e., apparent) ages and that bimodal models probably are applicable to areas farther from conduits that have older piston ages. The two components of a bimodal PDF are interpreted as representing conduit and diffuse flow, and transit times of as much as two decades may separate these PDF components. Areas near conduits may be dominated by conduit flow, whereas areas farther from conduits having bimodal distributions probably have good hydraulic connection to both diffuse and conduit flow. © 2006 Elsevier B.V. All rights reserved.
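A bimodal transit-time PDF of the kind described can be sketched as a two-component lognormal mixture; the weights and modal ages below are illustrative assumptions, not fitted values from the study:

```python
import math

def lognormal_pdf(t, mu, sigma):
    """Lognormal density in transit time t (years)."""
    return math.exp(-((math.log(t) - mu) ** 2) / (2 * sigma ** 2)) \
        / (t * sigma * math.sqrt(2 * math.pi))

def bimodal_transit_pdf(t, w=0.4,
                        conduit=(math.log(5), 0.5),    # fast conduit flow
                        diffuse=(math.log(25), 0.4)):  # slower diffuse flow
    """Mixture: weight w on the conduit component, 1 - w on diffuse flow,
    separated by roughly two decades of transit time."""
    return w * lognormal_pdf(t, *conduit) + (1 - w) * lognormal_pdf(t, *diffuse)

# Mean transit time via a simple Riemann sum over [0.1, 200] years.
step = 0.05
ts = [0.1 + i * step for i in range(4000)]
mean_age = sum(t * bimodal_transit_pdf(t) * step for t in ts)
print(round(mean_age, 1))
```

The mean of such a mixture sits between the two modes, which is one reason a single piston (apparent) age can misrepresent a spring fed by both conduit and diffuse flow.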
Probability-Based Design Criteria of the ASCE 7 Tsunami Loads and Effects Provisions (Invited)
NASA Astrophysics Data System (ADS)
Chock, G.
2013-12-01
Mitigation of tsunami risk requires a combination of emergency preparedness for evacuation in addition to providing structural resilience of critical facilities, infrastructure, and key resources necessary for immediate response and economic and social recovery. Critical facilities would include emergency response, medical, tsunami refuges and shelters, ports and harbors, lifelines, transportation, telecommunications, power, financial institutions, and major industrial/commercial facilities. The Tsunami Loads and Effects Subcommittee of the ASCE/SEI 7 Standards Committee is developing a proposed new Chapter 6 - Tsunami Loads and Effects for the 2016 edition of the ASCE 7 Standard. ASCE 7 provides the minimum design loads and requirements for structures subject to building codes such as the International Building Code utilized in the USA. In this paper we provide a review emphasizing the intent of these new code provisions and explain the design methodology. The ASCE 7 provisions for Tsunami Loads and Effects enable a set of analysis and design methodologies that are consistent with performance-based engineering based on probabilistic criteria. The ASCE 7 Tsunami Loads and Effects chapter will be initially applicable only to the states of Alaska, Washington, Oregon, California, and Hawaii. Ground shaking effects and subsidence from a preceding local offshore Maximum Considered Earthquake will also be considered prior to tsunami arrival for Alaska and states in the Pacific Northwest regions governed by nearby offshore subduction earthquakes. For national tsunami design provisions to achieve a consistent reliability standard of structural performance for community resilience, a new generation of tsunami inundation hazard maps for design is required. The lesson of recent tsunamis is that historical records alone do not provide a sufficient measure of the potential heights of future tsunamis. Engineering design must consider the occurrence of events greater than
Meerwijk, Esther L; Sevelius, Jae M
2017-02-01
Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as "gray" literature, through an Internet search. We limited the search to 2006 through 2016. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask about transgender identity may account for residual heterogeneity in our models. Public health implications. Under- or nonrepresentation
Sevelius, Jae M.
2017-01-01
Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask
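The extrapolation step of the meta-regression can be illustrated with a short sketch. The 2015 base estimate below is hypothetical, chosen only so that the reported slope of 0.026% per survey year reproduces the paper's 2016 figure of roughly 390 adults per 100 000; the function itself is just a linear trend projection.

```python
def extrapolate_prevalence(base_year, base_pct, slope_pct_per_year, target_year):
    """Linear extrapolation of a meta-regression trend in prevalence,
    expressed as a percentage of the adult population."""
    return base_pct + slope_pct_per_year * (target_year - base_year)

# Hypothetical 2015 pooled estimate of 0.364%; the reported trend of
# 0.026% per survey year then gives 0.39% in 2016, i.e. about
# 390 adults per 100 000.
est_2016 = extrapolate_prevalence(2015, 0.364, 0.026, 2016)
per_100k = est_2016 * 1000  # percent of population -> per 100 000
```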
NASA Astrophysics Data System (ADS)
Li, Hanshan; Lei, Zhiyong
2013-01-01
To improve the precision of projectile coordinate measurement in fire measurement systems, this paper introduces the optical-fiber-coding fire measurement method and its principle, sets up the corresponding measurement model, and analyzes coordinate errors using the differential method. To study the distribution of projectile coordinate positions, statistical hypothesis testing was used to analyze the distribution law, and the firing dispersion and the probability of a projectile hitting the object center were studied. The results show that, at the given significance level, the exponential distribution is a reasonable fit for the projectile position distribution. Experiments and calculations show that the optical-fiber-coding fire measurement method is sound and feasible and can yield accurate projectile coordinate positions.
Computing posterior probabilities for score-based alignments using ppALIGN.
Wolfsheimer, Stefan; Hartmann, Alexander; Rabus, Ralf; Nuel, Gregory
2012-05-16
Score-based pairwise alignments are widely used in bioinformatics, in particular with molecular database search tools such as the BLAST family. Due to sophisticated heuristics, such algorithms are usually fast, but the underlying scoring model unfortunately lacks a statistical description of the reliability of the reported alignments. In particular, close to gaps and in low-score or low-complexity regions, a huge number of alternative alignments arise, which decreases the certainty of the alignment. ppALIGN is a software package that uses hidden Markov model techniques to compute position-wise reliability of score-based pairwise alignments of DNA or protein sequences. The design of the model allows for a direct connection between the scoring function and the parameters of the probabilistic model. For this reason it is suitable for analyzing the outcomes of popular score-based aligners and search tools without having to choose a complicated set of parameters. By contrast, our program only requires the classical score parameters (the scoring function and gap costs). The package comes with a library written in C++, a standalone program for user-defined alignments (ppALIGN) and another program (ppBLAST) which can process a complete result set of BLAST. The main algorithms essentially exhibit linear time complexity (in the alignment lengths), and they are hence suitable for on-line computations. We have also included alternative decoding algorithms to provide alternative alignments. ppALIGN is a fast program/library that helps detect and quantify questionable regions in pairwise alignments. Due to its structure and input/output interface, it can be connected to other post-processing tools. Empirically, we illustrate its usefulness in terms of correctly predicted reliable regions for sequences generated using the ROSE model for sequence evolution, and identify sensor-specific regions in the denitrifying betaproteobacterium Aromatoleum aromaticum.
Base Rate Effects on the Interpretation of Probability and Frequency Expressions
1988-06-01
defined context. However, for this approach to risk analysis to have any hope of success, it is necessary that the membership functions for specific ... as chance, and 0.60 and 0.70 as likely. Other probability terms are not all [illegible], but they can be used in other ways. For example ... perceived base rates. To summarize, the design can be conceptualized in either of two ways, both of which were utilized for analysis. First, each of
NASA Astrophysics Data System (ADS)
Eleftheriadou, Anastasia K.; Baltzopoulou, Aikaterini D.; Karabinis, Athanasios I.
2016-06-01
The current seismic risk assessment is based on two discrete approaches, actual and probable, and the produced results are validated against each other. In the first part of this research, the seismic risk is evaluated from the available data on the mean statistical repair/strengthening or replacement cost for the total number of damaged structures (180,427 buildings) after the 7/9/1999 Parnitha (Athens) earthquake. The actual evaluated seismic risk is then compared to the estimated probable structural losses, presented in the second part of the paper, based on a damage scenario for the same earthquake. The applied damage scenario is based on recently developed damage probability matrices (DPMs) from the Athens (Greece) damage database. The seismic risk estimation refers to 750,085 buildings situated in the extended urban region of Athens. The building exposure is categorized into five typical structural types and represents 18.80% of the entire building stock in Greece; this information is provided by the National Statistics Service of Greece (NSSG) according to the 2000-2001 census. The seismic input is characterized by the ratio ag/ao, where ag is the regional peak ground acceleration (PGA), evaluated from the earlier estimated macroseismic intensities, and ao is the PGA according to the hazard map of the 2003 Greek Seismic Code. Finally, financial data collected from the different National Services responsible for post-earthquake crisis management, covering repair/strengthening or replacement costs and other categories of rehabilitation costs for earthquake victims (construction and operation of settlements for the earthquake homeless, rent support, demolitions, shorings), are used to determine the final total seismic risk factor.
Probability of Occurrence of Life-Limiting Fatigue Mechanism in P/M Nickel-Based Alloys (Postprint)
2016-03-30
(1) has to be greater than 1 for initiation of a life-limiting failure. After rearranging, this criterion can be represented by [equation illegible in source] ... be used to assess the likelihood of the life-limiting mechanism in other specimen geometries by accounting for the total stressed volume after ... AFRL-RX-WP-JA-2017-0146 PROBABILITY OF OCCURRENCE OF LIFE-LIMITING FATIGUE MECHANISM IN P/M NICKEL-BASED ALLOYS (POSTPRINT) M.J
EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code, a basic understanding of the EUPDF code structure, and the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.
Don’t make cache too complex: A simple probability-based cache management scheme for SSDs
Cho, Sangyeun; Choi, Jongmoo
2017-01-01
Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to that of a more complex reference counter-based cache-management scheme. PMID:28358897
Don't make cache too complex: A simple probability-based cache management scheme for SSDs.
Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo
2017-01-01
Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to that of a more complex reference counter-based cache-management scheme.
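The admission idea described in the abstract is small enough to sketch. The snippet below is a hypothetical illustration of a probability-based admission policy in the spirit of ProCache, not the authors' implementation: the class name, the LRU eviction choice, and all parameter values are assumptions.

```python
import random

class ProbabilisticCache:
    """A write enters the cache only with probability admit_prob, so
    frequently written (hot) blocks are admitted with high overall
    probability while cold blocks are usually filtered out.
    Capacity management is simplified to plain LRU."""

    def __init__(self, capacity, admit_prob, rng=None):
        self.capacity = capacity
        self.admit_prob = admit_prob
        self.rng = rng or random.Random()
        self.entries = []  # most recently used last

    def access(self, block):
        if block in self.entries:      # hit: refresh recency
            self.entries.remove(block)
            self.entries.append(block)
            return True
        if self.rng.random() < self.admit_prob:  # probabilistic admission
            if len(self.entries) >= self.capacity:
                self.entries.pop(0)    # evict least recently used
            self.entries.append(block)
        return False

def admission_probability(p, k):
    """Probability a block has entered the cache after k write attempts."""
    return 1 - (1 - p) ** k
```

With p = 0.1, a block written 20 times has entered the cache with probability 1 - 0.9^20, about 0.88, while a block written once enters only 10% of the time; this is the filtering effect the abstract describes.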
Research on particle swarm optimization algorithm based on optimal movement probability
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Han; He, Baofeng
2017-01-01
Particle swarm optimization (PSO) can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, search efficiency is low and the algorithm easily falls into local convergence. An improved particle swarm optimization algorithm is therefore proposed based on error back-propagation gradient descent. The particles are ranked by fitness so that the optimization problem is considered as a whole, while error back-propagation gradient descent trains the BP neural network. Each particle updates its velocity and position according to its individual optimum and the global optimum; making the particles learn more from the social optimum and less from their individual optima helps them avoid local optima, and using gradient information accelerates the local search of the PSO, improving search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and then stays close to it, achieving faster convergence and better search performance in the same running time.
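The velocity and position updates that drive such a search can be illustrated with a minimal standard PSO sketch. This is plain PSO on a sphere function, without the paper's back-propagation hybridization; the inertia weight, acceleration coefficients, and bounds are conventional illustrative values, not the authors' settings.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization. Each particle is pulled
    toward its personal best (c1 term) and the swarm's global best
    (c2 term); the inertia weight w damps the velocity."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: minimum value 0 at the origin.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

The two learning terms are exactly the "individual optimum" and "global optimum" attractions the abstract refers to; weighting c2 above c1 would implement its "learn more from the social optimum" idea.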
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
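A toy version of an elastic-rebound-style conditional rupture probability can be sketched as follows. A lognormal renewal model is assumed here purely for illustration; the paper's methodology additionally involves along-fault averaging of renewal parameters and magnitude-dependent aperiodicity, which this sketch omits.

```python
import math

def lognormal_cdf(t, mu, sigma):
    """CDF of a lognormal distribution with log-mean mu, log-sd sigma."""
    return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def conditional_prob(t_elapsed, dt, mean_ri, aperiodicity):
    """P(rupture within dt | open interval of t_elapsed) under a
    lognormal renewal model parameterized by its mean recurrence
    interval and aperiodicity (coefficient of variation)."""
    sigma = math.sqrt(math.log(1 + aperiodicity ** 2))
    mu = math.log(mean_ri) - 0.5 * sigma ** 2
    F = lambda t: lognormal_cdf(t, mu, sigma)
    return (F(t_elapsed + dt) - F(t_elapsed)) / (1 - F(t_elapsed))
```

For a 200-year mean recurrence interval with aperiodicity 0.5, the 30-year conditional probability grows substantially as the elapsed time since the last event increases, which is the elastic-rebound predictability the abstract discusses.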
Borguesan, Bruno; Barbachan e Silva, Mariel; Grisci, Bruno; Inostroza-Ponta, Mario; Dorn, Márcio
2015-12-01
Tertiary protein structure prediction is one of the most challenging problems in structural bioinformatics. Despite the advances in algorithm development and computational strategies, predicting the folded structure of a protein only from its amino acid sequence remains an unsolved problem. We present a new computational approach to predict the native-like three-dimensional structure of proteins. Conformational preferences of amino acid residues and secondary structure information were obtained from protein templates stored in the Protein Data Bank and represented as an Angle Probability List. Two knowledge-based prediction methods based on Genetic Algorithms and Particle Swarm Optimization were developed using this information. The proposed method has been tested with twenty-six case studies selected to validate our approach with different classes of proteins and folding patterns. Stereochemical and structural analyses were performed for each predicted three-dimensional structure. The results suggest that the Angle Probability List can improve the effectiveness of metaheuristics used to predict the three-dimensional structure of protein molecules by reducing the conformational search space. Copyright © 2015 Elsevier Ltd. All rights reserved.
Xie, Xin-Ping; Xie, Yu-Feng; Wang, Hong-Qiang
2017-08-23
Large-scale accumulation of omics data poses a pressing challenge for integrative analysis of multiple data sets in bioinformatics. An open question in such integrative analysis is how to pinpoint consistent but subtle gene activity patterns across studies. Study heterogeneity needs to be addressed carefully for this goal. This paper proposes a regulation probability model-based meta-analysis, jGRP, for identifying differentially expressed genes (DEGs). The method integrates multiple transcriptomics data sets in a gene regulatory space instead of in a gene expression space, which makes it easy to capture and manage data heterogeneity across studies from different laboratories or platforms. Specifically, we transform gene expression profiles into a united gene regulation profile across studies by mathematically defining two gene regulation events between two conditions and estimating their occurring probabilities in a sample. Finally, a novel differential expression statistic is established based on the gene regulation profiles, realizing accurate and flexible identification of DEGs in gene regulation space. We evaluated the proposed method on simulation data and real-world cancer datasets and showed the effectiveness and efficiency of jGRP in DEG identification in the context of meta-analysis. Data heterogeneity largely influences the performance of meta-analysis of DEG identification. Different existing meta-analysis methods were shown to exhibit very different degrees of sensitivity to study heterogeneity. The proposed method, jGRP, can be a standalone tool due to its united framework and controllable way of dealing with study heterogeneity.
Bratt, Ola; Drevin, Linda; Akre, Olof; Garmo, Hans; Stattin, Pär
2016-10-01
Familial prostate cancer risk estimates are inflated by clinically insignificant low-risk cancer, diagnosed after prostate-specific antigen testing. We provide age-specific probabilities of non-low- and high-risk prostate cancer. Fifty-one thousand, eight hundred ninety-seven brothers of 32 807 men with prostate cancer were identified in Prostate Cancer data Base Sweden (PCBaSe). Nelson-Aalen estimates with 95% confidence intervals (CIs) were calculated for cumulative, family history-stratified probabilities of any, non-low- (any of Gleason score ≥ 7, prostate-specific antigen [PSA] ≥ 10 ng/mL, T3-4, N1, and/or M1) and high-risk prostate cancer (Gleason score ≥ 8 and/or T3-4 and/or PSA ≥ 20 ng/mL and/or N1 and/or M1). The population probability of any prostate cancer was 4.8% (95% CI = 4.8% to 4.9%) at age 65 years and 12.9% (95% CI = 12.8% to 12.9%) at age 75 years, of non-low-risk prostate cancer 2.8% (95% CI = 2.7% to 2.8%) at age 65 years and 8.9% (95% CI = 8.8% to 8.9%) at age 75 years, and of high-risk prostate cancer 1.4% (95% CI = 1.3% to 1.4%) at age 65 years and 5.2% (95% CI = 5.1% to 5.2%) at age 75 years. For men with one affected brother, probabilities of any prostate cancer were 14.9% (95% CI = 14.1% to 15.8%) at age 65 years and 30.3% (95% CI = 29.3% to 31.3%) at age 75 years, of non-low-risk prostate cancer 7.3% (95% CI = 6.7% to 7.9%) at age 65 years and 18.8% (95% CI = 17.9% to 19.6%) at age 75 years, and of high-risk prostate cancer 3.0% (95% CI = 2.6% to 3.4%) at age 65 years and 8.9% (95% CI = 8.2% to 9.5%) at age 75 years. Probabilities were higher for men with a stronger family history. For example, men with two affected brothers had a 13.6% (95% CI = 9.9% to 17.6 %) probability of high-risk cancer at age 75 years. The age-specific probabilities of non-low- and high-risk cancer presented here are more informative than relative risks of any prostate cancer and more suitable to use
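The Nelson-Aalen machinery behind these cumulative probabilities can be sketched in a few lines. This is a toy version handling right-censoring only; converting the cumulative hazard H to a cumulative probability via 1 - exp(-H) is one common convention and is an assumption here, as are the example times.

```python
import math

def nelson_aalen_cumprob(event_times, censor_times):
    """Nelson-Aalen cumulative hazard H(t) = sum(d_i / n_i) over
    distinct event times (d_i events among n_i still at risk), with
    the cumulative probability taken as 1 - exp(-H(t)).
    Returns a list of (time, cumulative probability) pairs."""
    all_times = sorted(event_times + censor_times)
    H, out = 0.0, []
    for t in sorted(set(event_times)):
        n_at_risk = sum(1 for u in all_times if u >= t)
        d = event_times.count(t)
        H += d / n_at_risk
        out.append((t, 1 - math.exp(-H)))
    return out
```

With events at times 2 and 4 and censoring at 3 and 5, the hazard increments are 1/4 and then 1/2, giving cumulative probabilities 1 - exp(-0.25) and 1 - exp(-0.75).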
Wang, Bei; Wang, Xingyu; Zhang, Tao; Nakamura, Masatoshi
2013-01-01
An automatic sleep level estimation method was developed for monitoring and regulation of daytime naps. The recorded nap data are separated into continuous 5-second segments, and features are extracted from the EEGs, EOGs and EMG. A sleep level parameter is defined and estimated based on the conditional probability of sleep stages, and an exponential smoothing method is applied to the estimated sleep level. Twelve healthy subjects, with an average age of 22 years, participated in the experimental work. Compared with sleep stage determination, the presented sleep level estimation method showed better performance for nap sleep interpretation. Real-time monitoring and regulation of naps are realizable based on the developed technique.
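The two numeric steps, a conditional-probability-weighted sleep level per segment and its exponential smoothing, can be sketched as follows. The stage coding (0 = wake, 1, 2, ... for deeper stages) and the smoothing constant are illustrative assumptions, not the authors' exact parameters.

```python
def sleep_level(stage_probs):
    """Expected sleep depth for one segment, given conditional
    probabilities per stage, e.g. {0: P(wake), 1: P(stage 1), ...}."""
    return sum(stage * p for stage, p in stage_probs.items())

def smooth(levels, alpha=0.3):
    """Exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1},
    initialized at the first observation."""
    s = [levels[0]]
    for x in levels[1:]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return s
```

A step change in the raw level then approaches the new value geometrically, which is the smoothing behavior that makes the estimate usable for online nap regulation.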
NASA Astrophysics Data System (ADS)
Chen, Ning; Yu, Dejie; Xia, Baizhan; Beer, Michael
2016-12-01
Imprecise probabilities can capture epistemic uncertainty, which reflects limited available knowledge so that a precise probabilistic model cannot be established. In this paper, the parameters of a structural-acoustic problem are represented with the aid of p-boxes to capture epistemic uncertainty in the model. To perform the necessary analysis of the structural-acoustic problem with p-boxes, a first-order matrix decomposition perturbation method (FMDPM) for interval analysis is proposed, and an efficient interval Monte Carlo method based on FMDPM is derived. In the implementation of the efficient interval Monte Carlo method based on FMDPM, constant matrices are obtained, first, through an uncertain parameter extraction on the basis of the matrix decomposition technique. Then, these constant matrices are employed to perform multiple interval analyses by using the first-order perturbation method. A numerical example is provided to illustrate the feasibility and effectiveness of the presented approach.
1994-06-27
the check Qb3, which forces the exchange of queens, leads to a win (it does). If it does not, then white would be foolish to give away his existing... [OCR-garbled game scores between Hitech and Hitech 5.6; move lists illegible] ... 16, 1994
Anusavice, Kenneth J.; Jadaan, Osama M.; Esquivel-Upshaw, Josephine
2013-01-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. Objective The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Materials and methods Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2) and ceramic veneer (Empress 2 Veneer Ceramic). Results Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). Conclusion CARES/Life results support the proposed crown design and load orientation hypotheses. Significance The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach to optimizing fixed dental prosthesis designs produced from dental ceramics and predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses, which can minimize the risk of clinical failures. PMID:24060349
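The probabilistic core of such an analysis is the Weibull fracture probability; the sketch below shows only that piece. CARES/Life additionally integrates over stressed volume or area and models slow crack growth for the time dependence, all of which is omitted here, and the parameter values are illustrative, not from the study.

```python
import math

def weibull_pf(stress, sigma_0, m):
    """Two-parameter Weibull probability of fracture at a given stress:
    P_f = 1 - exp(-(stress / sigma_0) ** m), where sigma_0 is the
    characteristic strength and m the Weibull modulus."""
    return 1 - math.exp(-(stress / sigma_0) ** m)
```

At the characteristic strength the fracture probability is 1 - exp(-1), about 63%, and a larger modulus m means a sharper transition from survival to failure around sigma_0.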
Park, Dong-Uk; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Armenti, Karla R.; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A
2014-01-01
We describe here an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates using both cases and controls and controls only were developed. The prevalence of machining varied substantially across job groups (10-90%), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally, 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately, 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and the US production levels by decade found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. PMID:25256317
Hedman, M; Björk-Eriksson, T; Brodin, O; Toma-Dasu, I
2013-05-01
The aim of this study was to compare patient-specific radiobiological parameters with population averages in predicting the clinical outcome after radiotherapy (RT) using a tumour control probability (TCP) model based on the biological effective dose (BED). A previously published study of 46 head and neck carcinomas with individually identified radiobiological parameters, radiosensitivity and potential doubling time (Tpot), and known tumour size was investigated. These patients had all been treated with external beam RT, and the majority had also received brachytherapy. The TCP for each individual based on the BED using patient-specific radiobiological parameters was compared with the TCP based on the BED using average radiobiological parameters (α=0.3 Gy(-1), Tpot=3 days). 43 patients remained in the final analysis. There was only a weak trend for increasing local tumour control with increasing BED in both groups. However, when the TCP was calculated, the use of patient-specific parameters was better for identifying local control correctly. The sensitivity and specificity for tumour-specific parameters were 63% and 80%, respectively. The corresponding values for population-based averages were 0% and 91%, respectively. The positive predictive value was 92% when tumour-specific parameters were used compared with 0% for population-based averages. A receiver operating characteristic curve confirmed the superiority of patient-specific parameters over population averages in predicting local control. Individual radiobiological parameters are better than population-derived averages when used in a mathematical model to predict TCP after curative RT in head and neck carcinomas. TCP based on individual radiobiological parameters is better than TCP based on population-based averages for identifying local control correctly.
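The BED-based Poisson TCP calculation referred to here can be sketched generically. The linear-quadratic formulation below, the alpha/beta ratio, the repopulation kick-off time, and the clonogen number are illustrative assumptions, not values taken from the study; the abstract's average parameters (alpha = 0.3 Gy^-1, Tpot = 3 days) appear only as defaults.

```python
import math

def bed(n_fractions, dose_per_fraction, alpha, alpha_beta=10.0,
        t_overall=0.0, t_kickoff=0.0, t_pot=3.0):
    """Biologically effective dose with a repopulation correction:
    BED = n*d*(1 + d/(alpha/beta)) - ln2*(T - Tk)/(alpha*Tpot)."""
    total = n_fractions * dose_per_fraction
    repop = math.log(2) * max(t_overall - t_kickoff, 0.0) / (alpha * t_pot)
    return total * (1 + dose_per_fraction / alpha_beta) - repop

def tcp(n_clonogens, alpha, bed_value):
    """Poisson TCP: probability that no clonogen survives,
    TCP = exp(-N0 * exp(-alpha * BED))."""
    return math.exp(-n_clonogens * math.exp(-alpha * bed_value))
```

For 35 fractions of 2 Gy with no repopulation, BED = 84 Gy at alpha/beta = 10, and with 10^7 clonogens and alpha = 0.3 Gy^-1 the predicted TCP is close to 1; lowering alpha or adding treatment time drives it down, which is why patient-specific parameters change the ranking of individual tumours.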
Hoffman, E.
2012-08-23
A series of cyclic potentiodynamic polarization tests was performed on samples of A537 carbon steel in support of a probability-based approach to evaluate the effect of chloride and sulfate on corrosion susceptibility. Testing solutions were chosen to build on previous experimental results from FY07, FY08, FY09 and FY10 to systematically evaluate the influence of the secondary aggressive species, chloride and sulfate. The FY11 results suggest that evaluating the combined effect of all aggressive species (nitrate, chloride, and sulfate) provides a consistent response for determining corrosion susceptibility. The results of this work emphasize the importance of concentration limits not only for nitrate but also for chloride and sulfate.
Aljasser, Faisal; Vitevitch, Michael S
2017-03-24
A number of databases (Storkel, Behavior Research Methods, 45, 1159-1167, 2013) and online calculators (Vitevitch & Luce, Behavior Research Methods, Instruments, and Computers, 36, 481-487, 2004) have been developed to provide statistical information about various aspects of language, and these have proven to be invaluable assets to researchers, clinicians, and instructors in the language sciences. The number of such resources for English is quite large and continues to grow, whereas the number of such resources for other languages is much smaller. This article describes the development of a Web-based interface to calculate phonotactic probability in Modern Standard Arabic (MSA). A full description of how the calculator can be used is provided. It can be freely accessed at http://phonotactic.drupal.ku.edu/.
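The kind of computation such a calculator performs can be sketched with a simplified positional-segment-frequency scheme in the spirit of Vitevitch and Luce's measures. The toy lexicon, the summing convention, and the omission of biphone probabilities and frequency weighting are all assumptions of this sketch.

```python
from collections import defaultdict

def positional_probs(lexicon):
    """Estimate position-specific segment probabilities from a lexicon
    of segmented words (each word is a list of phoneme symbols)."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for word in lexicon:
        for i, seg in enumerate(word):
            counts[i][seg] += 1
            totals[i] += 1
    return {i: {s: c / totals[i] for s, c in segs.items()}
            for i, segs in counts.items()}

def phonotactic_score(word, probs):
    """Sum of positional segment probabilities for a word; segments
    unseen at a position contribute zero."""
    return sum(probs.get(i, {}).get(seg, 0.0) for i, seg in enumerate(word))
```

On a three-word toy lexicon ['k','a','t'], ['k','i','t'], ['b','a','t'], the word ['k','a','t'] scores 2/3 + 2/3 + 1, reflecting that all of its segments are frequent in their positions.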
NASA Astrophysics Data System (ADS)
Mandrekas, John
2004-08-01
GTNEUT is a two-dimensional code for the calculation of the transport of neutral particles in fusion plasmas. It is based on the Transmission and Escape Probabilities (TEP) method and can be considered a computationally efficient alternative to traditional Monte Carlo methods. The code has been benchmarked extensively against Monte Carlo and has been used to model the distribution of neutrals in fusion experiments. Program summary: Title of program: GTNEUT. Catalogue identifier: ADTX. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTX. Computer for which the program is designed and others on which it has been tested: developed on a SUN Ultra 10 workstation; tested on other Unix workstations and PCs. Operating systems or monitors under which the program has been tested: Solaris 8, 9; HP-UX 11i; Linux Red Hat v8.0; Windows NT/2000/XP. Programming language used: Fortran 77. Memory required to execute with typical data: 6 219 388 bytes. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: No. No. of bytes in distributed program, including test data, etc.: 300 709. No. of lines in distributed program, including test data, etc.: 17 365. Distribution format: compressed tar gzip file. Keywords: neutral transport in plasmas, escape probability methods. Nature of physical problem: this code calculates the transport of neutral particles in thermonuclear plasmas in two-dimensional geometric configurations. Method of solution: the code is based on the Transmission and Escape Probability (TEP) methodology [1], which is part of the family of integral transport methods for neutral particles and neutrons. The resulting linear system of equations is solved by standard direct linear system solvers (sparse and non-sparse versions are included). Restrictions on the complexity of the problem: the current version of the code can
Park, Dong-Uk; Colt, Joanne S; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R; Armenti, Karla R; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A
2014-01-01
We describe an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates were developed using both cases and controls, and using controls only. The prevalence of machining varied substantially across job groups (0.1 to >0.9), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally 50-70%). Synthetic MWF use was lower (13-45%). There was little difference between estimates based on cases and controls vs. controls only. Approximately 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids, in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and U.S. production levels found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. [Supplementary materials are available for this article. Go to the publisher's online edition of Journal of Occupational and Environmental Hygiene for the following free supplemental resources: a list of keywords in the occupational histories that were used to link study subjects to the metalworking fluids (MWFs) modules; recommendations from the literature on selection of MWFs based on type of machining operation, the metal being machined and decade; popular additives to MWFs; the number and proportion of controls who
Juang, Kai-Wei; Liao, Wan-Jiun; Liu, Ten-Lin; Tsui, L; Lee, Dar-Yuan
2008-01-15
Kriging-based delineation, when used to determine a cost-effective remediation plan, should be based on the spatial distribution of the pollutant. This study proposed an adaptive cluster sampling (ACS) approach based on the regulation threshold and kriging variance for additional sampling to improve the reliability of delineating a heavy-metal contaminated site. A reliability index for reducing the probability of false delineation was used to determine the size and configuration of additional samples. A data set of Ni concentrations in soil was used for illustration. The results showed that the additional observations sampled during ACS were clustered where the Ni concentrations were close to the regulation threshold of 200 mg kg(-1), and were located where the first-phase sampling density was low. Compared with simple random sampling (SRS), the relative frequency of misclassification over the whole study area (RFMW) using ACS was lower in a simulation with 100 replicates when the same number of pooled samples was used. In addition, the spatial distribution of the local misclassification rate (LMR) showed that the area with a high-valued LMR could be reduced and that the LMR gradients in the region could be lowered by using ACS instead of SRS. These results suggest that the proposed ACS approach could improve the reliability of kriging-based delineation of heavy-metal contaminated soils.
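The reliability idea behind threshold-plus-kriging-variance sampling can be sketched with a short calculation. Assuming the kriging error is Gaussian with the kriging standard deviation (an illustrative simplification, not the paper's exact reliability index, and with made-up estimates and uncertainties), the probability of false delineation peaks where the estimate sits near the 200 mg/kg regulation threshold, which is exactly where ACS concentrates extra samples:

```python
import math

def exceedance_probability(z_hat, sigma, threshold=200.0):
    """Probability that the true concentration exceeds the regulation
    threshold, assuming a Gaussian kriging error with standard
    deviation sigma around the kriging estimate z_hat."""
    if sigma <= 0:
        return 1.0 if z_hat > threshold else 0.0
    z = (threshold - z_hat) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def misclassification_probability(z_hat, sigma, threshold=200.0):
    """Probability of false delineation: classification follows the
    estimate, so the error is the probability mass on the other side
    of the threshold."""
    p = exceedance_probability(z_hat, sigma, threshold)
    return 1.0 - p if z_hat > threshold else p

# A location estimated near the threshold is far more uncertain than
# one estimated well above it (both values are hypothetical).
p_near = misclassification_probability(z_hat=205.0, sigma=30.0)
p_far = misclassification_probability(z_hat=350.0, sigma=30.0)
```

This is why additional ACS observations cluster around the threshold: that is where extra data most reduces the chance of false delineation.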
La Russa, D
2015-06-15
Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk) were generated assuming α/β = 10 Gy and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
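The Poisson TCP model with clonogen proliferation described above has a compact closed form. The sketch below shows its shape; all parameter values (alpha, clonogen number, doubling and kick-off times) are illustrative placeholders, not the fitted posteriors from the study:

```python
import math

def poisson_tcp(eqd2, n0=1e7, alpha=0.3, alpha_beta=10.0,
                t_treat=40.0, t_kick=21.0, t_double=3.0):
    """Poisson TCP with linear-quadratic cell kill and clonogen
    repopulation after a kick-off time t_kick (days). eqd2 is the
    equivalent dose in 2 Gy fractions."""
    d = 2.0  # Gy per fraction, matching the EQD2 convention
    log_kill = alpha * eqd2 * (1.0 + d / alpha_beta)
    regrowth = math.log(2.0) * max(0.0, t_treat - t_kick) / t_double
    surviving = n0 * math.exp(-log_kill + regrowth)
    return math.exp(-surviving)  # Poisson probability of zero survivors
```

The Bayesian fit in the abstract infers the posterior over these parameters from observed relapse rates; this sketch only shows the sigmoid dose-response being fit.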
NASA Technical Reports Server (NTRS)
Johnson, J. R. (Principal Investigator)
1974-01-01
The author has identified the following significant results. A broad-scale vegetation classification was developed for a 3,200 sq mile area in southeastern Arizona. The 31 vegetation types were derived from association tables which contained information taken at about 500 ground sites. The classification provided an information base that was suitable for use with small-scale photography. A procedure was developed and tested for objectively comparing photo images. The procedure consisted of two parts, image groupability testing and image complexity testing. The Apollo and ERTS photos were compared for relative suitability as first-stage stratification bases in two-stage proportional probability sampling. High altitude photography was used in common at the second stage.
Casares-Magaz, Oscar; van der Heide, Uulke A; Rørvik, Jarle; Steenbergen, Peter; Muren, Ludvig Paul
2016-04-01
Standard tumour control probability (TCP) models assume uniform tumour cell density across the tumour. The aim of this study was to develop an individualised TCP model by including index-tumour regions extracted from multi-parametric magnetic resonance imaging (MRI) and cell density distributions based on apparent diffusion coefficient (ADC) maps. ADC maps in a series of 20 prostate cancer patients were applied to estimate the initial number of cells within each voxel, using three different approaches for the relation between ADC values and cell density: a linear, a binary and a sigmoid relation. All TCP models were based on linear-quadratic cell survival curves assuming α/β=1.93 Gy (consistent with a recent meta-analysis), with α set to obtain a TCP of 70% when 77 Gy was delivered to the entire prostate in 35 fractions (α=0.18 Gy(-1)). Overall, TCP curves based on ADC maps showed larger differences between individuals than those assuming uniform cell densities. The range of the dose required to reach 50% TCP across the patient cohort was 20.1 Gy, 18.7 Gy and 13.2 Gy using an MRI-based voxel density (linear, binary and sigmoid approach, respectively), compared to 4.1 Gy using a constant density. Inclusion of index-tumour information together with ADC map-based cell density increases inter-patient tumour response differentiation for use in prostate cancer RT, resulting in TCP curves with a larger range in D50% across the cohort compared with those based on uniform cell densities. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
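A voxelised Poisson TCP calculation makes the cell-density effect concrete. Here α and α/β follow the abstract, but the voxel cell counts and doses are invented for illustration, with the total clonogen number chosen so a uniform 77 Gy gives roughly 70% TCP:

```python
import math

def voxel_tcp(cell_counts, doses, alpha=0.18, alpha_beta=1.93):
    """TCP as a product of per-voxel Poisson kill probabilities, with
    linear-quadratic survival at 2 Gy per fraction."""
    d = 2.0  # Gy per fraction
    tcp = 1.0
    for n, dose in zip(cell_counts, doses):
        surviving_fraction = math.exp(-alpha * dose * (1.0 + d / alpha_beta))
        tcp *= math.exp(-n * surviving_fraction)
    return tcp

# Hypothetical 4-voxel tumour with ~6.4e11 clonogens in total.
uniform = voxel_tcp([1.6e11] * 4, [77.0] * 4)
# The same 70 Gy cold spot is far more damaging when it falls on the
# dense (index) voxel than on a sparse one.
cold_in_dense = voxel_tcp([4.0e11, 0.8e11, 0.8e11, 0.8e11],
                          [70.0, 77.0, 77.0, 77.0])
cold_in_sparse = voxel_tcp([4.0e11, 0.8e11, 0.8e11, 0.8e11],
                           [77.0, 77.0, 77.0, 70.0])
```

This is the mechanism behind the wider spread of D50% across patients when ADC-based densities replace a constant density: where the cells sit relative to the dose matters.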
NASA Astrophysics Data System (ADS)
Albatayneh, Aiman; Alterman, Dariusz; Page, Adrian; Moghtaderi, Behdad
2017-05-01
The design of low energy buildings requires accurate thermal simulation software to assess the heating and cooling loads. Such designs should sustain thermal comfort for occupants and promote less energy usage over the lifetime of the building. One of the house energy rating tools used in Australia is AccuRate, a star rating tool that assesses and compares the thermal performance of various buildings, where the heating and cooling loads are calculated based on fixed operational temperatures between 20 °C and 25 °C to sustain thermal comfort for the occupants. However, these fixed settings for the time and temperatures considerably increase the heating and cooling loads. The adaptive thermal comfort model, on the other hand, applies a broader range of weather conditions, interacts with the occupants and promotes low energy solutions to maintain thermal comfort. This can be achieved by natural ventilation (opening windows/doors), suitable clothing, shading and low energy heating/cooling solutions for the occupied spaces (rooms). These activities can save a significant amount of operating energy, which should be taken into account when predicting the energy consumption of a building. Most building thermal assessment tools depend on energy-based approaches to predict the thermal performance of a building, e.g. AccuRate in Australia. This approach encourages the use of energy to maintain thermal comfort. This paper describes the advantages of a temperature-based approach for assessing a building's thermal performance (using an adaptive thermal comfort model) over an energy-based approach (the AccuRate software used in Australia). The temperature-based approach was validated and compared with the energy-based approach using four full-scale housing test modules located in Newcastle, Australia (Cavity Brick (CB), Insulated Cavity Brick (InsCB), Insulated Brick Veneer (InsBV) and Insulated Reverse Brick Veneer (InsRBV)) subjected to a range of seasonal conditions in a moderate climate. The time required for
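The contrast between a fixed 20-25 °C band and an adaptive model can be illustrated with a widely used adaptive comfort relation. The ASHRAE 55 style formulation below is a common choice and may differ from the exact model used in the paper:

```python
def adaptive_comfort_band(outdoor_mean_temp):
    """Adaptive comfort band: the neutral temperature drifts with the
    mean outdoor temperature, with a typical 80%-acceptability band of
    +/- 3.5 degC (coefficients from the common ASHRAE 55 formulation)."""
    neutral = 0.31 * outdoor_mean_temp + 17.8
    return neutral - 3.5, neutral + 3.5

# In a warm month the acceptable band extends well above a fixed 25 degC
# cooling setpoint, so less mechanical cooling is needed.
low, high = adaptive_comfort_band(25.0)
```

Because the band tracks outdoor conditions, hours that a fixed-setpoint tool counts as requiring heating or cooling can fall inside the comfort zone, which is the source of the energy savings the paper argues for.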
NASA Astrophysics Data System (ADS)
Monaco, E.; Memmolo, V.; Ricci, F.; Boffa, N. D.; Maio, L.
2015-03-01
Maintenance approaches based on sensorised structures and Structural Health Monitoring (SHM) systems have long been among the most promising innovations in the field of aerostructures, especially where composite materials (fibre-reinforced resins) are concerned. Layered materials still suffer today from drastic reductions of maximum allowable stress values during the design phase, as well as from costly and recurrent inspections during the life cycle, which prevent their structural and economic potential from being fully exploited in today's aircraft. Those penalising measures are necessary mainly to account for the presence of undetected hidden flaws within the layered sequence (delaminations) or in bonded areas (partial disbonding). To relax design and maintenance constraints, a system based on sensors permanently installed on the structure to detect and locate eventual flaws (an SHM system) can be considered, once its effectiveness and reliability have been statistically demonstrated via a rigorous Probability Of Detection (POD) function definition and evaluation. This paper presents an experimental approach with a statistical procedure for evaluating the detection threshold of a guided-wave based SHM system oriented to delamination detection on a typical layered composite wing panel. The experimental tests are mostly oriented to characterising the statistical distribution of measurements and damage metrics, as well as to characterising the system detection capability using this approach. Numerical simulation cannot substitute for the part of the experimental tests aimed at POD, where the noise in the system response is crucial. Results of the experiments are presented in the paper and analysed.
Chen, Teng; Gong, Xingchu; Chen, Huali; Zhang, Ying; Qu, Haibin
2016-01-01
A Monte Carlo method was used to develop the design space of a chromatographic elution process for the purification of saponins in Panax notoginseng extract. During this process, saponin recovery ratios, saponin purity, and elution productivity are determined as process critical quality attributes, and ethanol concentration, elution rate, and elution volume are identified as critical process parameters. Quadratic equations between process critical quality attributes and critical process parameters were established using response surface methodology. Then the probability-based design space was computed by calculating the prediction errors using Monte Carlo simulations. The influences of calculation parameters on the computation results were investigated. The optimized calculation condition was as follows: calculation step length of 0.02, simulation times of 10 000, and a significance level value of 0.15 for adding or removing terms in a stepwise regression. The recommended normal operation region is located at an ethanol concentration of 65.0-70.0%, an elution rate of 1.7-2.0 bed volumes (BV)/h and an elution volume of 3.0-3.6 BV. Verification experiments were carried out and the experimental values were in good agreement with the predicted values. The present method is promising for developing probability-based design spaces for other botanical drug manufacturing processes.
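The probability-based design space computation outlined above can be sketched as follows: at each candidate operating point, simulate the response-surface prediction error and estimate the probability that a quality attribute meets its specification. The quadratic model, noise level and specification below are invented for illustration, not the fitted saponin equations:

```python
import random

def attainment_probability(x, predict, sd, spec, n_sim=10_000, seed=0):
    """Probability that a critical quality attribute meets its
    specification at operating point x, treating the prediction error
    as Gaussian noise around the response-surface prediction (an
    illustrative stand-in for the paper's error model)."""
    rng = random.Random(seed)
    mean = predict(x)
    hits = sum(1 for _ in range(n_sim) if mean + rng.gauss(0.0, sd) >= spec)
    return hits / n_sim

# Hypothetical quadratic response surface for saponin purity (%) as a
# function of a coded ethanol concentration; coefficients are made up.
purity = lambda x: 80.0 + 5.0 * x - 3.0 * x ** 2
p = attainment_probability(0.5, purity, sd=1.5, spec=80.0)
```

Repeating this over a grid of operating points and keeping those whose attainment probability exceeds a target (e.g. 0.9) for every quality attribute yields the probability-based design space.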
NASA Astrophysics Data System (ADS)
Bachmann, C. E.; Wiemer, S.; Woessner, J.; Hainzl, S.
2011-08-01
Geothermal energy is becoming an important clean energy source; however, the stimulation of a reservoir for an Enhanced Geothermal System (EGS) is associated with seismic risk due to induced seismicity. Seismicity occurring due to the water injection at depth has to be well recorded and monitored. To mitigate the seismic risk of a damaging event, an appropriate alarm system needs to be in place for each individual experiment. In recent experiments, the so-called traffic-light alarm system, based on public response, local magnitude and peak ground velocity, was used. We aim to improve this pre-defined alarm system by introducing a probability-based approach; we retrospectively model the ongoing seismicity in real time with multiple statistical forecast models and then translate the forecasts to seismic hazard in terms of probabilities of exceeding a ground motion intensity level. One class of models accounts for the water injection rate, the main parameter that can be controlled by the operators during an experiment. By translating the models into time-varying probabilities of exceeding various intensity levels, we provide tools which are well understood by the decision makers and can be used to determine non-exceedance thresholds during a reservoir stimulation; this, however, remains an entrepreneurial or political decision of the responsible project coordinators. We introduce forecast models based on the data set of an EGS experiment in the city of Basel. Between 2006 December 2 and 8, approximately 11 500 m3 of water was injected into a 5-km-deep well at high pressures. A six-sensor borehole array was installed by the company Geothermal Explorers Limited (GEL) at depths between 300 and 2700 m around the well to monitor the induced seismicity. The network recorded approximately 11 200 events during the injection phase, more than 3500 of which were located. With the traffic-light system, actions were implemented after an ML 2.7 event, the water injection was
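Translating a forecast event rate into a decision-ready exceedance probability is a one-line Poisson calculation. The rate below is a placeholder, not a value fitted to the Basel sequence:

```python
import math

def exceedance_prob(rate_per_day, horizon_days):
    """Probability of at least one event exceeding the chosen intensity
    level within the horizon, for a Poisson forecast with the given
    rate of exceeding events (a hypothetical rate, for illustration)."""
    return 1.0 - math.exp(-rate_per_day * horizon_days)

# e.g. a forecast rate of 0.2 exceeding events/day over the next 6 hours
p_6h = exceedance_prob(rate_per_day=0.2, horizon_days=0.25)
```

An alarm system can then compare such time-varying probabilities against pre-agreed thresholds instead of reacting to a single observed magnitude.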
Bayesian Brains without Probabilities.
Sanborn, Adam N; Chater, Nick
2016-12-01
Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
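A minimal simulation shows how an unbiased finite-sample estimator can produce the conjunction fallacy: with only a handful of samples per judgment, the conjunction is sometimes judged more probable than its constituent even though its true probability is lower. (Treating the two sampling runs as independent is a modelling simplification for illustration, not a claim from the paper.)

```python
import random

def sample_estimate(p, n, rng):
    """Estimate a probability from n Bernoulli samples."""
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(1)
p_a, p_ab = 0.30, 0.25          # true probabilities: P(A) >= P(A and B)
n_small, trials = 5, 10_000

# How often does a 5-sample judgment rank the conjunction above its
# constituent? An unbiased sampler still errs a substantial fraction
# of the time.
fallacy_rate = sum(
    sample_estimate(p_ab, n_small, rng) > sample_estimate(p_a, n_small, rng)
    for _ in range(trials)
) / trials
```

With infinite samples the estimates converge and the error rate goes to zero, matching the paper's point that the laws of probability are only recovered in the limit.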
Predicting the probability of H3K4me3 occupation at a base pair from the genome sequence context.
Ha, Misook; Hong, Soondo; Li, Wen-Hsiung
2013-05-01
Histone modifications regulate chromatin structure and gene expression. Although nucleosome formation is known to be affected by primary DNA sequence composition, no sequence signature has been identified for histone modifications. It is known that dense H3K4me3 nucleosome sites are accompanied by a low density of other nucleosomes and are associated with gene activation. This observation suggests a different sequence composition of H3K4me3 from other nucleosomes. To understand the relationship between genome sequence and chromatin structure, we studied DNA sequences at histone modification sites in various human cell types. We found sequence specificity for H3K4me3, but not for other histone modifications. Using the sequence specificities of H3 and H3K4me3 nucleosomes, we developed a model that computes the probability of H3K4me3 occupation at each base pair from the genome sequence context. A comparison of our predictions with experimental data suggests a high performance of our method, revealing a strong association between H3K4me3 and specific genomic DNA context. The high probability of H3K4me3 occupation occurs at transcription start and termination sites, exon boundaries and binding sites of transcription regulators involved in chromatin modification activities, including histone acetylases and enhancer- and insulator-associated factors. Thus, the human genome sequence contains signatures for chromatin modifications essential for gene regulation and development. Our method may be applied to find new sequence elements functioning by chromatin modulation. Software and supplementary data are available at Bioinformatics online.
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
Cencerrado, Andrés; Cortés, Ana; Margalef, Tomàs
2013-01-01
This work presents a framework for assessing how the existing constraints at the time of attending an ongoing forest fire affect simulation results, both in terms of quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage which is carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the probability theory principles. Thus, a strong statistical study is presented and oriented towards the characterization of such an adjustment technique in order to help the operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region in Spain which is one of the most prone to forest fires: El Cap de Creus. PMID:24453898
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2009-01-01
The threat of lightning is a daily concern during the warm season in Florida. Research has revealed distinct spatial and temporal distributions of lightning occurrence that are strongly influenced by large-scale atmospheric flow regimes. Previously, the Applied Meteorology Unit (AMU) calculated gridded lightning climatologies based on seven flow regimes over Florida for 1-, 3- and 6-hr intervals in 5-, 10-, 20-, and 30-NM diameter range rings around the Shuttle Landing Facility (SLF) and eight other airfields in the National Weather Service in Melbourne (NWS MLB) county warning area (CWA). In this update to the work, the AMU recalculated the lightning climatologies using individual lightning strike data to improve the accuracy of the climatologies. The AMU included all data regardless of flow regime as one of the stratifications, added monthly stratifications, added three years of data to the period of record, and used modified flow regimes based on work from the AMU's Objective Lightning Probability Forecast Tool, Phase II. The AMU made these changes so the 5- and 10-NM radius range rings are consistent with the aviation forecast requirements at NWS MLB, while the 20- and 30-NM radius range rings at the SLF assist the Spaceflight Meteorology Group in making forecasts for weather Flight Rule violations during Shuttle landings. The AMU also updated the graphical user interface with the new data.
Incompatible Stochastic Processes and Complex Probabilities
NASA Technical Reports Server (NTRS)
Zak, Michail
1997-01-01
The definition of conditional probabilities is based upon the existence of a joint probability. However, a reconstruction of the joint probability from given conditional probabilities imposes certain constraints upon the latter, so that if several conditional probabilities are chosen arbitrarily, the corresponding joint probability may not exist.
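The constraint is easy to state concretely for two events: any joint distribution forces P(A|B)P(B) = P(A and B) = P(B|A)P(A), so arbitrarily chosen conditionals generally violate it. A minimal consistency check (the numeric values are illustrative):

```python
def joint_exists(p_a, p_b, p_a_given_b, p_b_given_a, tol=1e-9):
    """Check the Bayes consistency constraint that any joint
    distribution over two events must satisfy:
    P(A|B) * P(B) == P(A and B) == P(B|A) * P(A)."""
    return abs(p_a_given_b * p_b - p_b_given_a * p_a) < tol

# Consistent choice: both sides give P(A and B) = 0.12.
ok = joint_exists(p_a=0.4, p_b=0.3, p_a_given_b=0.4, p_b_given_a=0.3)
# Arbitrarily chosen conditionals: 0.27 != 0.08, so no joint
# probability distribution can produce them.
bad = joint_exists(p_a=0.4, p_b=0.3, p_a_given_b=0.9, p_b_given_a=0.2)
```

This is only the simplest of the constraints; with more events, marginalisation and non-negativity impose further conditions on any proposed family of conditionals.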
Espaldon, Roxanne; Kirby, Katharine A; Fung, Kathy Z; Hoffman, Richard M; Powell, Adam A; Freedland, Stephen J; Walter, Louise C
2014-03-01
To determine the distribution of screening prostate-specific antigen (PSA) values in older men, and how different PSA thresholds affect the proportion of white, black, and Latino men who would have an abnormal screening result across advancing age groups. We used linked national Veterans Affairs and Medicare data to determine the value of the first screening PSA test (ng/mL) of 327,284 men older than 65 years who underwent PSA screening in the Veterans Affairs health care system in 2003. We calculated the proportion of men with an abnormal PSA result based on age, race, and common PSA thresholds. Among men older than 65 years, 8.4% had a PSA >4.0 ng/mL. The percentage of men with a PSA >4.0 ng/mL increased with age and was highest in black men (13.8%) vs white (8.0%) or Latino men (10.0%) (P <.001). Combining age and race, the probability of having a PSA >4.0 ng/mL ranged from 5.1% of Latino men aged 65-69 years to 27.4% of black men older than 85 years. Raising the PSA threshold from >4.0 ng/mL to >10.0 ng/mL reclassified the greatest percentage of black men older than 85 years (18.3% absolute change) and the lowest percentage of Latino men aged 65-69 years (4.8% absolute change) as being under the biopsy threshold (P <.001). Age, race, and PSA threshold together affect the pretest probability of an abnormal screening PSA result. Based on screening PSA distributions, stopping screening among men whose PSA <3 ng/mL means more than 80% of white and Latino men older than 70 years would stop further screening, and increasing the biopsy threshold to >10 ng/mL has the greatest effect on reducing the number of older black men who will face biopsy decisions after screening. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Shouyu; Xue, Liang; Yan, Keding
2017-07-01
Light scattering from randomly rough surfaces is of great significance in various fields such as remote sensing and target identification. As numerical methods can obtain scattering distributions without complex setups and complicated operations, they have become important tools in light scattering studies. However, most of them suffer from a huge computing load and low operating efficiency, limiting their applications in dynamic measurements and high-speed detection. Here, to overcome these disadvantages, a method based on the microfacet slope probability density function is presented, providing scattering information without computing an ensemble average over numerous scattered fields; it can therefore obtain light scattering distributions with extremely fast speed. Additionally, it reaches high computing accuracy, quantitatively certified against mature light scattering computing algorithms. It is believed the provided approach is useful in light scattering studies and offers potential for real-time detection.
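The core of such a method is the slope distribution itself. A Gaussian (isotropic) slope PDF is a standard choice in microfacet scattering models; the rms slope below is an illustrative value, and the paper's exact distribution may differ:

```python
import math

def slope_pdf(sx, sy, rms_slope):
    """Gaussian microfacet slope probability density for an isotropic
    rough surface; (sx, sy) are the surface slope components and
    rms_slope characterizes the roughness."""
    s2 = rms_slope ** 2
    return math.exp(-(sx ** 2 + sy ** 2) / (2.0 * s2)) / (2.0 * math.pi * s2)

# Sanity check: the density integrates to ~1 over the slope plane
# (coarse rectangle rule over [-2, 2]^2, ten sigma for rms 0.2).
step = 0.05
total = sum(slope_pdf(i * step, j * step, 0.2) * step * step
            for i in range(-40, 41) for j in range(-40, 41))
```

Given an incident and an observation direction, the specular microfacet slope is fixed by geometry, so evaluating this density (times geometric factors) yields the scattered intensity directly, with no ensemble averaging.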
Painter, Colin C.; Heimann, David C.; Lanning-Rush, Jennifer L.
2017-08-14
A study was done by the U.S. Geological Survey in cooperation with the Kansas Department of Transportation and the Federal Emergency Management Agency to develop regression models to estimate peak streamflows of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, 0.5, and 0.2 percent at ungaged locations in Kansas. Peak streamflow frequency statistics from selected streamgages were related to contributing drainage area and average precipitation using generalized least-squares regression analysis. The peak streamflow statistics were derived from 151 streamgages with at least 25 years of streamflow data through 2015. The developed equations can be used to predict peak streamflow magnitude and frequency within two hydrologic regions that were defined based on the effects of irrigation. The equations developed in this report are applicable to streams in Kansas that are not substantially affected by regulation, surface-water diversions, or urbanization. The equations are intended for use for streams with contributing drainage areas ranging from 0.17 to 14,901 square miles in the nonirrigation effects region and 1.02 to 3,555 square miles in the irrigation-affected region, corresponding to the range of drainage areas of the streamgages used in the development of the regional equations.
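Regional regression equations of this kind typically take a log-linear form in the basin characteristics. A sketch with hypothetical coefficients (not the published Kansas values):

```python
import math

def predict_peak_flow(drainage_area, precip, b0, b1, b2):
    """Regional peak-flow regression of the usual log-linear form:
    log10(Qp) = b0 + b1*log10(A) + b2*log10(P),
    where A is contributing drainage area and P average precipitation.
    The coefficients passed in below are illustrative placeholders."""
    return 10 ** (b0
                  + b1 * math.log10(drainage_area)
                  + b2 * math.log10(precip))

# Hypothetical 1-percent-AEP equation for a 100 sq mi basin.
q = predict_peak_flow(drainage_area=100.0, precip=30.0,
                      b0=1.0, b1=0.7, b2=0.5)
```

A separate set of coefficients (b0, b1, b2) is fitted for each annual exceedance probability and each hydrologic region, which is why the report distinguishes the irrigation-affected and non-irrigation regions.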
Ghane, Alireza; Mazaheri, Mehdi; Mohammad Vali Samani, Jamal
2016-09-15
The pollution of rivers due to accidental spills is a major threat to the environment and human health. To protect river systems from accidental spills, it is essential to introduce a reliable tool for the identification process. The Backward Probability Method (BPM) is one of the most recommended tools, able to provide information on the prior location and the release time of the pollution. This method was originally developed and employed in groundwater pollution source identification problems. One of the objectives of this study is to apply this method to identifying the pollution source location and release time in surface waters, mainly in rivers. To accomplish this task, a numerical model is developed based on adjoint analysis. The developed model is then verified using an analytical solution and some real data. The second objective of this study is to extend the method to pollution source identification in river networks. In this regard, a hypothetical test case is considered. In these simulations, all of the suspected points are identified using only one backward simulation. The results demonstrated that every suspected point determined by the BPM could be a possible pollution source. The proposed approach is accurate and computationally efficient and does not need any simplification of river geometry and flow. Due to this simplicity, it is highly recommended for practical purposes. Copyright © 2016. Published by Elsevier Ltd.
Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo
2017-02-01
Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2% and 92.5-98.5% for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated the seasonal mean concentration, however, and did not simulate all of the peak concentrations. This issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
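The first step of the scheme, shaping a seasonal maximum potential with a Weibull density, can be sketched as follows. The shape, scale and peak values are illustrative, not the calibrated Korean parameters:

```python
import math

def weibull_pdf(t, shape, scale):
    """Weibull probability density over day-of-season t, used to shape
    the seasonal maximum potential for airborne pollen."""
    if t < 0:
        return 0.0
    z = t / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

shape, scale = 2.5, 30.0          # hypothetical seasonal parameters
peak = 500.0                      # grains/m^3, hypothetical maximum
mode = scale * ((shape - 1) / shape) ** (1 / shape)  # day of peak density

# Maximum potential concentration for days 0, 10, ..., 90 of the season,
# scaled so the potential equals `peak` at the seasonal mode.
potential = [peak * weibull_pdf(t, shape, scale) / weibull_pdf(mode, shape, scale)
             for t in range(0, 91, 10)]
```

The daily regression models then estimate how much of this weather-independent potential is realised on a given day from the observed weather conditions.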
Liu, X; Zhai, Z
2008-02-01
Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g., source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion as well as diverse sensor network distributions. This paper introduces a probability-based inverse modeling method that can identify the source location of an instantaneous point source placed in an enclosed environment with known release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating good capability of the method in finding indoor pollutant sources. The research lays solid ground for further study of the method on more complicated indoor contamination problems. The method can help track an indoor contaminant source location with limited sensor outputs, ensuring effective and prompt execution of building control strategies and thus a healthy and safe indoor environment. It can also assist the design of optimal sensor networks.
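A minimal illustration of probability-based inversion for the known-release-time scenario: each candidate source cell is scored by how well its forward-model prediction matches the sensor readings, and the scores are normalised into a posterior over candidate locations. The 1-D forward model, noise level and geometry below are toy assumptions standing in for the paper's airflow model:

```python
import math

def predicted_conc(source_x, sensor_x, t, D=0.1, mass=1.0):
    """Toy forward model: concentration at a sensor from a candidate
    instantaneous point source, via a 1-D Gaussian dispersion kernel.
    Geometry and diffusivity are illustrative assumptions."""
    return (mass * math.exp(-(sensor_x - source_x) ** 2 / (4 * D * t))
            / math.sqrt(4 * math.pi * D * t))

def source_posterior(readings, candidates, t, sigma=0.05):
    """Probability of each candidate source location given sensor readings:
    Gaussian measurement likelihood, uniform prior, normalised over the
    candidate grid.  readings is a list of (sensor_position, value)."""
    weights = []
    for sx in candidates:
        log_like = sum(-(c - predicted_conc(sx, pos, t)) ** 2 / (2 * sigma ** 2)
                       for pos, c in readings)
        weights.append(math.exp(log_like))
    total = sum(weights)
    return [w / total for w in weights]
```

With readings generated from a true source at x = 2.0, the posterior peaks at the correct candidate; the paper's method replaces the toy kernel with adjoint simulations of the actual enclosed airflow.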
NASA Astrophysics Data System (ADS)
Weiser, Deborah Anne
Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic earthquakes at a few geothermal fields in California. I use three techniques to assess whether there are induced earthquakes in California geothermal fields; three sites show clear induced seismicity: Brawley, The Geysers, and Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville, and little to no evidence for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and to address uncertainties through simulations. I test whether an earthquake catalog may be bounded by an upper magnitude limit, and I address whether the earthquake record during pumping is consistent with the past earthquake record, or whether injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike the hazard from tectonic earthquakes, the hazard from induced earthquakes can potentially be modified; I discuss direct and indirect mitigation practices. Finally, I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.
NASA Astrophysics Data System (ADS)
Bairwa, Arvind Kumar; Khosa, Rakesh; Maheswaran, R.
2016-11-01
In this study, the presence of multi-scale behaviour in the rainfall IDF relationship has been established using Linear Probability Weighted Moments (LPWMs) for selected stations in India. Simple non-central moments (SMs) have seen widespread use in similar scaling studies, but these statistical attributes are known to mask the 'true' scaling pattern and consequently lead to inappropriate inferences. There is general agreement amongst researchers that conventional higher-order moments amplify the extreme observations and drastically affect scaling exponents. An additional advantage of LPWMs over SMs is that they exist even when the standard moments do not. This study therefore presents a comparison with results based on the robust LPWMs, which revealed, in sharp contrast with the conventional moments, definitive multi-scaling behaviour at all four rainfall observation stations selected from different climatic zones. The multi-scale IDF curves derived using LPWMs show good agreement with observations, and it is accordingly concluded that LPWMs provide a more reliable tool for investigating scaling in sequences of observed rainfall corresponding to various durations.
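For reference, the sample probability weighted moments underlying LPWMs, b_r estimating beta_r = E[X F(X)^r], and the first two L-moments derived from them, can be computed as below. This is the standard unbiased estimator from the PWM literature, shown as a sketch rather than the authors' exact implementation:

```python
def sample_pwm(data, r):
    """Unbiased sample probability weighted moment b_r, estimating
    beta_r = E[X F(X)^r] from the ordered sample: the building block of
    the linear probability weighted moments used in scaling analysis."""
    x = sorted(data)
    n = len(x)
    total = 0.0
    for i, xi in enumerate(x, start=1):
        w = 1.0
        for j in range(r):
            w *= (i - 1 - j) / (n - 1 - j)   # product (i-1)...(i-r) / (n-1)...(n-r)
        total += w * xi
    return total / n

def l_moments(data):
    """First two L-moments: L1 (the mean) and L2 (a robust scale measure),
    as linear combinations of the PWMs b0 and b1."""
    b0, b1 = sample_pwm(data, 0), sample_pwm(data, 1)
    return b0, 2.0 * b1 - b0
```

Because each data value enters linearly (never raised to a power), extreme observations are not amplified the way they are in higher-order conventional moments, which is the robustness property the study exploits.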
Arnold, W Ray; Warren-Hicks, William J
2007-01-01
The objective of this study was to estimate site- and region-specific dissolved copper criteria for a large embayment, the Chesapeake Bay, USA. The intent is to show the utility of two copper saltwater quality site-specific criteria estimation models and associated region-specific criteria selection methods. The criteria estimation models and selection methods are simple, efficient, and cost-effective tools for resource managers, and are proposed as potential substitutes for the US Environmental Protection Agency's water-effect ratio methods. Dissolved organic carbon data and the copper criteria models were used to produce probability-based estimates of site-specific copper saltwater quality criteria. Site- and date-specific criteria estimations were made for 88 sites (n = 5,296) in the Chesapeake Bay. The average and range of estimated site-specific chronic dissolved copper criteria for the Chesapeake Bay were 7.5 and 5.3 to 16.9 microg Cu/L; the average and range of the estimated site-specific acute criteria were 11.7 and 8.3 to 26.4 microg Cu/L. The results suggest that applicable national and state copper criteria can increase in much of the Chesapeake Bay and remain protective. Virginia Department of Environmental Quality copper criteria near the mouth of the Chesapeake Bay, however, need to decrease to protect species of equal or greater sensitivity to that of the marine mussel, Mytilus sp.
NASA Astrophysics Data System (ADS)
Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan
2016-05-01
Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, including the associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating procedure, a novel approach to obtain coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which provides lower and upper bounds for quantifying and transmitting the uncertainty of structural parameters. Structural health monitoring data from Guanhe Bridge, a long-span composite cable-stayed bridge, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy: the overlap ratio index of each modal frequency is over 89%, the average absolute value of the relative errors is small, and the CDF of the fitted normal distribution coincides well with the measured frequencies of Guanhe Bridge. The validated multi-scale FE model may be further used in structural damage prognosis and safety prognosis.
Chen, Y Z; Prohofsky, E W
1994-01-01
We calculate the room-temperature thermal fluctuational base pair opening probability of a daunomycin-poly d(GCAT).poly d(ATGC) complex. This system is constructed at an atomic level of detail based on x-ray analysis of a crystal structure. The base pair opening probabilities are calculated from a modified self-consistent phonon approach of anharmonic lattice dynamics theory. We find that daunomycin binding substantially enhances the thermal stability of one of the base pairs adjacent to the drug because of strong hydrogen bonding between the drug and the base. The possible effect of this enhanced stability on the drug's inhibition of DNA transcription and replication is discussed. We also calculate the probability of drug dissociation from the helix based on the self-consistent calculation of the probability of disruption of the drug-base H-bonds and the unstacking probability of the drug. The calculations can be used to determine the equilibrium drug binding constant, which is found to be in good agreement with observations on similar daunomycin-DNA systems. PMID:8011914
Dauer, Daniel M; Llansó, Roberto J
2003-01-01
The extent of degradation of benthic communities of the Chesapeake Bay was determined by applying a previously developed benthic index of biotic integrity at three spatial scales. Allocation of sampling was probability-based, allowing areal estimates of degradation with known confidence intervals. The three spatial scales were: (1) the tidal Chesapeake Bay; (2) the Elizabeth River watershed; and (3) two small tidal creeks within the Southern Branch of the Elizabeth River that are part of a sediment contaminant remediation effort. The areas covered varied from 10⁻¹ to 10⁴ km², and all were sampled in 1999. The Chesapeake Bay was divided into ten strata, the Elizabeth River into five strata, and each of the two tidal creeks was a single stratum. The number and size of strata were determined by considering both managerially useful units for restoration and limitations of funding. Within each stratum, 25 random locations were sampled for benthic community condition. In 1999, the percentage of the benthos with poor benthic community condition for the entire Chesapeake Bay was 47%, varying from 20% at the mouth of the Bay to 72% in the Potomac River. The estimated area of benthos with poor benthic community condition for the Elizabeth River was 64%, varying from 52-92%. Both small tidal creeks had estimates of 76% poor benthic community condition. These kinds of estimates allow environmental managers to better direct restoration efforts and evaluate progress towards restoration. Patterns of benthic community condition at smaller spatial scales may not be correctly inferred from larger spatial scales. Comparisons of patterns in benthic community condition across spatial scales, and between combinations of strata, must be cautiously interpreted.
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
NASA Astrophysics Data System (ADS)
Nakamura, Kazuhiro; Shimazaki, Ryo; Yamamoto, Masatoshi; Takagi, Kazuyoshi; Takagi, Naofumi
This paper presents a memory-efficient VLSI architecture for output probability computations (OPCs) of continuous hidden Markov models (HMMs) and likelihood score computations (LSCs). These computations are the most time-consuming part of HMM-based isolated word recognition systems. We demonstrate multiple fast store-based block parallel processing (MultipleFastStoreBPP) for OPCs and LSCs and present a VLSI architecture that supports it. Compared with conventional fast store-based block parallel processing (FastStoreBPP) and stream-based block parallel processing (StreamBPP) architectures, the proposed architecture requires fewer registers and less processing time. The processing elements (PEs) used in the FastStoreBPP and StreamBPP architectures are identical to those used in the MultipleFastStoreBPP architecture. From a VLSI architectural viewpoint, a comparison shows that the proposed architecture improves on the others through efficient use of PEs and of registers for storing input feature vectors.
Faith, Daniel P
2008-12-01
New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species
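The expected-PD idea described above is easy to state concretely: each branch contributes its length times the probability that at least one descendant species survives, so a change in one species' extinction probability propagates to every branch above it. A sketch on a small hypothetical tree (branch lengths and extinction probabilities invented purely for illustration):

```python
def expected_pd(branches, p_ext):
    """Expected phylogenetic diversity.
    branches: list of (length, [species descending from the branch]).
    p_ext: extinction probability per species.
    Each branch survives, and so contributes its length, unless ALL of
    its descendant species go extinct."""
    total = 0.0
    for length, species in branches:
        p_all_lost = 1.0
        for s in species:
            p_all_lost *= p_ext[s]
        total += length * (1.0 - p_all_lost)
    return total

# Hypothetical four-species tree: (A,B) form a shallow clade; D sits on a
# long, phylogenetically distinct branch.
tree = [(1.0, ["A"]), (1.0, ["B"]), (2.0, ["A", "B"]),
        (1.5, ["C"]), (3.0, ["D"]), (0.5, ["C", "D"])]
p = {"A": 0.9, "B": 0.9, "C": 0.1, "D": 0.5}
```

Unlike static distinctiveness scores, lowering one species' extinction probability here automatically updates the shared credit on every deeper branch, which is the property the probabilistic framework uses to avoid inefficient priorities.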
Time-dependent earthquake probabilities
NASA Astrophysics Data System (ADS)
Gomberg, J.; Belardinelli, M. E.; Cocco, M.; Reasenberg, P.
2005-05-01
We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have developed a general framework based on a simple, generalized rate change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults as in background and aftershock seismicity and (2) changes in estimates of the conditional probability of failure of a single fault. In the first application, the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (density function or PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models.
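The conditional probability of failure discussed above has a simple closed form once a recurrence-time PDF is chosen. A sketch using a lognormal distribution, with illustrative parameter values; in this simplified picture, a static stress step that advances the fault toward failure could be folded in, to first order, as an equivalent shift of the elapsed time:

```python
import math

def lognormal_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-time distribution; mu and sigma are
    the mean and standard deviation of log recurrence time."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_probability(t_elapsed, dt, mu, sigma):
    """P(next earthquake within dt years | t_elapsed years since the last):
    (F(t+dt) - F(t)) / (1 - F(t)), conditioning on survival to t_elapsed."""
    F = lognormal_cdf
    return ((F(t_elapsed + dt, mu, sigma) - F(t_elapsed, mu, sigma))
            / (1.0 - F(t_elapsed, mu, sigma)))

# Illustrative fault: median recurrence ~150 yr, 100 yr already elapsed.
mu, sigma = math.log(150.0), 0.5
p30 = conditional_probability(100.0, 30.0, mu, sigma)
```

The choice of PDF encodes both epistemic uncertainty about past event times and the natural variability in failure times, which is exactly why the abstract stresses that the estimate integrates over a range of possible deterministic fault models.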
Walsh, Michael G; Haseeb, M A
2014-01-01
Toxocariasis is increasingly recognized as an important neglected infection of poverty (NIP) in developed countries, and may constitute the most important NIP in the United States (US) given its association with chronic sequelae such as asthma and poor cognitive development. Its potential public health burden notwithstanding, toxocariasis surveillance is minimal throughout the US and so the true burden of disease remains uncertain in many areas. The Third National Health and Nutrition Examination Survey conducted a representative serologic survey of toxocariasis to estimate the prevalence of infection in diverse US subpopulations across different regions of the country. Using the NHANES III surveillance data, the current study applied the predicted probabilities of toxocariasis to the sociodemographic composition of New York census tracts to estimate the local probability of infection across the city. The predicted probability of toxocariasis ranged from 6% among US-born Latino women with a university education to 57% among immigrant men with less than a high school education. The predicted probability of toxocariasis exhibited marked spatial variation across the city, with particularly high infection probabilities in large sections of Queens, and smaller, more concentrated areas of Brooklyn and northern Manhattan. This investigation is the first attempt at small-area estimation of the probability surface of toxocariasis in a major US city. While this study does not define toxocariasis risk directly, it does provide a much needed tool to aid the development of toxocariasis surveillance in New York City.
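The small-area mapping step described above amounts to applying fitted logistic-regression probabilities to each census tract's demographic composition. A sketch with hypothetical coefficients and group labels (not the NHANES III estimates):

```python
import math

def predicted_probability(coefs, intercept, features):
    """Logistic-regression predicted probability of seropositivity for one
    demographic profile.  Coefficients here are hypothetical stand-ins,
    shown only to illustrate the mapping step."""
    z = intercept + sum(coefs[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def tract_probability(group_shares, group_probs):
    """Tract-level infection probability: group-specific predicted
    probabilities weighted by each group's share of the tract population."""
    return sum(share * group_probs[g] for g, share in group_shares.items())

coefs = {"immigrant": 1.2, "less_than_hs": 0.8}   # hypothetical effects
p_hi = predicted_probability(coefs, -2.0, {"immigrant": 1, "less_than_hs": 1})
p_lo = predicted_probability(coefs, -2.0, {"immigrant": 0, "less_than_hs": 0})
```

The tract-level value necessarily lies between the group-specific extremes, which is why the estimated probability surface varies smoothly with neighborhood composition across the city.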
Carr, D.B.; Tolley, H.D.
1982-12-01
This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
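A minimal example in the spirit of the exponentially weighted estimators above: use the empirical cdf inside the data, and extrapolate beyond a high threshold with an exponential tail fitted to the exceedances. This is one simple variant for illustration, not a reconstruction of the paper's estimators:

```python
import math

def tail_probability(data, x, threshold):
    """Estimate P(X > x).  Inside the data range, use the empirical cdf.
    Beyond `threshold`, extrapolate with an exponential tail whose scale
    is the mean excess over the threshold (its maximum likelihood fit)."""
    n = len(data)
    excesses = [d - threshold for d in data if d > threshold]
    p_u = len(excesses) / n                  # empirical P(X > threshold)
    beta = sum(excesses) / len(excesses)     # fitted exponential scale
    if x <= threshold:
        return sum(1 for d in data if d > x) / n
    return p_u * math.exp(-(x - threshold) / beta)
```

The exponential form encodes an assumption about the shape of the density in the tail, which is exactly the ingredient that lets the estimate extend beyond the largest observation.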
NASA Astrophysics Data System (ADS)
De Gregorio, Sofia; Camarda, Marco
2016-04-01
The evaluation of the amount of magma that might potentially be erupted, i.e. the eruptive potential (EP), and of the probability of eruptive event occurrence, i.e. the eruptive probability (EPR), of an active volcano is one of the most compelling and challenging topics addressed by the volcanology community in recent years. The evaluation of the EP of an open-conduit volcano is generally based on a constant magma supply rate deduced from long-term series of eruptive rates. This computation gives good results for long-term (centuries) evaluations, but is less effective when short-term (years or months) estimates are needed, because the rate of magma supply can in fact change on both long and short timescales. Under steady conditions, the regular supply of magma can be supposed to maintain an almost constant level of magma in the feeding system (FS), whereas episodic surpluses of magma input, relative to the regular supply, can cause large variations in the magma level. It follows that any surplus of magma that enters the FS represents a supply of material that sooner or later must be disposed of, i.e. erupted. The amount of surplus magma entering the FS therefore nearly corresponds to the amount of magma that must be erupted in order to restore equilibrium. Further, the larger the surplus of magma stored in the system, the higher the energetic level of the system and its propensity to erupt, in other words its EPR. In light of these considerations, we present here an innovative methodology to evaluate the EP based on quantifying the surplus of magma, relative to the regular supply, progressively intruded into the FS. To estimate the surplus of magma supply we used soil CO2 emission data measured monthly at 130 sites in two peripheral areas of Mt Etna. Indeed, as reported by many authors, soil CO2 emissions in these areas are linked to magma supply dynamics, and anomalous discharges of CO2 are ascribable to surplus of
Southerland, Mark T; Vølstad, Jon H; Weber, Edward D; Klauda, Ronald J; Poukish, Charles A; Rowe, Matthew C
2009-03-01
The Clean Water Act presents a daunting task for states by requiring them to assess and restore all their waters. Traditional monitoring has led to two beliefs: (1) ad hoc (i.e., non-random) sampling is adequate if enough sites are sampled, and (2) more intensive sampling (e.g., collecting more organisms) at each site is always better. We analyzed the 1,500 Maryland Biological Stream Survey (MBSS) random sites sampled in 2000-2004 to describe the variability of Index of Biotic Integrity (IBI) scores at the site, reach, and watershed scales. Average variability for fish and benthic IBI scores increased with increasing spatial scale, demonstrating that single-site IBI scores are not representative at watershed scales and that therefore at best 25% of a state's stream length can be representatively sampled with non-random designs. We evaluated the effects on total taxa captured and IBI precision of sampling twice as many benthic macroinvertebrates at 73 MBSS sites with replicate samples. When sampling costs were fixed, the precision of the IBI decreased as the number of sites had to be reduced by 15%, and only 1% more taxa were found overall when the 73 sites were combined. We concluded that (1) comprehensive assessment of a state's waters should be done using probability-based sampling that allows the condition across all reaches to be inferred statistically, and (2) additional site sampling effort should not be incorporated into state biomonitoring when it will reduce the number of sites sampled to the point where overall assessment precision is lower.
Miyauchi, T; Hagimoto, H; Saito, T; Endo, K; Ishii, M; Yamaguchi, T; Kajiwara, A; Matsushita, M
1989-01-01
EEG power amplitude and power ratio data obtained from 15 patients (3 men and 12 women) with Alzheimer's disease (AD) and 8 (2 men and 6 women) with senile dementia of Alzheimer type (SDAT) were compared with similar data from 40 age- and sex-matched normal controls. Compared with the healthy controls, both patient groups demonstrated increased EEG background slowing, which was more pronounced in AD than in SDAT. Moreover, each group showed characteristic findings on EEG topography and t-statistic significance probability mapping (SPM). The differences between AD patients and their controls indicated marked slowing with reductions in alpha 2, beta 1 and beta 2 activity. The SPMs of the power ratio in the theta and alpha 2 bands showed the most prominent significance in the right posterior-temporal region, while the delta and beta bands did so in the frontal region. Severe AD showed only frontal delta slowing compared with mild AD. The differences between SDAT patients and their controls indicated only mild slowing in the delta and theta bands. The SPM of power amplitude showed occipital slowing, whereas the SPM of the power ratio showed slowing in the frontal region; together, these topographic findings were considered to denote a diffuse slowing tendency. In summary, these results suggest that in AD, cortical damage accompanied by EEG slowing with reduced alpha 2 and beta activity arises rapidly and is followed by subcortical (non-specific thalamic) changes with frontal delta activity on SPM. In SDAT, by contrast, diffuse cortico-subcortical damage with diffuse slowing on EEG topography develops gradually.
NASA Astrophysics Data System (ADS)
Akinci, Aybige; Aochi, Hideo; Herrero, Andre; Pischiutta, Marta; Karanikas, Dimitris
2016-04-01
The city of Istanbul is characterized by one of the highest levels of seismic risk in Europe and the Mediterranean region. An important source of this increased risk is the remarkable probability of occurrence of a large earthquake, which stands at about 65% over the coming years due to the existing seismic gap and the post-1999 earthquake stress transfer on the western portion of the North Anatolian Fault Zone (NAFZ). In this study, we have simulated hybrid broadband time histories for two selected scenario earthquakes of magnitude M>7.0 in the Marmara Sea, within 10-20 km of Istanbul, believed to have generated the devastating 1509 event in the region. The physics-based rupture scenarios, which may be indicative of potential future events, are adopted to estimate the ground motion characteristics and their variability in the region. Two simulation techniques (a full 3D wave propagation method to generate low-frequency seismograms, <~1 Hz, and a stochastic technique to simulate high-frequency seismograms, >1 Hz) are used to compute realistic time series for scenario earthquakes of magnitude Mw>7.0 in the Marmara Sea region. A dynamic rupture is generated and computed with a boundary integral equation method, and propagation through the medium is realized with a finite difference approach (Aochi and Ulrich, 2015). The high-frequency radiation is computed using a stochastic finite-fault model approach based on a dynamic corner frequency (Motazedian and Atkinson, 2005; Boore, 2009). The results of the two simulation techniques are then merged by performing a weighted summation at intermediate frequencies to obtain broadband synthetic time series. The hybrid broadband ground motions computed with the proposed approach are validated by comparing peak ground acceleration (PGA), peak ground velocity (PGV), and spectral acceleration (SA) with recently proposed ground motion prediction equations (GMPEs) for the region. Our
NASA Astrophysics Data System (ADS)
Barnard, J. M.; Augarde, C. E.
2012-12-01
The simulation of reactions in flow through unsaturated porous media is a more complicated process when using particle tracking based models than in continuum based models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle tracking methods, especially when simulations of reactions are included, are computationally intensive, as the reaction simulations require tens of thousands of nearest neighbour searches per time step. Despite this, particle tracking methods merit further study due to their ability to eliminate numerical dispersion and to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete time random walk particle tracking methods to simulate reactive mass transport in porous media, which includes a variation of the colocation probability function based methods of reaction simulation presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle tracking methods means that they are well suited to parallelization using GPUs. The architecture of GPUs is single instruction, multiple data (SIMD): only one operation can be performed at any one time, but it can be performed on multiple data simultaneously. This allows for significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.
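A minimal sketch of one random-walk step followed by a probabilistic pairwise reaction sweep is shown below. The Gaussian colocation kernel is a simplified stand-in for the Benson & Meerschaert (2008) colocation density, and the brute-force O(N²) nearest-neighbour loop is exactly the kind of independent-iteration hot spot the abstract targets for GPU acceleration.

```python
import math, random

def step_and_react(particles, dt, D, beta):
    """One discrete-time random-walk step in 1-D, then an A+B->0
    reaction sweep: each particle finds its nearest living neighbour
    and the pair annihilates with a probability that decays with
    separation (simplified colocation kernel; beta is a reaction-rate
    scaling).  Returns the surviving positions."""
    sigma = math.sqrt(2.0 * D * dt)          # random-walk step size
    moved = [x + random.gauss(0.0, sigma) for x in particles]
    alive = [True] * len(moved)
    for i in range(len(moved)):
        if not alive[i]:
            continue
        # brute-force nearest living neighbour of particle i
        best, best_d = None, float("inf")
        for j in range(len(moved)):
            if j != i and alive[j]:
                d = abs(moved[i] - moved[j])
                if d < best_d:
                    best, best_d = j, d
        if best is not None:
            # colocation probability falls off with separation
            p = beta * math.exp(-best_d * best_d / (4.0 * D * dt))
            if random.random() < p:
                alive[i] = alive[best] = False
    return [x for x, a in zip(moved, alive) if a]
```

On a GPU, the inner distance loop would run once per particle in parallel, which is where the speed-up comes from.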
NASA Astrophysics Data System (ADS)
Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua
1997-04-01
Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is on a par with SPIHT. Furthermore, it is observed that SLCCA generally performs best on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
Prospect evaluation as a function of numeracy and probability denominator.
Millroth, Philip; Juslin, Peter
2015-05-01
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used.
Kumar, Sivakumar Prasanth; Jha, Prakash C; Pandya, Himanshu A; Jasrai, Yogesh T
2014-07-01
Molecular docking plays an important role in protein target identification by prioritizing probable druggable proteins using docking energies. Due to the limitations of docking scoring schemes, there is a need for structure-based approaches to gain confidence in theoretical binding affinities. In this direction, we present a receptor (protein)-based approach to predict probable protein targets using a small molecule of interest. We adopted a reverse approach wherein the ligand pharmacophore features were used to decipher interaction-complementary amino acids of protein cavities (a pseudoreceptor) and expressed as queries to match the cavities or binding sites of the protein dataset. These pseudoreceptor-based pharmacophore queries were used to estimate the total probability of each protein cavity, thereby representing the ligand binding efficiency of the protein. We applied this approach to predict 3 experimental protein targets among 28 Zea mays structures using 3 co-crystallized ligands as inputs, and compared its effectiveness against conventional docking results. We suggest that the combination of total probabilities and docking energies increases confidence in prioritizing probable protein targets using docking methods. These prediction hypotheses were further supported by DrugScoreX (DSX) pair potential calculations and molecular dynamics simulations.
Industrial Base: Significance of DoD’s Foreign Dependence
1991-01-01
defense industrial base: the U.S. defense industrial base information system and revised DOD guidance for assessing foreign dependence throughout the... DOD's foreign dependence is unknown: data bases and models are cited as a problem hindering effective industrial base planning. Determining if... Assess the procedures to include early consideration of foreign sourcing and dependency issues. Significance of foreign dependence on the DOD efforts
NASA Astrophysics Data System (ADS)
Papadopoulos, Vissarion; Kalogeris, Ioannis
2016-05-01
The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, by rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline-Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular of SUPG over finite difference schemes such as the modified Lax-Wendroff (the most frequently used method for the solution of the GDEE), are illustrated with numerical examples and explored further.
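For a fixed realisation of the random event, the GDEE reduces to a one-dimensional advection equation, p_t + a p_x = 0, and the baseline the paper compares against is the classical Lax-Wendroff finite difference scheme. A minimal sketch of that baseline (periodic boundaries, illustrative only, not the paper's finite element scheme):

```python
def lax_wendroff(p, a, dx, dt, steps):
    """Classical Lax-Wendroff update for p_t + a*p_x = 0.
    Stable for |a*dt/dx| <= 1; with periodic boundaries the scheme
    conserves total probability sum(p)*dx exactly."""
    c = a * dt / dx                      # Courant number
    n = len(p)
    for _ in range(steps):
        q = p[:]
        for i in range(n):
            pm, pp = q[i - 1], q[(i + 1) % n]
            p[i] = (q[i] - 0.5 * c * (pp - pm)
                    + 0.5 * c * c * (pp - 2.0 * q[i] + pm))
    return p
```

With a Courant number of exactly 1 the update degenerates to an exact one-cell shift, a handy sanity check; for c < 1 the scheme introduces the dispersion errors that motivate the SUPG alternative.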
NASA Astrophysics Data System (ADS)
Zhang, Yushu; Zhou, Jiantao; Chen, Fei; Zhang, Leo Yu; Xiao, Di; Chen, Bin; Liao, Xiaofeng
Existing Block Compressive Sensing (BCS) based image ciphers adopt the same sampling rate for all blocks, which may lead to the undesirable result that, after subsampling, significant blocks lose some more-useful information while insignificant blocks still retain some less-useful information. Motivated by this observation, we propose a scalable encryption framework (SEF) based on BCS together with a Sobel edge detector and cascade chaotic maps. Our work is firstly dedicated to the design of two new fusion techniques: chaos-based structurally random matrices, and chaos-based random convolution and subsampling. The basic idea is to divide an image into blocks of equal size and then diagnose their respective significance with the help of the Sobel edge detector. Chaos-based structurally random matrices are applied to significant blocks, whereas chaos-based random convolution and subsampling are responsible for the remaining insignificant ones. In comparison with existing BCS based image ciphers, the SEF applies lightweight subsampling and severe sensitivity encryption to the significant blocks, and severe subsampling and lightweight robustness encryption to the insignificant ones, in parallel, thus better protecting significant image regions.
Herts, Brian R; Schneider, Erika; Obuchowski, Nancy; Poggio, Emilio; Jain, Anil; Baker, Mark E
2009-08-01
The objectives of our study were to develop a model to predict the probability of reduced renal function after outpatient contrast-enhanced CT (CECT), based on patient age, sex, and race and on serum creatinine level before CT, or directly on estimated glomerular filtration rate (GFR) before CT, and to determine the relationship between patients with changes in creatinine level that characterize contrast-induced nephropathy and patients with reduced GFR after CECT. Of 5,187 outpatients who underwent CECT, 963 (18.6%) had serum creatinine levels obtained within 6 months before and 4 days after CECT. The estimated GFR was calculated before and after CT using the four-variable Modification of Diet in Renal Disease (MDRD) Study equation. Pre-CT serum creatinine level, age, race, sex, and pre-CT estimated GFR were tested using multiple-variable logistic regression models to determine the probability of having an estimated GFR of < 60 or < 45 mL/min/1.73 m(2) after CECT. Two thirds of the patients were used to create the models and one third to test them. We also determined discordance between patients who met standard definitions of contrast-induced nephropathy and those with a reduced estimated GFR after CECT. Significant (p < 0.002) predictors for a post-CT estimated GFR of < 60 mL/min/1.73 m(2) were age, race, sex, pre-CT serum creatinine level, and pre-CT estimated GFR. Sex, serum creatinine level, and pre-CT estimated GFR were significant factors (p < 0.001) for predicting a post-CT estimated GFR of < 45 mL/min/1.73 m(2). The probability is [exp(y) / (1 + exp(y))], where y = 6.21 - (0.10 x pre-CT estimated GFR) for an estimated GFR of < 60 mL/min/1.73 m(2), and y = 3.66 - (0.087 x pre-CT estimated GFR) for an estimated GFR of < 45 mL/min/1.73 m(2). A discrepancy between those who met contrast-induced nephropathy criteria by creatinine changes and those with a post-CT estimated GFR of < 60 mL/min/1.73 m(2) was detected in 208 of the 963 patients (21.6%).
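The two logistic models quoted in the abstract are simple enough to evaluate directly; a sketch using exactly the reported coefficients:

```python
import math

def prob_low_gfr(pre_ct_egfr, threshold=60):
    """Probability of a post-CECT estimated GFR below the threshold,
    using the logistic models reported in the abstract:
      y = 6.21  - 0.10  * eGFR   (threshold 60 mL/min/1.73 m^2)
      y = 3.66  - 0.087 * eGFR   (threshold 45 mL/min/1.73 m^2)
      p = exp(y) / (1 + exp(y))."""
    if threshold == 60:
        y = 6.21 - 0.10 * pre_ct_egfr
    elif threshold == 45:
        y = 3.66 - 0.087 * pre_ct_egfr
    else:
        raise ValueError("models fitted only for thresholds 60 and 45")
    return math.exp(y) / (1.0 + math.exp(y))
```

Note the model crosses p = 0.5 at a pre-CT eGFR of 62.1 for the 60 threshold, and the probability falls monotonically as pre-CT eGFR rises, as expected.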
Precursor Analysis for Flight- and Ground-Based Anomaly Risk Significance Determination
NASA Technical Reports Server (NTRS)
Groen, Frank
2010-01-01
This slide presentation reviews the precursor analysis for flight and ground based anomaly risk significance. It includes information on accident precursor analysis, real models vs. models, and probabilistic analysis.
NASA Astrophysics Data System (ADS)
Han, W.; Yang, J.
2016-11-01
This paper discusses the probability distribution characteristics of significant wave height (SWH) in the China Sea based on multi-satellite gridded data. The gridded SWH data merge corrected altimeter data from six satellites (TOPEX/Poseidon, Jason-1, Jason-2, ENVISAT, CryoSat-2, HY-2A) into a global SWH grid for 2000-2015 using the Inverse Distance Weighting method. We compare the wave height probability distributions of two schemes, where scheme two includes all six satellites and scheme one includes the other five satellites without HY-2A, over two wave height intervals: the first interval is [0, 25) m and the second is [4, 25) m. We find that the two schemes have very similar wave height probability distributions and probability trends, differing only in the interval [0.4, 1.8) m, where the probability mass exceeds 70%. Focusing on scheme two, we find that the interval of greatest wave height probability is [0.6, 3) m, and the probability that the SWH exceeds 4 m is less than 0.18%.
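The per-interval probabilities being compared are empirical frequencies over the gridded samples; a minimal sketch of that computation (the sample values below are invented for illustration):

```python
def interval_probability(swh_samples, lo, hi):
    """Empirical probability that significant wave height falls in
    the half-open interval [lo, hi), as a fraction of all samples."""
    if not swh_samples:
        return 0.0
    hits = sum(1 for h in swh_samples if lo <= h < hi)
    return hits / len(swh_samples)
```

Comparing two merging schemes then amounts to evaluating this function on each scheme's grid values over the same intervals, e.g. [0.4, 1.8) m and [4, 25) m.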
Techasrivichien, Teeranee; Darawuttimaprakorn, Niphon; Punpuing, Sureeporn; Musumari, Patou Masika; Lukhele, Bhekumusa Wellington; El-Saaidi, Christina; Suguimoto, S Pilar; Feldman, Mitchell D; Ono-Kihara, Masako; Kihara, Masahiro
2016-02-01
Thailand has undergone rapid modernization with implications for changes in sexual norms. We investigated sexual behavior and attitudes across generations and gender among a probability sample of the general population of Nonthaburi province located near Bangkok in 2012. A tablet-based survey was performed among 2,138 men and women aged 15-59 years identified through a three-stage, stratified, probability proportional to size, clustered sampling. Descriptive statistical analysis was carried out accounting for the effects of multistage sampling. Relationship of age and gender to sexual behavior and attitudes was analyzed by bivariate analysis followed by multivariate logistic regression analysis to adjust for possible confounding. Patterns of sexual behavior and attitudes varied substantially across generations and gender. We found strong evidence for a decline in the age of sexual initiation, a shift in the type of the first sexual partner, and a greater rate of acceptance of adolescent premarital sex among younger generations. The study highlighted profound changes among young women as evidenced by a higher number of lifetime sexual partners as compared to older women. In contrast to the significant gender gap in older generations, sexual profiles of Thai young women have evolved to resemble those of young men with attitudes gradually converging to similar sexual standards. Our data suggest that higher education, being never-married, and an urban lifestyle may have been associated with these changes. Our study found that Thai sexual norms are changing dramatically. It is vital to continue monitoring such changes, considering the potential impact on the HIV/STIs epidemic and unintended pregnancies.
1983-07-26
DeGroot, Morris H. Probability and Statistics. Addison-Wesley Publishing Company, Reading, Massachusetts, 1975. [Gillogly 78] Gillogly, J.J. Performance... distribution [DeGroot 75] has just begun. The beta distribution has several features that might make it a more reasonable choice. As with the normal-based... 1982. [Cooley 65] Cooley, J.M. and Tukey, J.W. An algorithm for the machine calculation of complex Fourier series. Math. Comp. 19, 1965. [DeGroot 75]
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang
2016-12-01
In this paper, we investigate the problem of low probability of intercept (LPI)-based adaptive radar waveform optimization in signal-dependent clutter for joint radar and cellular communication systems, where the radar system optimizes the transmitted waveform such that the interference caused to the cellular communication systems is strictly controlled. Assuming that the precise knowledge of the target spectra, the power spectral densities (PSDs) of signal-dependent clutters, the propagation losses of corresponding channels and the communication signals is known by the radar, three different LPI based criteria for radar waveform optimization are proposed to minimize the total transmitted power of the radar system by optimizing the multicarrier radar waveform with a predefined signal-to-interference-plus-noise ratio (SINR) constraint and a minimum required capacity for the cellular communication systems. These criteria differ in the way the communication signals scattered off the target are considered in the radar waveform design: (1) as useful energy, (2) as interference or (3) ignored altogether. The resulting problems are solved analytically and their solutions represent the optimum power allocation for each subcarrier in the multicarrier radar waveform. We show with numerical results that the LPI performance of the radar system can be significantly improved by exploiting the scattered echoes off the target due to cellular communication signals received at the radar receiver.
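The core of power minimisation under an SINR floor can be illustrated with a much-simplified per-subcarrier allocation: the smallest transmit power on each subcarrier that still meets the SINR target. This sketch ignores the signal-dependent clutter coupling and the communication-capacity constraint treated in the paper; it only shows why meeting an SINR floor with minimum total power yields an LPI waveform.

```python
def min_power_allocation(gains, noise_plus_interf, sinr_target):
    """Per-subcarrier transmit power P_k = gamma * (N_k + I_k) / g_k,
    where g_k is the target-channel power gain on subcarrier k,
    N_k + I_k the noise-plus-interference power, and gamma the
    required SINR.  Each subcarrier then achieves exactly gamma."""
    return [sinr_target * ni / g
            for g, ni in zip(gains, noise_plus_interf)]
```

Subcarriers with strong target returns (large g_k) get less power, which is precisely the behaviour that lowers the probability of intercept.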
Yuan, Xiguo; Zhang, Junying; Wang, Yue
2010-12-01
One of the most challenging problems in studying human common complex diseases is to search for both strong and weak susceptibility single-nucleotide polymorphisms (SNPs) and to identify forms of genetic disease models. A number of methods have been proposed for this purpose, but many have not been validated on diverse genome datasets, so their abilities in real practice are unclear. In this paper, we present a novel SNP association study method based on probability theory, called ProbSNP. The method first detects SNPs by evaluating their joint probabilities in combination with disease status and selects those with the lowest joint probabilities as susceptibility candidates; it then identifies forms of genetic disease models by testing multiple-locus interactions among the selected SNPs. The joint probabilities of combined SNPs are estimated by establishing Gaussian probability density functions, in which the related parameters (mean value and standard deviation) are evaluated based on allele and haplotype frequencies. Finally, we test and validate the method using various genome datasets. We find that ProbSNP has shown remarkable success in applications to both simulated genome data and real genome-wide data.
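The "lowest joint probability" selection step can be sketched as below. The Gaussian scoring and the product form of the joint probability follow the abstract's description; the scores, means, and standard deviations here are placeholders, since the paper derives them from allele and haplotype frequencies.

```python
import math

def gaussian_pdf(x, mean, std):
    """Gaussian probability density used to score a combined SNP."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def lowest_joint_probability(snp_scores, means, stds, k=2):
    """Score each SNP under its Gaussian model and return the indices
    of the k lowest-probability SNPs -- the susceptibility candidates,
    since low probability under the null model flags an outlier."""
    probs = [gaussian_pdf(x, m, s)
             for x, m, s in zip(snp_scores, means, stds)]
    order = sorted(range(len(probs)), key=lambda i: probs[i])
    return order[:k]
```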
Chen, Shyi-Ming; Chen, Shen-Wen
2015-03-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. Firstly, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Then, it groups the obtained two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, it calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group, respectively. Finally, it performs the forecasting based on the probabilities of the down-trend, the equal-trend, and the up-trend of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
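The trend-probability step of the method is a simple relative-frequency count within each relationship group; a minimal sketch (trend labels are illustrative):

```python
def trend_probabilities(trends):
    """Probabilities of the down-, equal- and up-trend within one
    fuzzy-trend logical relationship group.  `trends` is a list of
    labels drawn from {'down', 'equal', 'up'}; the result is the
    relative frequency of each label and sums to 1."""
    n = len(trends)
    counts = {"down": 0, "equal": 0, "up": 0}
    for t in trends:
        counts[t] += 1
    return {k: (c / n if n else 0.0) for k, c in counts.items()}
```

Forecasting then weights each group's candidate outcomes by these probabilities.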
Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R.; Liu, Yong; Yang, Yong; Eickhoff, Simon B.; Jiang, Tianzi
2016-01-01
Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a ‘core’ co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies. PMID:26037052
Laser-based measurement of transition probabilities of neon 2p⁵3s-2p⁵3p transitions
NASA Astrophysics Data System (ADS)
Fujimoto, Takashi; Goto, Chiaki; Uetani, Yasunori; Fukuda, Kuniya
1985-01-01
By using the magic-angle, pulsed-excitation method in the presence of a magnetic field, the authors have measured the branching ratios for 2p⁵3s-2p⁵3p transitions in neon. By combining values for the lifetimes of the upper levels with the branching ratios, they have determined the transition probabilities of 31 transitions. The results are in good agreement with those from emission spectroscopy of a high-pressure arc plasma by Bridges and Wiese.
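The combination step is a standard relation: with branching ratios summing to 1 for an upper level of lifetime τ, each transition probability (Einstein A coefficient) is A_i = BR_i / τ, so the A values sum to 1/τ. A sketch (the numeric values in the test are illustrative, not the paper's data):

```python
def transition_probabilities(branching_ratios, lifetime):
    """Einstein A coefficients from measured branching ratios and the
    upper-level lifetime: A_i = BR_i / tau.  Since sum(BR_i) = 1,
    the coefficients satisfy sum(A_i) = 1/tau.  tau in seconds,
    A in s^-1."""
    total = sum(branching_ratios)
    if abs(total - 1.0) > 1e-6:
        raise ValueError("branching ratios must sum to 1")
    return [br / lifetime for br in branching_ratios]
```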
Lindhiem, Oliver; Yu, Lan; Grasso, Damion J; Kolko, David J; Youngstrom, Eric A
2015-04-01
This study adapts the Posterior Probability of Diagnosis (PPOD) Index for use with screening data. The original PPOD Index, designed for use in the context of comprehensive diagnostic assessments, is overconfident when applied to screening data. To correct for this overconfidence, we describe a simple method for adjusting the PPOD Index to improve its calibration when used for screening. Specifically, we compare the adjusted PPOD Index to the original index and naïve Bayes probability estimates on two dimensions of accuracy, discrimination and calibration, using a clinical sample of children and adolescents (N = 321) whose caregivers completed the Vanderbilt Assessment Scale to screen for attention-deficit/hyperactivity disorder and who subsequently completed a comprehensive diagnostic assessment. Results indicated that the adjusted PPOD Index, original PPOD Index, and naïve Bayes probability estimates are comparable using traditional measures of accuracy (sensitivity, specificity, and area under the curve), but the adjusted PPOD Index showed superior calibration. We discuss the importance of calibration for screening and diagnostic support tools when applied to individual patients.
The purpose of this manuscript is to describe the practical strategies developed for the implementation of the Minnesota Children's Pesticide Exposure Study (MNCPES), which is one of the first probability-based samples of multi-pathway and multi-pesticide exposures in children....
Probability workshop to be better in probability topic
NASA Astrophysics Data System (ADS)
Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed
2015-02-01
The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students in higher education have an effect on their performance. 62 fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to anxiety level; that is, a higher level of statistics anxiety does not lead to a lower score in probability performance. The study also revealed that motivated students benefited from the probability workshop: their performance in the probability topic showed a positive improvement compared to before the workshop. In addition, there is a significant difference in performance between genders, with better achievement among female students than male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.
PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT
We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...
PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT
We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP), can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...
Probability Surveys, Conditional Probability, and Ecological Risk Assessment
We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...
ERIC Educational Resources Information Center
Koo, Reginald; Jones, Martin L.
2011-01-01
Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
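The classic example of such a problem is the "hat-check" (derangement) probability: the chance that a random permutation of n items leaves no item in its original place, which by inclusion-exclusion is the partial sum of the series for 1/e. A short sketch:

```python
import math

def derangement_probability(n):
    """Probability that a uniformly random permutation of n items has
    no fixed point.  Inclusion-exclusion gives
    sum_{k=0}^{n} (-1)^k / k!, which converges to 1/e very quickly."""
    return sum((-1) ** k / math.factorial(k) for k in range(n + 1))
```

Already at n = 12 the value agrees with 1/e to better than nine decimal places, which is why 1/e shows up so often in these problems.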
Capture probabilities for secondary resonances
NASA Technical Reports Server (NTRS)
Malhotra, Renu
1990-01-01
A perturbed pendulum model is used to analyze secondary resonances, and it is shown that a self-similarity between secondary and primary resonances exists. Henrard's (1982) theory is used to obtain formulas for the capture probability into secondary resonances. The tidal evolution of Miranda and Umbriel is considered as an example, and significant probabilities of capture into secondary resonances are found.
Nuclear spin of odd-odd α emitters based on the behavior of α-particle preformation probability
NASA Astrophysics Data System (ADS)
Ismail, M.; Adel, A.; Botros, M. M.
2016-05-01
The preformation probabilities of an α cluster inside radioactive parent nuclei are investigated for both odd-even and odd-odd nuclei. The calculations cover the isotopic chains from Ir to Ac in the mass regions 166 ≤ A ≤ 215 and 77 ≤ Z ≤ 89. The calculations are performed in the framework of the density-dependent cluster model. A realistic density-dependent nucleon-nucleon (NN) interaction with a finite-range exchange part is used to calculate the microscopic α-nucleus potential in the well-established double-folding model. The main effect of antisymmetrization under exchange of nucleons between the α and daughter nuclei has been included in the folding model through the finite-range exchange part of the NN interaction. The calculated potential is then implemented to find both the assault frequency and the penetration probability of the α particle by means of the Wentzel-Kramers-Brillouin approximation in combination with the Bohr-Sommerfeld quantization condition. The correlation of the α-particle preformation probability with the neutron and proton level sequences of the parent nucleus, as obtained in our previous work, is extended to odd-even and odd-odd nuclei to determine the nuclear spins and parities. Two spin coupling rules, namely the strong and weak rules, are used to determine the nuclear spin for odd-odd isotopes. This work can be a useful reference for theoretical calculation of undetermined nuclear spins of odd-odd nuclei in the future.
NASA Astrophysics Data System (ADS)
Koido, Tetsuya; Tomarikawa, Ko; Yonemura, Shigeru; Tokumasu, Takashi
2011-05-01
Molecular Dynamics (MD) was used to simulate dissociative adsorption of a hydrogen molecule on the Pt(111) surface, considering the movement of the surface atoms and gas molecules. The Embedded Atom Method (EAM) was applied to represent the interaction potential. The parameters of the EAM potential were determined such that the values of the dissociation barrier at different sites estimated by the EAM potential agreed with DFT calculation results. A number of MD simulations of gas molecules impinging on a Pt(111) surface were carried out, randomly varying the initial orientations, incident azimuth angles, and impinging positions on the surface, with fixed initial translational energy, initial rotational energy, and incident polar angle. The number of collisions in which the gas molecule dissociated was counted to compute the dissociation probability. The dissociation probability was analyzed and expressed as a mathematical function of the initial conditions of the impinging molecule, namely the translational energy, rotational energy, and incident polar angle. Furthermore, the utility of the model was verified by comparing its results with raw MD simulation results of molecular beam experiments.
Finn, Patrick
2003-01-01
An evidence-based framework can be described as an empirically-driven, measurement-based, client-sensitive approach for selecting treatments. It is believed that using such a framework is more likely to result in a clinically significant outcome. For this paper, a clinically significant outcome was defined as a meaningful treatment change. It was suggested that there are at least three groups for whom a treatment's outcome is meaningful. These groups include clinicians/clinical researchers, the clients, and relevant others who have some interest in the outcome (e.g., parents of a child who stutters). The meaning and measurement of clinical significance was discussed for each of these three groups, based on research from the behavioral stuttering treatment literature. The reader will learn about and be able to (1) broadly define a clinically significant outcome and identify some of the groups who are interested in such an outcome and (2) describe how clinical significance has been evaluated in stuttering treatment within an evidence-based framework.
Hara, Toshihide; Sato, Keiko; Ohya, Masanori
2010-05-08
Sequence alignment is one of the most important techniques for analyzing biological systems. Existing alignment methods are not complete, however, and more accurate methods are still needed. In particular, alignment of homologous sequences with low sequence similarity is not yet at a satisfactory level. Recent methods for aligning protein sequences typically use an empirically determined measure. As an example, a measure is usually defined by a combination of two quantities: (1) the sum of substitutions between two residue segments, and (2) the sum of gap penalties in insertion/deletion regions. Such a measure is determined on the assumption that there is no intersite correlation along the sequences. In this paper, we improve the alignment by taking into account the correlation of consecutive residues. We introduce a new alignment method, called MTRAP, based on a metric defined on compound systems of two sequences. In benchmark tests on PREFAB 4.0 and HOMSTRAD, our pairwise alignment method gives higher accuracy than other methods such as ClustalW2, TCoffee, and MAFFT. Especially for sequences with sequence identity less than 15%, our method improves the alignment accuracy significantly. Moreover, we show that our algorithm works well together with a consistency-based progressive multiple alignment by modifying TCoffee to use our measure. Our method leads to a significant increase in alignment accuracy compared with other methods; the improvement is especially clear in the low-identity range. The source code is available at our web page, whose address is found in the section "Availability and requirements".
NASA Astrophysics Data System (ADS)
Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios
2016-06-01
Based on multiscale agent-based computations we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics we have developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network as well as the per-contact transmission probabilities by age group we exploited the so called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO) as well as to recently published data of key epidemiological variables, such as the mean time to death, recovery and the case fatality rate.
Clark, Samuel J.; Kahn, Kathleen; Houle, Brian; Arteche, Adriane; Collinson, Mark A.; Tollman, Stephen M.; Stein, Alan
2013-01-01
Background There is evidence that a young child's risk of dying increases following the mother's death, but little is known about the risk when the mother becomes very ill prior to her death. We hypothesized that children would be more likely to die during the period several months before their mother's death, as well as for several months after her death. Therefore we investigated the relationship between young children's likelihood of dying and the timing of their mother's death and, in particular, the existence of a critical period of increased risk. Methods and Findings Data from a health and socio-demographic surveillance system in rural South Africa were collected on children 0–5 y of age from 1 January 1994 to 31 December 2008. Discrete time survival analysis was used to estimate children's probability of dying before and after their mother's death, accounting for moderators. 1,244 children (3% of sample) died from 1994 to 2008. The probability of child death began to rise 6–11 mo prior to the mother's death and increased markedly during the 2 mo immediately before the month of her death (odds ratio [OR] 7.1 [95% CI 3.9–12.7]), in the month of her death (OR 12.6 [6.2–25.3]), and during the 2 mo following her death (OR 7.0 [3.2–15.6]). This increase in the probability of dying was more pronounced for children whose mothers died of AIDS or tuberculosis compared to other causes of death, but the pattern remained for causes unrelated to AIDS/tuberculosis. Infants aged 0–6 mo at the time of their mother's death were nine times more likely to die than children aged 2–5 y. The limitations of the study included the lack of knowledge about precisely when a very ill mother will die, a lack of information about child nutrition and care, and the diagnosis of AIDS deaths by verbal autopsy rather than serostatus. Conclusions Young children in lower income settings are more likely to die not only after their mother's death but also in the months before, when
Probability of causation approach
Jose, D.E.
1988-08-01
Probability of causation (PC) is sometimes viewed as a great improvement by those persons who are not happy with the present rulings of courts in radiation cases. The author does not share that hope and expects that PC will not play a significant role in these issues for at least the next decade. If it is ever adopted in a legislative compensation scheme, it will be used in a way that is unlikely to please most scientists. Consequently, PC is a false hope for radiation scientists, and its best contribution may well lie in some of the spin-off effects, such as an influence on medical practice.
A Dialogical, Story-Based Evaluation Tool: The Most Significant Change Technique.
ERIC Educational Resources Information Center
Dart, Jessica; Davies, Rick
2003-01-01
Describes the Most Significant Change (MSC) technique, a dialogical story-based technique that aims to facilitate program improvement by focusing the direction of work towards explicitly valued directions and away from less valued directions. Illustrates the use of MSC in an Australian case study. (SLD)
ERIC Educational Resources Information Center
Lemons, Christopher J.; Kearns, Devin M.; Davidson, Kimberly A.
2014-01-01
Students with persistent and severe reading difficulties pose a significant challenge for educators. The primary purpose of this article is to provide a detailed demonstration of data-based individualization (DBI) implementation in the area of reading. Also provided are various resources a teacher may access to learn more about the instructional…
Shrimali, Rajeev; Ahmad, Shamim; Berrong, Zuzana; Okoev, Grigori; Matevosyan, Adelaida; Razavi, Ghazaleh Shoja E; Petit, Robert; Gupta, Seema; Mkrtichyan, Mikayel; Khleif, Samir N
2017-01-01
We previously demonstrated that in addition to generating an antigen-specific immune response, Listeria monocytogenes (Lm)-based immunotherapy significantly reduces the ratio of regulatory T cells (Tregs)/CD4(+) and myeloid-derived suppressor cells (MDSCs) in the tumor microenvironment. Since Lm-based immunotherapy is able to inhibit the immune suppressive environment, we hypothesized that combining this treatment with agonist antibody to a co-stimulatory receptor that would further boost the effector arm of immunity will result in significant improvement of anti-tumor efficacy of treatment. Here we tested the immune and therapeutic efficacy of Listeria-based immunotherapy combination with agonist antibody to glucocorticoid-induced tumor necrosis factor receptor-related protein (GITR) in TC-1 mouse tumor model. We evaluated the potency of combination on tumor growth and survival of treated animals and profiled tumor microenvironment for effector and suppressor cell populations. We demonstrate that combination of Listeria-based immunotherapy with agonist antibody to GITR synergizes to improve immune and therapeutic efficacy of treatment in a mouse tumor model. We show that this combinational treatment leads to significant inhibition of tumor-growth, prolongs survival and leads to complete regression of established tumors in 60% of treated animals. We determined that this therapeutic benefit of combinational treatment is due to a significant increase in tumor infiltrating effector CD4(+) and CD8(+) T cells along with a decrease of inhibitory cells. To our knowledge, this is the first study that exploits Lm-based immunotherapy combined with agonist anti-GITR antibody as a potent treatment strategy that simultaneously targets both the effector and suppressor arms of the immune system, leading to significantly improved anti-tumor efficacy. We believe that our findings depicted in this manuscript provide a promising and translatable strategy that can enhance the overall
Probability for Weather and Climate
NASA Astrophysics Data System (ADS)
Smith, L. A.
2013-12-01
Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather resembles interpolation in state space, while climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory which has proven essential in meteorology will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error. The aims of different ensemble strategies, and fundamental differences in ensemble design to support of
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
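The transformation bias described above can be illustrated with a short Monte Carlo sketch, assuming a beta-binomial model for the overdispersed clusters; the parameter values below are illustrative, not those used in the paper:

```python
import math
import random

random.seed(0)

# Clusters share a probability drawn from a Beta distribution whose
# variance is set by the intracluster correlation rho (beta-binomial model).
m, n_sims, p, rho = 20, 20000, 0.2, 0.1   # cluster size, replicates, mean, ICC
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho

def arcsine(x):
    """Arcsine (angular) transformation of a proportion."""
    return math.asin(math.sqrt(x))

est = []
for _ in range(n_sims):
    pi = random.betavariate(a, b)                      # cluster-specific probability
    k = sum(random.random() < pi for _ in range(m))    # binomial draw within cluster
    est.append(arcsine(k / m))

# Bias of the transformed estimate relative to the transformed true mean.
bias = sum(est) / n_sims - arcsine(p)
print(f"mean arcsine(p_hat) - arcsine(p) = {bias:.4f}")
```

With these settings the simulated bias is clearly negative, consistent with the concavity of the arcsine transform below p = 0.5 under overdispersion.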
NASA Astrophysics Data System (ADS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming
2013-03-01
The existing methods for early and differential diagnosis of oral cancer are limited because early symptoms are inconspicuous and imaging examination methods are imperfect. In this paper, classification models for oral adenocarcinoma, carcinoma tissues and a control group, using just four features, are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of noise-reduction and posterior-probability mechanisms. HGP shows much better performance in the experimental results. During the experiments, oral tissues were divided into three groups: adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. The results show that utilizing HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory.
Probability and Relative Frequency
NASA Astrophysics Data System (ADS)
Drieschner, Michael
2016-01-01
The concept of probability seems to have been inexplicable since its invention in the seventeenth century. In its use in science, probability is closely related with relative frequency. So the task seems to be interpreting that relation. In this paper, we start with predicted relative frequency and show that its structure is the same as that of probability. I propose to call that the `prediction interpretation' of probability. The consequences of that definition are discussed. The "ladder"-structure of the probability calculus is analyzed. The expectation of the relative frequency is shown to be equal to the predicted relative frequency. Probability is shown to be the most general empirically testable prediction.
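The claim that the expectation of the relative frequency equals the predicted probability follows from linearity of expectation; a standard one-line derivation (a sketch, not taken verbatim from the paper):

```latex
% n independent trials, each with success probability p; X_i \in \{0,1\}.
% Relative frequency: f_n = \frac{1}{n}\sum_{i=1}^{n} X_i.
\mathbb{E}[f_n]
  \;=\; \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[X_i]
  \;=\; \frac{1}{n}\cdot n\,p
  \;=\; p .
```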
Monoclonal gammopathy of undetermined significance and risk of infections: a population-based study.
Kristinsson, Sigurdur Y; Tang, Min; Pfeiffer, Ruth M; Björkholm, Magnus; Goldin, Lynn R; Blimark, Cecilie; Mellqvist, Ulf-Henrik; Wahlin, Anders; Turesson, Ingemar; Landgren, Ola
2012-06-01
No comprehensive evaluation has been made to assess the risk of viral and bacterial infections among patients with monoclonal gammopathy of undetermined significance. Using population-based data from Sweden, we estimated risk of infections among 5,326 monoclonal gammopathy of undetermined significance patients compared to 20,161 matched controls. Patients with monoclonal gammopathy of undetermined significance had a 2-fold increased risk (P<0.05) of developing any infection at 5- and 10-year follow up. More specifically, patients with monoclonal gammopathy of undetermined significance had an increased risk (P<0.05) of bacterial (pneumonia, osteomyelitis, septicemia, pyelonephritis, cellulitis, endocarditis, and meningitis), and viral (influenza and herpes zoster) infections. Patients with monoclonal gammopathy of undetermined significance with M-protein concentrations over 2.5 g/dL at diagnosis had highest risks of infections. However, the risk was also increased (P<0.05) among those with concentrations below 0.5 g/dL. Patients with monoclonal gammopathy of undetermined significance who developed infections had no excess risk of developing multiple myeloma, Waldenström macroglobulinemia or related malignancy. Our findings provide novel insights into the mechanisms behind infections in patients with plasma cell dyscrasias, and may have clinical implications.
ERIC Educational Resources Information Center
Pale, Joseph W.
2016-01-01
Teacher-based instruction is the usual method used by most teachers in high school. Traditionally, teachers direct the learning and students work individually, assuming a receptive role in their education. The student-based learning approach is the instructional use of small groups of students working together to accomplish shared goals to increase…
Understanding text-based persuasion and support tactics of concerned significant others.
van Stolk-Cooke, Katherine; Hayes, Marie; Baumel, Amit; Muench, Frederick
2015-01-01
The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs' perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs' perceptions of their TSOs affect these characteristics.
NASA Astrophysics Data System (ADS)
Courtade, Ginevra Rose
Federal mandates (A Nation at Risk, 1983 and Project 2061: Science for all Americans, 1985) as well as the National Science Education Standards (NRC, 1996) call for science education for all students. Recent educational laws (IDEA, 1997; NCLB, 2002) require access to and assessment of the general curriculum, including science, for all students with disabilities. Although some research exists on teaching academics to students with significant disabilities, the research on teaching science is especially limited (Browder, Spooner, Ahlgrim-Delzell, Harris, & Wakeman, 2006; Browder, Wakeman, et al., 2006; Courtade, et al., 2006). The purpose of this investigation was to determine if training teachers of students with significant disabilities to teach science concepts using a guided inquiry-based method would change the way science was instructed in the classroom. Further objectives of this study were to determine if training the teachers would increase students' participation and achievement in science. The findings of this study demonstrated a functional relationship between the inquiry-based science instruction training and teacher's ability to instruct students with significant disabilities in science using inquiry-based science instruction. The findings of this study also indicated a functional relationship between the inquiry-based science instruction training and acquisition of student inquiry skills. Also, findings indicated an increase in the number of science content standards being addressed after the teachers received the training. Some students were also able to acquire new science terms after their teachers taught using inquiry-based instruction. Finally, social validity measures indicated a high degree of satisfaction with the intervention and its intended outcomes.
Prescott, Jason D.; Tran, Thuy B.; Postlewait, Lauren M.; Maithel, Shishir K.; Wang, Tracy S.; Glenn, Jason A.; Hatzaras, Ioannis; Shenoy, Rivfka; Phay, John E.; Keplinger, Kara; Fields, Ryan C.; Jin, Linda X.; Weber, Sharon M.; Salem, Ahmed; Sicklick, Jason K.; Gad, Shady; Yopp, Adam C.; Mansour, John C.; Duh, Quan-Yang; Seiser, Natalie; Solorzano, Carmen C.; Kiernan, Colleen M.; Votanopoulos, Konstantinos I.; Levine, Edward A.; Poultsides, George A.; Pawlik, Timothy M.
2016-01-01
Objective To evaluate conditional disease-free survival (CDFS) for patients who underwent curative intent surgery for adrenocortical carcinoma (ACC). Background ACC is a rare but aggressive tumor. Survival estimates are usually reported as survival from the time of surgery. CDFS estimates may be more clinically relevant by accounting for the changing likelihood of disease-free survival (DFS) according to time elapsed after surgery. Methods CDFS was assessed using a multi-institutional cohort of patients. Cox proportional hazards models were used to evaluate factors associated with DFS. Three-year CDFS (CDFS3) estimates at “x” year after surgery were calculated as follows: CDFS3=DFS(x+3)/DFS(x). Results One hundred ninety-two patients were included in the study cohort; median patient age was 52 years. On presentation, 36% of patients had a functional tumor and median size was 11.5 cm. Most patients underwent R0 resection (75%) and 9% had N1 disease. Overall 1-, 3-, and 5-year DFS was 59%, 34%, and 22%, respectively. Using CDFS estimates, the probability of remaining disease free for an additional 3 years given that the patient had survived without disease at 1, 3, and 5 years, was 43%, 53%, and 70%, respectively. Patients with less favorable prognosis at baseline demonstrated the greatest increase in CDFS3 over time (eg, capsular invasion: 28%–88%, Δ60% vs no capsular invasion: 51%–87%, Δ36%). Conclusions DFS estimates for patients with ACC improved dramatically over time, in particular among patients with initial worse prognoses. CDFS estimates may provide more clinically relevant information about the changing likelihood of DFS over time. PMID:28009746
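The conditional survival calculation above, CDFS3(x) = DFS(x+3)/DFS(x), can be sketched in a few lines; the DFS values below are hypothetical interpolations for illustration, not the study's actual survival curve:

```python
# Hypothetical disease-free survival (DFS) proportions by year after surgery.
# Illustrative only: values between the reported 1-, 3-, and 5-year marks
# are made up for this sketch.
dfs = {0: 1.00, 1: 0.59, 2: 0.44, 3: 0.34, 4: 0.28,
       5: 0.22, 6: 0.19, 7: 0.17, 8: 0.155}

def cdfs3(x):
    """Probability of remaining disease-free 3 more years,
    conditional on being disease-free at year x."""
    return dfs[x + 3] / dfs[x]

for x in (1, 3, 5):
    print(f"CDFS3 at year {x}: {cdfs3(x):.2f}")
```

The conditional estimates rise with time survived, mirroring the pattern the authors report: the longer a patient remains disease-free, the better the outlook for the next three years.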
McCullagh, Gregory B; Bishop, Cory D; Wyeth, Russell C
2014-12-01
Tritonia diomedea (synonymous with Tritonia tetraquetra) navigates in turbulent odour plumes, crawling upstream towards prey and downstream to avoid predators. This is probably accomplished by odour-gated rheotaxis, but other possibilities have not been excluded. Our goal was to test whether T. diomedea uses odour-gated rheotaxis and to simultaneously determine which of the cephalic sensory organs (rhinophores and oral veil) are required for navigation. In a first experiment, slugs showed no coherent responses to streams of odour directed at single rhinophores. In a second experiment, navigation in prey and predator odour plumes was compared between animals with unilateral rhinophore lesions, denervated oral veils, or combined unilateral rhinophore lesions and denervated oral veils. In all treatments, animals navigated in a similar manner to that of control and sham-operated animals, indicating that a single rhinophore provides sufficient sensory input for navigation (assuming that a distributed flow measurement system would also be affected by the denervations). Amongst various potential navigational strategies, only odour-gated positive rheotaxis can produce the navigation tracks we observed in prey plumes while receiving input from a single sensor. Thus, we provide strong evidence that T. diomedea uses odour-gated rheotaxis in attractive odour plumes, with odours and flow detected by the rhinophores. In predator plumes, slugs turned downstream to varying degrees rather than orienting directly downstream for crawling, resulting in greater dispersion for negative rheotaxis in aversive plumes. These conclusions are the first explicit confirmation of odour-gated rheotaxis as a navigational strategy in gastropods and are also a foundation for exploring the neural circuits that mediate odour-gated rheotaxis. © 2014. Published by The Company of Biologists Ltd.
New Classification Method Based on Support-Significant Association Rules Algorithm
NASA Astrophysics Data System (ADS)
Li, Guoxin; Shi, Wen
One of the most well-studied problems in data mining is mining for association rules. Research has also introduced association rule mining methods for classification tasks, and such classification methods can be applied to customer segmentation. Currently, most association rule mining methods are based on a support-confidence structure, in which rules satisfying both minimum support and minimum confidence are returned to the analyst as strong association rules. However, this type of association rule mining lacks rigorous statistical guarantees and can even be misleading. A new classification model for customer segmentation, based on an association rule mining algorithm, is proposed in this paper. The new model is based on the support-significant association rule mining method, in which the confidence measure for an association rule is replaced by the rule's statistical significance, a better evaluation standard for association rules. A data experiment on customer segmentation data from UCI indicated the effectiveness of this new model.
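A minimal sketch of the support and confidence measures, together with a significance-based alternative, on toy transactions. The paper does not specify its significance measure; a one-sided Fisher exact (hypergeometric) test on co-occurrence counts is used here as an assumed stand-in:

```python
from math import comb

# Toy transaction database; we evaluate the rule A -> B. Illustrative only.
transactions = [
    {"A", "B"}, {"A", "B"}, {"A", "B"}, {"A"}, {"B"},
    {"A", "B"}, {"B", "C"}, {"A", "C"}, {"A", "B"}, {"C"},
]
n = len(transactions)
n_a = sum("A" in t for t in transactions)
n_b = sum("B" in t for t in transactions)
n_ab = sum("A" in t and "B" in t for t in transactions)

support = n_ab / n        # fraction of transactions containing both A and B
confidence = n_ab / n_a   # fraction of A-transactions that also contain B

# Significance stand-in: one-sided hypergeometric p-value -- the probability
# of observing >= n_ab co-occurrences if A and B were independent.
p_value = sum(
    comb(n_a, k) * comb(n - n_a, n_b - k) / comb(n, n_b)
    for k in range(n_ab, min(n_a, n_b) + 1)
)
print(support, confidence, p_value)
```

A rule can show high confidence yet an unimpressive p-value (as in this toy example), which is exactly the kind of misleading case a significance criterion is meant to filter out.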
Jang, Cheng-Shin
2015-05-01
Accurately classifying the spatial features of the water temperatures and discharge rates of hot springs is crucial for environmental resources use and management. This study spatially characterized classifications of the water temperatures and discharge rates of hot springs in the Tatun Volcanic Region of Northern Taiwan by using indicator kriging (IK). The water temperatures and discharge rates of the springs were first assigned to high, moderate, and low categories according to the two thresholds of the proposed spring classification criteria. IK was then used to model the occurrence probabilities of the water temperatures and discharge rates of the springs and probabilistically determine their categories. Finally, nine combinations were acquired from the probability-based classifications for the spatial features of the water temperatures and discharge rates of the springs. Moreover, various combinations of spring water features were examined according to seven subzones of spring use in the study region. The research results reveal that probability-based classifications using IK provide practicable insights related to propagating the uncertainty of classifications according to the spatial features of the water temperatures and discharge rates of the springs. The springs in the Beitou (BT), Xingyi Road (XYR), Zhongshanlou (ZSL), and Lengshuikeng (LSK) subzones are suitable for supplying tourism hotels with a sufficient quantity of spring water because they have high or moderate discharge rates. Furthermore, natural hot springs in riverbeds and valleys should be developed in the Dingbeitou (DBT), ZSL, Xiayoukeng (XYK), and Macao (MC) subzones because of low discharge rates and low or moderate water temperatures.
The National Aquatic Resource Surveys (NARS) use probability-survey designs to assess the condition of the nation’s waters. In probability surveys (also known as sample-surveys or statistical surveys), sampling sites are selected randomly.
ERIC Educational Resources Information Center
Bailey, David H.
2000-01-01
Some of the most impressive-sounding criticisms of the conventional theory of biological evolution involve probability. Presents a few examples of how probability should and should not be used in discussing evolution. (ASK)
Measuring local context as context-word probabilities.
Hahn, Lance W
2012-06-01
Context enables readers to quickly recognize a related word but disturbs recognition of unrelated words. The relatedness of a final word to a sentence context has been estimated as the probability (cloze probability) that a participant will complete a sentence with a word. In four studies, I show that it is possible to estimate local context-word relatedness based on common language usage. Conditional probabilities were calculated for sentences with published cloze probabilities. Four-word contexts produced conditional probabilities significantly correlated with cloze probabilities, but usage statistics were unavailable for some sentence contexts. The present studies demonstrate that a composite context measure based on conditional probabilities for one- to four-word contexts and the presence of a final period represents all of the sentences and maintains significant correlations (.25, .52, .53) with cloze probabilities. Finally, the article provides evidence for the effectiveness of this measure by showing that local context varies in ways that are similar to the N400 effect and that are consistent with a role for local context in reading. The Supplemental materials include local context measures for three cloze probability data sets.
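The conditional-probability estimate of local context described above can be sketched as follows, using a toy corpus in place of the published usage statistics (corpus and function names are this sketch's own):

```python
# Estimate P(word | last n words of context) from corpus n-gram counts.
# The corpus here is a toy stand-in for large-scale usage statistics.
corpus = ("the dog chased the cat . the cat ran away . "
          "the dog ran home .").split()

def context_word_probability(context, word, n=2):
    """Conditional probability of `word` following the last n words of `context`,
    estimated from the corpus."""
    ctx = tuple(context[-n:])
    ngrams = [tuple(corpus[i:i + n]) for i in range(len(corpus) - n)]
    continuations = [corpus[i + n] for i in range(len(corpus) - n)]
    ctx_count = sum(g == ctx for g in ngrams)
    if ctx_count == 0:
        return 0.0  # context never observed; no usage statistics available
    joint = sum(g == ctx and w == word for g, w in zip(ngrams, continuations))
    return joint / ctx_count

print(context_word_probability(["the", "dog"], "chased"))
```

In the article's composite measure, estimates like this for one- to four-word contexts are combined so that every sentence gets a value even when the longer contexts are unobserved.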
The impacts of problem gambling on concerned significant others accessing web-based counselling.
Dowling, Nicki A; Rodda, Simone N; Lubman, Dan I; Jackson, Alun C
2014-08-01
The 'concerned significant others' (CSOs) of people with problem gambling frequently seek professional support. However, there is surprisingly little research investigating the characteristics or help-seeking behaviour of these CSOs, particularly for web-based counselling. The aims of this study were to describe the characteristics of CSOs accessing the web-based counselling service (real time chat) offered by the Australian national gambling web-based counselling site, explore the most commonly reported CSO impacts using a new brief scale (the Problem Gambling Significant Other Impact Scale: PG-SOIS), and identify the factors associated with different types of CSO impact. The sample comprised all 366 CSOs accessing the service over a 21 month period. The findings revealed that the CSOs were most often the intimate partners of problem gamblers and that they were most often females aged under 30 years. All CSOs displayed a similar profile of impact, with emotional distress (97.5%) and impacts on the relationship (95.9%) reported to be the most commonly endorsed impacts, followed by impacts on social life (92.1%) and finances (91.3%). Impacts on employment (83.6%) and physical health (77.3%) were the least commonly endorsed. There were few significant differences in impacts between family members (children, partners, parents, and siblings), but friends consistently reported the lowest impact scores. Only prior counselling experience and Asian cultural background were consistently associated with higher CSO impacts. The findings can serve to inform the development of web-based interventions specifically designed for the CSOs of problem gamblers.
ERIC Educational Resources Information Center
Edwards, William F.; Shiflett, Ray C.; Shultz, Harris
2008-01-01
The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…
Bowden, Stephen C; Harrison, Elise J; Loring, David W
2014-01-01
Meehl's (1973, Psychodiagnosis: Selected papers. Minneapolis: University of Minnesota Press) distinction between statistical and clinical significance holds special relevance for evidence-based neuropsychological practice. Meehl argued that despite attaining statistical significance, many published findings have limited practical value since they do not inform clinical care. In the context of an ever-expanding clinical research literature, accessible methods to evaluate clinical impact are needed. The method of Critically Appraised Topics (Straus, Richardson, Glasziou, & Haynes, 2011, Evidence-based medicine: How to practice and teach EBM (4th ed.). Edinburgh: Elsevier Churchill-Livingstone) was developed to provide clinicians with a "toolkit" to facilitate implementation of evidence-based practice. We illustrate the Critically Appraised Topics method using a dementia screening example. We argue that the skills practiced through critical appraisal provide clinicians with methods to: (1) evaluate the clinical relevance of new or unfamiliar research findings with a focus on patient benefit, (2) help focus attention on research quality, and (3) incorporate evaluation of clinical impact into educational and professional development activities.
Anusavice, Kenneth J; Jadaan, Osama M; Esquivel-Upshaw, Josephine F
2013-11-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8mm/0.8mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4mm/1.2mm). CARES/Life results support the proposed crown design and load orientation hypotheses. The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represent an optimal approach to optimize fixed dental prosthesis designs produced from dental ceramics and to predict time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. Copyright © 2013 Academy of Dental Materials. All rights reserved.
ERIC Educational Resources Information Center
Trigueros, Maria; Lozano, Maria Dolores; Lage, Ana Elisa
2006-01-01
"Enciclomedia" is a Mexican project for primary school teaching using computers in the classroom. Within this project, and following an enactivist theoretical perspective and methodology, we have designed a computer-based package called "Dados", which, together with teaching guides, is intended to support the teaching and…
Li, Yongfang; Ye, Feng; Wang, Anwei; Wang, Da; Yang, Boyi; Zheng, Quanmei; Sun, Guifan; Gao, Xinghua
2016-01-01
In addition to naturally occurring arsenic, man-made arsenic-based compounds are other sources of arsenic exposure. In 2013, our group identified 12 suspected arsenicosis patients in a household (32 living members). Of them, eight members were diagnosed with skin cancer. Interestingly, all of these patients had lived in the household prior to 1989. An investigation revealed that approximately 2 tons of arsenic-based pesticides had been previously placed near a well that had supplied drinking water to the family from 1973 to 1989. The current arsenic level in the well water was 620 μg/L. No other high arsenic wells were found near the family’s residence. Based on these findings, it is possible to infer that the skin lesions exhibited by these family members were caused by long-term exposure to well water contaminated with arsenic-based pesticides. Additionally, biochemical analysis showed that the individuals exposed to arsenic had higher levels of aspartate aminotransferase and γ-glutamyl transpeptidase than those who were not exposed. These findings might indicate the presence of liver dysfunction in the arsenic-exposed individuals. This report elucidates the effects of arsenical compounds on the occurrence of high levels of arsenic in the environment and emphasizes the severe human health impact of arsenic exposure. PMID:26784217
Ronald E. McRoberts
2010-01-01
Estimates of forest area are among the most common and useful information provided by national forest inventories. The estimates are used for local and national purposes and for reporting to international agreements such as the Montréal Process, the Ministerial Conference on the Protection of Forests in Europe, and the Kyoto Protocol. The estimates are usually based on...
Explosion probability of unexploded ordnance: expert beliefs.
MacDonald, Jacqueline Anne; Small, Mitchell J; Morgan, M G
2008-08-01
This article reports on a study to quantify expert beliefs about the explosion probability of unexploded ordnance (UXO). Some 1,976 sites at closed military bases in the United States are contaminated with UXO and are slated for cleanup, at an estimated cost of $15-140 billion. Because no available technology can guarantee 100% removal of UXO, information about explosion probability is needed to assess the residual risks of civilian reuse of closed military bases and to make decisions about how much to invest in cleanup. This study elicited probability distributions for the chance of UXO explosion from 25 experts in explosive ordnance disposal, all of whom have had field experience in UXO identification and deactivation. The study considered six different scenarios: three different types of UXO handled in two different ways (one involving children and the other involving construction workers). We also asked the experts to rank by sensitivity to explosion 20 different kinds of UXO found at a case study site at Fort Ord, California. We found that the experts do not agree about the probability of UXO explosion, with significant differences among experts in their mean estimates of explosion probabilities and in the amount of uncertainty that they express in their estimates. In three of the six scenarios, the divergence was so great that the average of all the expert probability distributions was statistically indistinguishable from a uniform (0, 1) distribution, suggesting that the sum of expert opinion provides no information at all about the explosion risk. The experts' opinions on the relative sensitivity to explosion of the 20 UXO items also diverged. The average correlation between rankings of any pair of experts was 0.41, which, statistically, is barely significant (p = 0.049) at the 95% confidence level. Thus, one expert's rankings provide little predictive information about another's rankings. The lack of consensus among experts suggests that empirical studies
Dimopoulos, Meletios A; Roussou, Maria; Gavriatopoulou, Maria; Psimenou, Erasmia; Eleutherakis-Papaiakovou, Evangelos; Migkou, Magdalini; Matsouka, Charis; Mparmparousi, Despoina; Gika, Dimitra; Kafantari, Eftychia; Ziogas, Dimitrios; Fotiou, Despoina; Panagiotidis, Ioannis; Terpos, Evangelos; Kastritis, Efstathios
2016-05-01
Renal failure (RF) is a common and severe complication of symptomatic myeloma, associated with significant morbidity and mortality. Such patients are commonly excluded from clinical trials. Bortezomib/dexamethasone (VD)-based regimens are the backbone of the treatment of newly diagnosed MM patients who present with severe RF, even those requiring dialysis. We analyzed the outcomes of 83 consecutive bortezomib-treated patients with severe RF (eGFR < 30 ml/min/1.73 m²), of which 31 (37%) required dialysis. By IMWG renal response criteria, 54 (65%) patients achieved at least MRrenal, including CRrenal in 35% and PRrenal in 12%. Triplet combinations (i.e., VD plus a third agent) versus VD alone were associated with higher rates of renal responses (72 vs. 50%; P = 0.06). Fifteen of the 31 (48%) patients became dialysis independent within a median of 217 days (range 11-724). Triplets were associated with a higher probability of dialysis discontinuation (57 vs. 35%). Serum free light chain (sFLC) level ≥11,550 mg/L was associated with lower rates of major renal response, a longer time to major renal response, and a lower probability of, and longer time to, dialysis discontinuation. Rapid myeloma response (≥PR within the first month) was also associated with higher rates of renal response. Patients who became dialysis-independent had longer survival than those remaining on dialysis. In conclusion, VD-based triplets are associated with a significant probability of renal response and dialysis discontinuation, improving the survival of patients who became dialysis independent. Rapid disease response is important for renal recovery, and sFLCs are predictive of the probability of, and the time required for, renal response. © 2016 Wiley Periodicals, Inc.
Mályusz, Victoria; Schmidt, Maren; Simeoni, Eva; Poetsch, Micaela; Schwark, Thorsten; Oehmichen, Manfred; von Wurmb-Schwark, Nicole
2007-01-01
Autosomal STR typing alone is not always a sufficient tool for resolving deficiency cases (e.g. cases of questioned paternity or half-sibships). Therefore, we investigated whether the additional analysis of RFLP single locus probes can improve the resolution of such complicated kinship cases. We analyzed 207 children and men from 101 families using the AmpFlSTR Identifiler multiplex PCR kit and three RFLP single locus probes. A comparison between each child and all unrelated men resulted in 11,023 man/child pairs. Fewer than three excluding STRs were found in 125 child/unrelated-man pairs (1.13%). Additional analysis of RFLP results reduced the number of ambiguous cases to 35. Half-sibling pairs were simulated using STR results from 20 cases with high paternity probabilities (group 1) and relatively low paternity probabilities (group 2). Using a commercially available computer program, we calculated probabilities for 778 half-sibling pairs. In 35 pairs (4.49%), half-sibling probabilities over 90.0% could be calculated. Additional investigation of RFLP single locus probes did not lead to a more reliable evaluation of these results. The combined investigation of autosomal STRs and RFLP single locus probes can satisfactorily solve deficient paternities but does not contribute to the solution of questioned half-sibships.
Dynamical Simulation of Probabilities
NASA Technical Reports Server (NTRS)
Zak, Michail
1996-01-01
It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without the utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities, and their link to quantum computations, are discussed. Special attention was focused upon coupled stochastic processes, defined in terms of conditional probabilities, for which the joint probability does not exist. Simulations of quantum probabilities are also discussed.
A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection
Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari
2010-01-01
We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections while maintaining a lower missed-detection rate than all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases: the Normal Sinus Rhythm Database (NRSDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
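The reshuffling-based surrogate test described above can be sketched as follows. This is a simplified illustration on a scalar series of beat amplitudes, using a hypothetical averaging-based alternans estimate rather than the authors' multi-lead implementation:

```python
import numpy as np

def alternans_magnitude(beats):
    """Averaging-based estimate: magnitude of the beat-to-beat
    alternating component of a series of beat amplitudes."""
    signs = np.where(np.arange(len(beats)) % 2 == 0, 1.0, -1.0)
    return abs(np.mean(signs * (beats - beats.mean())))

def surrogate_p_value(beats, n_surrogates=2000, seed=0):
    """Fraction of reshuffled (surrogate) beat sequences whose alternans
    magnitude is at least as large as the observed one. Reshuffling
    destroys any genuine even/odd alternation, so the surrogates sample
    the null distribution of noise-induced alternans."""
    rng = np.random.default_rng(seed)
    observed = alternans_magnitude(beats)
    hits = sum(alternans_magnitude(rng.permutation(beats)) >= observed
               for _ in range(n_surrogates))
    return (hits + 1) / (n_surrogates + 1)

rng = np.random.default_rng(1)
alternating = 10 + 0.5 * (-1.0) ** np.arange(64) + 0.1 * rng.standard_normal(64)
noise_only = 10 + 0.1 * rng.standard_normal(64)
print(surrogate_p_value(alternating) < 0.05)  # True: genuine alternans detected
print(surrogate_p_value(noise_only))          # typically non-significant
```

Because the null distribution is built from the data itself, no parametric assumption about the noise is needed, which is the point the abstract emphasizes.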
Linear positivity and virtual probability
NASA Astrophysics Data System (ADS)
Hartle, James B.
2004-08-01
We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics.
Denison, Stephanie; Trikutam, Pallavi; Xu, Fei
2014-08-01
A rich tradition in developmental psychology explores physical reasoning in infancy. However, no research to date has investigated whether infants can reason about physical objects that behave probabilistically, rather than deterministically. Physical events are often quite variable, in that similar-looking objects can be placed in similar contexts with different outcomes. Can infants rapidly acquire probabilistic physical knowledge, such as some leaves fall and some glasses break by simply observing the statistical regularity with which objects behave and apply that knowledge in subsequent reasoning? We taught 11-month-old infants physical constraints on objects and asked them to reason about the probability of different outcomes when objects were drawn from a large distribution. Infants could have reasoned either by using the perceptual similarity between the samples and larger distributions or by applying physical rules to adjust base rates and estimate the probabilities. Infants learned the physical constraints quickly and used them to estimate probabilities, rather than relying on similarity, a version of the representativeness heuristic. These results indicate that infants can rapidly and flexibly acquire physical knowledge about objects following very brief exposure and apply it in subsequent reasoning.
NASA Astrophysics Data System (ADS)
Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Sun, Alexander Y.; Xia, Youlong
2016-11-01
Prediction of drought plays an important role in drought preparedness and mitigation, especially because of large impacts of drought and increasing demand for water resources. An important aspect for improving drought prediction skills is the identification of drought predictability sources. In general, a drought originates from precipitation deficit and thus the antecedent meteorological drought may provide predictive information for other types of drought. In this study, a hydrological drought (represented by Standardized Runoff Index (SRI)) prediction method is proposed based on the meta-Gaussian model taking into account the persistence and its prior meteorological drought condition (represented by Standardized Precipitation Index (SPI)). Considering the inherent nature of standardized drought indices, the meta-Gaussian model arises as a suitable model for constructing the joint distribution of multiple drought indices. Accordingly, the conditional distribution of hydrological drought can be derived analytically, which enables the probabilistic prediction of hydrological drought in the target period and uncertainty quantifications. Based on monthly precipitation and surface runoff of climate divisions of Texas, U.S., 1-month and 2-month lead predictions of hydrological drought are illustrated and compared to the prediction from Ensemble Streamflow Prediction (ESP). Results, based on 10 climate divisions in Texas, show that the proposed meta-Gaussian model provides useful drought prediction information with performance depending on regions and seasons.
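The conditional-prediction step of a meta-Gaussian model can be sketched for the bivariate case: since SPI and SRI are each standard normal by construction, the conditional distribution is available in closed form. The correlation, observed SPI value, and drought threshold below are hypothetical illustration values, not results from the study:

```python
import math

def conditional_sri_given_spi(spi_value, rho):
    """Bivariate meta-Gaussian model: SPI and SRI are each standard
    normal by construction, so SRI | SPI = z is Gaussian with mean
    rho * z and variance 1 - rho**2."""
    return rho * spi_value, 1.0 - rho ** 2

def normal_cdf(x, mean=0.0, sd=1.0):
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

# Hypothetical numbers: correlation rho = 0.8 between SPI now and SRI
# next month, observed SPI = -1.5, hydrological drought taken as SRI < -0.8.
mean, var = conditional_sri_given_spi(-1.5, 0.8)
p_drought = normal_cdf(-0.8, mean, math.sqrt(var))
print(round(mean, 2), round(var, 2), round(p_drought, 2))  # -1.2 0.36 0.75
```

The conditional variance also quantifies the prediction uncertainty, which is the second benefit of the probabilistic formulation noted in the abstract.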
Cluster membership probability: polarimetric approach
NASA Astrophysics Data System (ADS)
Medhi, Biman J.; Tamura, Motohide
2013-04-01
Interstellar polarimetric data of the six open clusters Hogg 15, NGC 6611, NGC 5606, NGC 6231, NGC 5749 and NGC 6250 have been used to estimate the membership probability for the stars within them. For proper-motion member stars, the membership probability estimated using the polarimetric data is in good agreement with the proper-motion cluster membership probability. However, for proper-motion non-member stars, the membership probability estimated by the polarimetric method is in total disagreement with the proper-motion cluster membership probability. The inconsistencies in the determined memberships may be because of the fundamental differences between the two methods of determination: one is based on stellar proper motion in space and the other is based on selective extinction of the stellar output by the asymmetric aligned dust grains present in the interstellar medium. The results and analysis suggest that the scatter of the Stokes vectors q (per cent) and u (per cent) for the proper-motion member stars depends on the interstellar and intracluster differential reddening in the open cluster. It is found that this method could be used to estimate the cluster membership probability if we have additional polarimetric and photometric information for a star to identify it as a probable member/non-member of a particular cluster, such as the maximum wavelength value (λmax), the unit weight error of the fit (σ1), the dispersion in the polarimetric position angles (ε̄), reddening (E(B - V)) or the differential intracluster reddening (ΔE(B - V)). This method could also be used to estimate the membership probability of known member stars having no membership probability as well as to resolve disagreements about membership among different proper-motion surveys.
Lee, Min-Jing; Yang, Kang-Chung; Shyu, Yu-Chiau; Yuan, Shin-Sheng; Yang, Chun-Ju; Lee, Sheng-Yu; Lee, Tung-Liang; Wang, Liang-Jen
2016-01-01
The purpose of this study is to determine the risk of developing depressive disorders by evaluating children with attention-deficit/hyperactivity disorder (ADHD) in comparison to controls that do not have ADHD, as well as to analyze whether the medications used to treat ADHD, methylphenidate (MPH) and atomoxetine (ATX), influence the risk of depression. A group of patients newly diagnosed with ADHD (n=71,080) and age- and gender-matching controls (n=71,080) were chosen from Taiwan's National Health Insurance database during the period of January 2000 to December 2011. Both the patients and controls were monitored through December 31, 2011. We also explore the potential influence of the length of MPH and ATX treatment on developing depressive disorders. The ADHD patients showed a significantly increased probability of developing a depressive disorder when compared to the control group (ADHD: 5.3% vs. 0.7%; aHR, 7.16, 99% CI: 6.28-8.16). Regarding treatment with MPH, a longer MPH use demonstrates significant protective effects against developing a depressive disorder (aOR, 0.91, 99%CI: 0.88-0.94). However, the duration of ATX treatment could not be significantly correlated with the probability of developing a depressive disorder. The database employed in this study lacks of comprehensive clinical information for the patients with ADHD. Potential moderating factors between ADHD and depression were not considered in-depth in this study. The results of this study reveal that youths diagnosed with ADHD have a greater risk of developing depressive disorders. Long-term treatment with MPH correlated to the reduced probability of developing a depressive disorder among youths with ADHD. Copyright © 2015 Elsevier B.V. All rights reserved.
Toda, Shinji; Stein, Ross S.
2002-01-01
The Parkfield-Cholame section of the San Andreas fault, site of an unfulfilled earthquake forecast in 1985, is the best monitored section of the world's most closely watched fault. In 1983, the M = 6.5 Coalinga and M = 6.0 Nuñez events struck 25 km northeast of Parkfield. Seismicity rates climbed for 18 months along the creeping section of the San Andreas north of Parkfield and dropped for 6 years along the locked section to the south. Right-lateral creep also slowed or reversed from Parkfield south. Here we calculate that the Coalinga sequence increased the shear and Coulomb stress on the creeping section, causing the rate of small shocks to rise until the added stress was shed by additional slip. However, the 1983 events decreased the shear and Coulomb stress on the Parkfield segment, causing surface creep and seismicity rates to drop. We use these observations to cast the likelihood of a Parkfield earthquake into an interaction-based probability, which includes both the renewal of stress following the 1966 Parkfield earthquake and the stress transfer from the 1983 Coalinga events. We calculate that the 1983 shocks dropped the 10-year probability of a M ∼ 6 Parkfield earthquake by 22% (from 54 ± 22% to 42 ± 23%) and that the probability did not recover until about 1991, when seismicity and creep resumed. Our analysis may thus explain why the Parkfield earthquake did not strike in the 1980s, but not why it was absent in the 1990s. We calculate a 58 ± 17% probability of a M ∼ 6 Parkfield earthquake during 2001–2011.
Sekhar, Deepa L; Zalewski, Thomas R; Beiler, Jessica S; Czarnecki, Beth; Barr, Ashley L; King, Tonya S; Paul, Ian M
2016-12-01
High frequency hearing loss (HFHL), often related to hazardous noise, affects one in six U.S. adolescents. Yet, only 20 states include school-based hearing screens for adolescents. Only six states test multiple high frequencies. Study objectives were to (1) compare the sensitivity of state school-based hearing screens for adolescents to gold standard sound-treated booth testing and (2) consider the effect of adding multiple high frequencies and two-step screening on sensitivity/specificity. Of 134 eleventh-grade participants (2013-2014), 43 of the 134 (32%) did not pass sound-treated booth testing, and 27 of the 43 (63%) had HFHL. Sensitivity/specificity of the most common protocol (1,000, 2,000, 4,000 Hz at 20 dB HL) for these hearing losses was 25.6% (95% confidence interval [CI] = [13.5, 41.2]) and 85.7% (95% CI [76.8, 92.2]), respectively. A protocol including 500, 1,000, 2,000, 4,000, 6,000 Hz at 20 dB HL significantly improved sensitivity to 76.7% (95% CI [61.4, 88.2]), p < .001. Two-step screening maintained specificity (84.6%, 95% CI [75.5, 91.3]). Adolescent school-based hearing screen sensitivity improves with high frequencies.
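The screening metrics reported above can be reproduced from counts. A minimal sketch, with the raw counts inferred from the reported percentages (11 of the 43 losses flagged by the common protocol, 78 of the 91 normal-hearing students passing it) and a Wilson score interval standing in for whichever interval method the authors used, so the bounds need not match the paper's exactly:

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# Counts inferred from the reported figures for the common protocol.
sens, spec = sensitivity_specificity(tp=11, fn=32, tn=78, fp=13)
print(round(100 * sens, 1))  # 25.6
print(round(100 * spec, 1))  # 85.7
print([round(100 * b, 1) for b in wilson_ci(11, 43)])
```

The same functions applied to the expanded five-frequency protocol's counts would reproduce the improvement in sensitivity the study reports.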
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle.
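The idea of comparing a combined score's AUC (cAUC) to individual-biomarker AUCs can be sketched on synthetic data. A simple sum of markers stands in here for the model-based predictive probability of disease (monotone transforms of a score leave its ROC curve unchanged), and the paper's Bayesian treatment of an imperfect reference standard is not modeled:

```python
import numpy as np

def auc(scores_diseased, scores_healthy):
    """Mann-Whitney estimate of the area under the ROC curve: the
    probability a random diseased score exceeds a random healthy one."""
    d = scores_diseased[:, None]
    h = scores_healthy[None, :]
    return np.mean(d > h) + 0.5 * np.mean(d == h)

rng = np.random.default_rng(42)
n = 500
# Two hypothetical biomarkers, each mildly elevated in disease.
healthy = rng.normal(0.0, 1.0, size=(n, 2))
diseased = rng.normal(0.8, 1.0, size=(n, 2))

auc1 = auc(diseased[:, 0], healthy[:, 0])
auc2 = auc(diseased[:, 1], healthy[:, 1])
# Combined score: sum of the two markers, a stand-in for the
# predictive probability of disease given both biomarker outcomes.
cauc = auc(diseased.sum(axis=1), healthy.sum(axis=1))
print(round(float(auc1), 3), round(float(auc2), 3), round(float(cauc), 3))
```

With two independent markers of equal effect size, the combined score's AUC exceeds either individual AUC, which is the gain the cAUC metric is designed to quantify.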
NASA Astrophysics Data System (ADS)
Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry
2015-11-01
In epidemiological studies as well as in clinical practice, the amount of medical image data produced has increased strongly in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant, requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automated methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intra-subject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level-set segmentation technique, are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
NASA Astrophysics Data System (ADS)
Dong, Sheng; Fan, Dunqiu; Tao, Shanshan
2012-12-01
Return periods calculated for different environmental conditions are key parameters for ocean platform design. Many offshore structure design codes give no consideration to the correlation among multiple loads and therefore over-estimate design values. This frequently leads not only to higher investment but also to distortion of structural reliability analysis. The definitions of design return period in existing codes and industry criteria in China are summarized. Then joint return periods of different ocean environmental parameters are determined from the perspective of service life and acceptable risk. Based on a bivariate equivalent maximum-entropy distribution, joint design parameters are estimated for the concomitant wave height and wind speed at a site in the Bohai Sea. The calculated results show that even if the return period of each environmental factor, such as wave height or wind speed, is small, their combinations can lead to larger joint return periods. Proper design criteria for the joint return period associated with concomitant environmental conditions will reduce structural size and lead to lower investment in ocean platforms for the exploitation of marginal oil fields.
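The link between return period, service life, and risk mentioned above follows the standard encounter-probability formula (assuming independent years); this sketch is a generic illustration, not the bivariate maximum-entropy method of the paper.

```python
def encounter_probability(T, n):
    """Probability of at least one exceedance of the T-year event during
    an n-year service life, assuming independent years: 1 - (1 - 1/T)^n."""
    return 1.0 - (1.0 - 1.0 / T) ** n

def return_period_for_risk(R, n):
    """Design return period needed so the n-year encounter risk equals R."""
    return 1.0 / (1.0 - (1.0 - R) ** (1.0 / n))
```

For example, a 100-year design event still has about a 39.5% chance of being exceeded at least once during a 50-year platform life, which is why the acceptable-risk viewpoint matters when setting joint design criteria.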
Probability and radical behaviorism
Espinosa, James M.
1992-01-01
The concept of probability appears to be very important in the radical behaviorism of Skinner. Yet, it seems that this probability has not been accurately defined and is still ambiguous. I give a strict, relative frequency interpretation of probability and its applicability to the data from the science of behavior as supplied by cumulative records. Two examples of stochastic processes are given that may model the data from cumulative records that result under conditions of continuous reinforcement and extinction, respectively. PMID:22478114
NASA Astrophysics Data System (ADS)
Zhang, Dan; Chen, Peng; Zhang, Qi; Li, Xianghu
2017-10-01
Investigation of concurrent hydrological drought events is helpful for understanding the inherent mechanism of hydrological extremes and designing corresponding adaptation strategy. This study investigates concurrent hydrological drought in the Poyang lake-catchment-river system from 1960 to 2013 based on copula functions. The standard water level index (SWI) and the standard runoff index (SRI) are employed to identify hydrological drought in the lake-catchment-river system. The appropriate marginal distributions and copulas are selected by the corrected Akaike Information Criterion and Bayesian copulas selection method. The probability of hydrological drought in Poyang Lake in any given year is 16.6% (return period of 6 years), and droughts occurred six times from 2003 to 2013. Additionally, the joint probability of concurrent drought events between the lake and catchment is 10.1% (return period of 9.9 years). Since 2003, concurrent drought has intensified in spring due to frequent hydrological drought in the catchment. The joint probability of concurrent drought between the lake and the Yangtze River is 11.5% (return period of 8.7 years). This simultaneous occurrence intensified in spring, summer and autumn from 2003 to 2013 due to the weakened blocking effect of the Yangtze River. Notably, although the lake drought intensified in winter during the past decade, hydrological drought in the catchment and the Yangtze River did not intensify simultaneously. Thus, this winter intensification might be caused by human activities in the lake region. The results of this study demonstrate that the Poyang lake-catchment-river system has been drying since 2003 based on a statistical approach. An adaptation strategy should be urgently established to mitigate the worsening situation in the Poyang lake-catchment-river system.
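The copula-based joint probabilities reported above can be sketched as follows. This is a hedged illustration: the Gumbel copula, the dependence parameter theta, and the catchment marginal probability are assumptions for demonstration, not the distributions actually selected in the study.

```python
from math import exp, log

def gumbel_copula(u, v, theta=2.0):
    """Gumbel copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta));
    theta >= 1 controls the strength of dependence."""
    return exp(-(((-log(u)) ** theta + (-log(v)) ** theta) ** (1.0 / theta)))

def return_period(p):
    """Return period (years) of an annual event occurring with probability p."""
    return 1.0 / p

# Marginal annual drought probabilities (lake value from the abstract;
# the catchment value is a made-up illustration):
p_lake, p_catchment = 0.166, 0.18
# Joint probability that both indices fall below their drought thresholds:
p_joint = gumbel_copula(p_lake, p_catchment)
```

Under positive dependence the joint probability exceeds the independence product p_lake * p_catchment but stays below either marginal, so the joint return period (e.g. the reported 9.9 years for 10.1%) is longer than the single-site ones.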
Neugebauer, Romain; Schmittdiel, Julie A; van der Laan, Mark J
2016-05-01
Consistent estimation of causal effects with inverse probability weighting estimators is known to rely on consistent estimation of propensity scores. To alleviate the bias expected from incorrect model specification for these nuisance parameters in observational studies, data-adaptive estimation and in particular an ensemble learning approach known as Super Learning has been proposed as an alternative to the common practice of estimation based on arbitrary model specification. While the theoretical arguments against the use of the latter haphazard estimation strategy are evident, the extent to which data-adaptive estimation can improve inferences in practice is not. Some practitioners may view bias concerns over arbitrary parametric assumptions as academic considerations that are inconsequential in practice. They may also be wary of data-adaptive estimation of the propensity scores for fear of greatly increasing estimation variability due to extreme weight values. With this report, we aim to contribute to the understanding of the potential practical consequences of the choice of estimation strategy for the propensity scores in real-world comparative effectiveness research. We implement secondary analyses of Electronic Health Record data from a large cohort of type 2 diabetes patients to evaluate the effects of four adaptive treatment intensification strategies for glucose control (dynamic treatment regimens) on subsequent development or progression of urinary albumin excretion. Three Inverse Probability Weighting estimators are implemented using both model-based and data-adaptive estimation strategies for the propensity scores. Their practical performances for proper confounding and selection bias adjustment are compared and evaluated against results from previous randomized experiments. Results suggest both potential reduction in bias and increase in efficiency at the cost of an increase in computing time when using Super Learning to implement Inverse Probability
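The inverse probability weighting idea discussed above can be reduced to a minimal sketch. This is a generic Horvitz-Thompson-style IPW estimator on toy data, not the study's Super Learning implementation or its dynamic-regimen estimators; the numbers are invented.

```python
def ipw_ate(outcomes, treated, propensity):
    """IPW estimate of the average treatment effect E[Y(1)] - E[Y(0)]:
    weight each treated subject by 1/e(x) and each control by 1/(1-e(x))."""
    n = len(outcomes)
    t1 = sum(y * a / e for y, a, e in zip(outcomes, treated, propensity)) / n
    t0 = sum(y * (1 - a) / (1 - e) for y, a, e in zip(outcomes, treated, propensity)) / n
    return t1 - t0

# Toy cohort: outcomes, treatment indicators, and estimated propensity scores.
effect = ipw_ate([1, 2, 3, 4], [1, 0, 1, 1], [0.5, 0.5, 0.8, 0.8])
```

Propensity scores near 0 or 1 produce the extreme weights the abstract warns about, which is why data-adaptive propensity estimation is often paired with weight diagnostics or truncation.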
Mosca, Ettore; Bersanelli, Matteo; Gnocchi, Matteo; Moscatelli, Marco; Castellani, Gastone; Milanesi, Luciano; Mezzelani, Alessandra
2017-01-01
Autism spectrum disorder (ASD) is marked by a strong genetic heterogeneity, which is underlined by the low overlap between ASD risk gene lists proposed in different studies. In this context, molecular networks can be used to analyze the results of several genome-wide studies in order to underline those network regions harboring genetic variations associated with ASD, the so-called “disease modules.” In this work, we used a recent network diffusion-based approach to jointly analyze multiple ASD risk gene lists. We defined genome-scale prioritizations of human genes in relation to ASD genes from multiple studies, found significantly connected gene modules associated with ASD and predicted genes functionally related to ASD risk genes. Most of them play a role in synapsis and neuronal development and function; many are related to syndromes that can be in comorbidity with ASD and the remaining are involved in epigenetics, cell cycle, cell adhesion and cancer. PMID:28993790
Xu, Bo; Zhang, Ying; Zhao, Zongjiang; Yoshida, Yutaka; Magdeldin, Sameh; Fujinaka, Hidehiko; Ismail, Tamer Ahmed; Yaoita, Eishin; Yamamoto, Tadashi
2011-06-10
In the field of bottom-up proteomics, heavy contamination by human keratins can hinder comprehensive protein identification, especially the detection of low-abundance proteins. In this study, we examined keratin contamination in the four major experimental procedures of gel-based proteomic analysis: gel preparation, gel electrophoresis, gel staining, and in-gel digestion. We found that the in-gel digestion procedure might contribute most to keratin contamination compared with the other three. The human keratin contamination was reduced significantly by using an electrostatic eliminator during in-gel digestion, suggesting that static electricity built up on insulated experimental materials might be one of the essential causes of keratin contamination. We herein proposed a series of methods for improving experimental conditions and sample treatment in order to minimize keratin contamination in an economical and practical way. Copyright © 2011 Elsevier B.V. All rights reserved.
Li, Qingliang; Bender, Andreas; Pei, Jianfeng; Lai, Luhua
2007-01-01
A probabilistic support vector machine (SVM) in combination with ECFP_4 (Extended Connectivity Fingerprints) was applied to establish a druglikeness filter for molecules. Here, the World Drug Index (WDI) and the Available Chemical Directory (ACD) were used as surrogates for druglike and nondruglike molecules, respectively. Compared with published methods using the same data sets, the classifier significantly improved the prediction accuracy, especially when using a larger data set of 341,601 compounds, which further pushed the correct classification rate up to 92.73%. On the other hand, the features found by the current method to be most characteristic of drugs and nondrugs were visualized, which might be useful as guiding fragments for de novo drug design and fragment-based drug design.
NASA Astrophysics Data System (ADS)
Jafari, H.; Heidarzadeh, H.; Rostami, A.; Rostami, G.; Dolatyari, M.
2017-01-01
A photoconductive fractal antenna significantly improves the performance of photomixing-based continuous-wave (CW) terahertz (THz) systems. An analysis has been carried out for the generation of CW-THz radiation by the photomixer photoconductive antenna technique. To increase the active area for generation, and hence the THz radiation power, we used interdigitated electrodes coupled with a fractal tree antenna. In this paper, both the semiconductor and the electromagnetic problems are considered. Here, photomixer devices with Thue-Morse fractal tree antennas in two configurations (narrow and wide) are discussed. This new approach gives better performance, especially in increasing the THz output power of photomixer devices, compared with conventional structures. In addition, applying the interdigitated electrodes improved the THz photocurrent considerably. The device produces THz radiation power several times higher than photomixers with a simple gap.
Probability of satellite collision
NASA Technical Reports Server (NTRS)
Mccarter, J. W.
1972-01-01
A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.
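The flavor of such a calculation can be sketched with the standard kinetic-theory approximation for orbital collision risk. This is a hedged, generic illustration — not the parametric method of the report — and the flux and cross-section values below are made up.

```python
from math import exp

def collision_probability(flux, area, years):
    """Kinetic-theory approximation P = 1 - exp(-flux * area * t), with
    flux in objects per km^2 per year and area the collision cross-section
    in km^2 (satellite plus miss-distance allowance)."""
    return 1.0 - exp(-flux * area * years)

# Hypothetical numbers: debris flux of 1e-6 objects/km^2/yr through the
# station's altitude band, 0.01 km^2 cross-section, 10-year mission.
p = collision_probability(1e-6, 0.01, 10)
```

For small arguments the probability is essentially flux × area × duration, which is why collision probability scales nearly linearly with both miss distance (through the cross-section) and mission duration, as the parametric study varies.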
NASA Astrophysics Data System (ADS)
Laktineh, Imad
2010-04-01
This course constitutes a brief introduction to probability applications in high energy physics. First, the mathematical tools related to the different probability concepts are introduced. The probability distributions which are commonly used in high energy physics and their characteristics are then shown and commented on. The central limit theorem and its consequences are analysed. Finally, some numerical methods used to produce different kinds of probability distributions are presented. The full article (17 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.
A network-based method to assess the statistical significance of mild co-regulation effects.
Horvát, Emőke-Ágnes; Zhang, Jitao David; Uhlmann, Stefan; Sahin, Özgür; Zweig, Katharina Anna
2013-01-01
Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis.
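The null model behind co-occurrence significance can be illustrated with a closed-form hypergeometric tail. Note this is a simplified stand-in: SICORE itself assesses significance against randomized bipartite networks, not this single-pair formula, and the example numbers are invented.

```python
from math import comb

def cooccurrence_pvalue(k, a, b, N):
    """P(X >= k) for X ~ Hypergeometric: the chance that two nodes with a
    and b neighbours among N possible partners share at least k common
    neighbours by luck alone, if neighbour sets were drawn at random."""
    total = comb(N, b)
    return sum(comb(a, i) * comb(N - a, b - i)
               for i in range(k, min(a, b) + 1)) / total

# Two miRNAs targeting 3 proteins each out of 20, sharing all 3 targets:
p = cooccurrence_pvalue(3, 3, 3, 20)
```

A small p-value (here below 0.001) flags the pair as co-occurring more than chance predicts, the same qualitative question SICORE answers with network randomization.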
NASA Astrophysics Data System (ADS)
Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène
2014-05-01
-SISCOR Working Group. On the basis of this consensual logic tree, median probabilities of occurrence of M>=6 events were computed for the region of study. Time-dependent models (Brownian Passage Time and Weibull probability distributions) were also explored. The probability of a M>=6.0 event is found to be greater in the western region compared to the eastern part of the Corinth rift, whether a fault-based or a classical seismotectonic approach is used. Percentile probability estimates are also provided to represent the range of uncertainties in the results. The percentile results show that, in general, probability estimates following the classical approach (based on the definition of seismotectonic source zones) cover the median values estimated following the fault-based approach. In contrast, the fault-based approach in this region is still affected by a high degree of uncertainty because of the poor constraints on the 3D geometries of the faults and the high uncertainties in their slip rates.
Visualization of the significance of Receiver Operating Characteristics based on confidence ellipses
NASA Astrophysics Data System (ADS)
Sarlis, Nicholas V.; Christopoulos, Stavros-Richard G.
2014-03-01
The Receiver Operating Characteristics (ROC) is used for the evaluation of prediction methods in various disciplines like meteorology, geophysics, complex system physics, medicine etc. The estimation of the significance of a binary prediction method, however, remains a cumbersome task and is usually done by repeating the calculations by Monte Carlo. The FORTRAN code provided here simplifies this problem by evaluating the significance of binary predictions for a family of ellipses which are based on confidence ellipses and cover the whole ROC space. Catalogue identifier: AERY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERY_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 11511 No. of bytes in distributed program, including test data, etc.: 72906 Distribution format: tar.gz Programming language: FORTRAN. Computer: Any computer supporting a GNU FORTRAN compiler. Operating system: Linux, MacOS, Windows. RAM: 1Mbyte Classification: 4.13, 9, 14. Nature of problem: The Receiver Operating Characteristics (ROC) is used for the evaluation of prediction methods in various disciplines like meteorology, geophysics, complex system physics, medicine etc. The estimation of the significance of a binary prediction method, however, remains a cumbersome task and is usually done by repeating the calculations by Monte Carlo. The FORTRAN code provided here simplifies this problem by evaluating the significance of binary predictions for a family of ellipses which are based on confidence ellipses and cover the whole ROC space. Solution method: Using the statistics of random binary predictions for a given value of the predictor threshold ɛt, one can construct the corresponding confidence ellipses. The envelope of these corresponding confidence ellipses is estimated when
Palmisano, Stephen; Allison, Robert S.; Schira, Mark M.; Barry, Robert J.
2015-01-01
This paper discusses four major challenges facing modern vection research. Challenge 1 (Defining Vection) outlines the different ways that vection has been defined in the literature and discusses their theoretical and experimental ramifications. The term vection is most often used to refer to visual illusions of self-motion induced in stationary observers (by moving, or simulating the motion of, the surrounding environment). However, vection is increasingly being used to also refer to non-visual illusions of self-motion, visually mediated self-motion perceptions, and even general subjective experiences (i.e., “feelings”) of self-motion. The common thread in all of these definitions is the conscious subjective experience of self-motion. Thus, Challenge 2 (Significance of Vection) tackles the crucial issue of whether such conscious experiences actually serve functional roles during self-motion (e.g., in terms of controlling or guiding the self-motion). After more than 100 years of vection research there has been surprisingly little investigation into its functional significance. Challenge 3 (Vection Measures) discusses the difficulties with existing subjective self-report measures of vection (particularly in the context of contemporary research), and proposes several more objective measures of vection based on recent empirical findings. Finally, Challenge 4 (Neural Basis) reviews the recent neuroimaging literature examining the neural basis of vection and discusses the hurdles still facing these investigations. PMID:25774143
Mass spectrometry-based protein identification with accurate statistical significance assignment.
Alves, Gelio; Yu, Yi-Kuo
2015-03-01
Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul
2006-09-01
Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction to normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2-week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) as the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; the volume of phantom lung receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to tissue surrounding the tumor can in theory be reduced by PDF-based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error and the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.
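Building a position PDF from a motion trace and scoring reproducibility between two PDFs can be sketched simply. This is an assumed illustration: the histogram binning and the histogram-intersection overlap measure are generic choices, not necessarily the reproducibility metric used in the study.

```python
def pdf_from_trace(positions, bins, lo, hi):
    """Histogram a 1D tumour-position trace into a normalised PDF."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in positions:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    n = len(positions)
    return [c / n for c in counts]

def overlap(p, q):
    """Reproducibility score as histogram intersection (sum of bin minima);
    1.0 for identical PDFs, 0.0 for disjoint ones."""
    return sum(min(a, b) for a, b in zip(p, q))

p = pdf_from_trace([0.1, 0.2, 0.9], bins=4, lo=0.0, hi=1.0)
q = pdf_from_trace([0.4, 0.6], bins=4, lo=0.0, hi=1.0)
```

A long acquisition that samples many breathing cycles yields a PDF with a high overlap against a later scan (analogous to the 94.8% case), whereas a single-cycle trace yields a low one (the 78% case).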
Deleré, Yvonne; Remschmidt, Cornelius; Leuschner, Josefine; Schuster, Melanie; Fesenfeld, Michaela; Schneider, Achim; Wichmann, Ole; Kaufmann, Andreas M
2014-02-19
Estimates of Human Papillomavirus (HPV) prevalence in a population prior to and after HPV vaccine introduction are essential to evaluate the short-term impact of vaccination. Between 2010 and 2012 we conducted a population-based cross-sectional study in Germany to determine HPV prevalence, genotype distribution and risk factors for HPV-infection in women aged 20-25 years. Women were recruited by a two-step cluster sampling approach. A home-based self-collection of cervicovaginal lavages was used. Specimens were analysed using a general primer GP5+/GP6+-based polymerase chain reaction and genotyped for 18 high-risk and 6 low-risk HPV- strains by Luminex-based multiplexed genotyping. Among 787 included women, 512 were not vaccinated against HPV. In the non-vaccinated population, HPV prevalence of any type was 38.1%, with HPV 16 (19.5%) being the most prevalent genotype. Prevalence of any high-risk type was 34.4%, and in 17.4% of all women, more than one genotype was identified. A higher number of lifetime sexual partners and low educational status were independently associated with HPV-infection. In 223 vaccinated women, prevalence of HPV 16/18 was significantly lower compared to non-vaccinated women (13.9% vs. 22.5%, p = 0.007). When stratifying by age groups, this difference was only significant in women aged 20-21 years, who at time of vaccination were on average younger and had less previous sexual contacts than women aged 22-25 years. We demonstrate a high prevalence of high-risk HPV genotypes in non-vaccinated women living in Germany that can be potentially prevented by vaccination. Probable first vaccination effects on the HPV prevalence were observed in women who were vaccinated at younger age. This finding reinforces the recommendation to vaccinate girls in early adolescence.
Mario, John R
2010-04-15
A probability-based analytical sampling approach for seized containers of cocaine, Cannabis, or heroin, to answer questions of both content weight and identity, is described. It utilizes the Student's t distribution and, because of the lack of normality in the studied populations, the power of the Central Limit Theorem with samples of size 20 to calculate the mean net weights of multiple-item drug seizures. Populations studied ranged between 50 and 1200 units. Identity determination is based on chemical testing and sampling using the hypergeometric distribution, fit to a program macro created by the European Network of Forensic Science Institutes (ENFSI) Drugs Working Group. Formal random item selection is effected through use of an Excel-generated list of random numbers. Included, because of their impact on actual practice, are discussions of admissibility, sufficiency of proof, method validation, and harmony with the guidelines of international standardizing bodies.
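The hypergeometric identity argument can be sketched as follows. This is a hedged reconstruction of the general idea, not the ENFSI macro itself: given a seizure of N units from which n are sampled without replacement and all test positive, it computes the confidence that at least a proportion p of the N units contain the drug.

```python
from math import ceil, comb

def confidence_all_positive(N, n, p):
    """Confidence that at least ceil(p*N) of N units are positive, given
    that all n units sampled without replacement tested positive. The
    worst case is a population with exactly ceil(p*N) - 1 positives:
    the chance of then drawing n positives is C(k-1, n) / C(N, n)."""
    k_min = ceil(p * N)
    if k_min - 1 < n:
        return 1.0      # too few positives to even draw n all-positive
    return 1.0 - comb(k_min - 1, n) / comb(N, n)
```

For instance, sampling 20 units from a 100-unit seizure and finding all positive supports the claim "at least 90% contain the drug" with better than 90% confidence, while a sample of 5 does not.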
ERIC Educational Resources Information Center
Barnes, Bernis, Ed.; And Others
This teacher's guide to probability and statistics contains three major sections. The first section on elementary combinatorial principles includes activities, student problems, and suggested teaching procedures for the multiplication principle, permutations, and combinations. Section two develops an intuitive approach to probability through…
Teachers' Understandings of Probability
ERIC Educational Resources Information Center
Liu, Yan; Thompson, Patrick
2007-01-01
Probability is an important idea with a remarkably wide range of applications. However, psychological and instructional studies conducted in the last two decades have consistently documented poor understanding of probability among different populations across different settings. The purpose of this study is to develop a theoretical framework for…
Significance of Bias Correction in Drought Frequency and Scenario Analysis Based on Climate Models
NASA Astrophysics Data System (ADS)
Aryal, Y.; Zhu, J.
2015-12-01
Assessment of future drought characteristics is difficult, as climate models usually have biases in simulating precipitation frequency and intensity. To overcome this limitation, output from climate models needs to be bias corrected based on the specific purpose of application. In this study, we examine the significance of bias correction in the context of drought frequency and scenario analysis using output from climate models. In particular, we investigate the performance of three widely used bias correction techniques: (1) monthly bias correction (MBC), (2) nested bias correction (NBC), and (3) equidistant quantile mapping (EQM). The effect of bias correction on future scenarios of drought frequency is also analyzed. The characteristics of drought are investigated in terms of frequency and severity in nine representative locations in different climatic regions across the United States using regional climate model (RCM) output from the North American Regional Climate Change Assessment Program (NARCCAP). The Standardized Precipitation Index (SPI) is used as the means to compare and forecast drought characteristics at different timescales. Systematic biases in the RCM precipitation output are corrected against the National Centers for Environmental Prediction (NCEP) North American Regional Reanalysis (NARR) data. The results demonstrate that bias correction significantly decreases the RCM errors in reproducing drought frequency derived from the NARR data. Preserving the mean and standard deviation is essential for climate models in drought frequency analysis. RCM biases have both regional and timescale dependence. Different timescales of input precipitation in the bias correction show similar results. Drought frequency obtained from the RCM future (2040-2070) scenarios is compared with that from the historical simulations. The changes in drought characteristics occur in all climatic regions. The relative changes in drought frequency in future scenario in relation to
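The core of quantile-mapping bias correction can be sketched with an empirical version. This is a simplified stand-in for the MBC/NBC/EQM schemes compared in the study (EQM additionally adjusts for changes between historical and future model distributions); the toy data with a constant +2 wet bias are invented.

```python
def quantile_map(x, model_hist, obs_hist):
    """Empirical quantile mapping: replace a model value with the observed
    value occupying the same empirical rank in the historical climate."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    k = sum(1 for v in m if v <= x)          # rank of x in the model climate
    idx = max(min(k - 1, len(o) - 1), 0)
    return o[idx]

model = [i + 2.0 for i in range(10)]         # model climate with a +2 bias
obs = [float(i) for i in range(10)]          # observed (reanalysis) climate
corrected = quantile_map(5.0, model, obs)    # removes the +2 bias -> 3.0
```

Because the mapping matches whole distributions rather than just means, it preserves the mean and standard deviation of the target climate, which the abstract identifies as essential for drought frequency analysis.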
Probability distributions for magnetotellurics
Stodt, John A.
1982-11-01
Estimates of the magnetotelluric transfer functions can be viewed as ratios of two complex random variables. It is assumed that the numerator and denominator are governed approximately by a joint complex normal distribution. Under this assumption, probability distributions are obtained for the magnitude, squared magnitude, logarithm of the squared magnitude, and the phase of the estimates. Normal approximations to the distributions are obtained by calculating mean values and variances from error propagation, and the distributions are plotted with their normal approximations for different percentage errors in the numerator and denominator of the estimates, ranging from 10% to 75%. The distribution of the phase is approximated well by a normal distribution for the range of errors considered, while the distribution of the logarithm of the squared magnitude is approximated by a normal distribution for a much larger range of errors than is the distribution of the squared magnitude. The distribution of the squared magnitude is most sensitive to the presence of noise in the denominator of the estimate, in which case the true distribution deviates significantly from normal behavior as the percentage errors exceed 10%. In contrast, the normal approximation to the distribution of the logarithm of the magnitude is useful for errors as large as 75%.
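The near-normality of the log squared magnitude at moderate error levels can be checked with a quick Monte Carlo sketch of a ratio of two noisy complex quantities. The mean values and 10% error level below are hypothetical stand-ins, chosen only to illustrate the regime the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(7)

def complex_normal(mean, rel_err, n, rng):
    """Circular complex normal samples: independent real/imaginary parts,
    scaled so the total standard deviation is rel_err * |mean|."""
    sigma = rel_err * abs(mean) / np.sqrt(2.0)
    return mean + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Transfer-function estimate as a ratio of two noisy complex quantities,
# here with 10% errors in both numerator and denominator
n = 200_000
z = complex_normal(1.0 + 0.5j, 0.10, n, rng) / complex_normal(2.0 - 1.0j, 0.10, n, rng)

# At 10% errors the log of the squared magnitude is close to normal:
# roughly 68% of samples fall within one standard deviation of the mean
log_sq = np.log(np.abs(z) ** 2)
frac = np.mean(np.abs(log_sq - log_sq.mean()) < log_sq.std())
```

Repeating the experiment with larger relative errors in the denominator shows the squared magnitude itself departing from normality much sooner than its logarithm, consistent with the abstract's conclusion.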
Song, Dezhen; Xu, Yiliang
2010-09-01
We report a new filter to assist the search for rare bird species. Since a rare bird appears in front of a camera only with very low occurrence (e.g., fewer than ten times per year) and for very short durations (e.g., less than a fraction of a second), our algorithm must have a very low false negative rate. We verify the bird body-axis information against known bird flight dynamics from the short video segment. Since a regular extended Kalman filter (EKF) cannot converge due to high measurement error and limited data, we develop a novel probable observation data set (PODS)-based EKF method. The new PODS-EKF searches the measurement-error range for all probable observation data that ensure the convergence of the corresponding EKF within a short time frame. The algorithm has been extensively tested using both simulated inputs and real video data of four representative bird species. In the physical experiments, the algorithm was tested on rock pigeons and red-tailed hawks with 119 motion sequences; the area under the ROC curve is 95.0%. During the one-year search for ivory-billed woodpeckers, the system reduced the raw video data from 29.41 TB to only 146.7 MB (a reduction rate of 99.9995%).
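The PODS idea above can be sketched in one dimension: perturb the observations within the stated measurement-error range, run a short filter on each candidate set, and keep only the sets for which the filter settles. The constant-velocity model, the constant per-set perturbation, and the final-innovation convergence test are all simplifying assumptions for illustration; the published PODS-EKF operates on full bird-motion dynamics.

```python
import numpy as np

def kf_converges(zs, q=1e-3, r=1.0, tol=2.0):
    """Run a minimal 1-D constant-velocity Kalman filter over a short
    observation sequence; report whether the final normalised innovation
    is within `tol` standard deviations (a hypothetical convergence test
    standing in for the PODS criterion)."""
    x = np.array([zs[0], 0.0])              # state: position, velocity
    P = np.eye(2)
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # unit time step
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    nis = 0.0
    for z in zs[1:]:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        s = (P[0, 0]) + r                    # innovation variance
        y = z - x[0]                         # innovation
        k = (P @ H.T / s).ravel()            # Kalman gain
        x = x + k * y                        # update
        P = (np.eye(2) - np.outer(k, H)) @ P
        nis = abs(y) / np.sqrt(s)
    return nis < tol

def pods_filter(zs, err, deltas):
    """Hypothetical PODS-style search: keep candidate perturbations that
    stay inside the measurement-error range and yield a converging filter."""
    return [d for d in deltas if abs(d) <= err and kf_converges(zs + d)]
```

On a smooth track the unperturbed observations pass the test, while candidates outside the error range are rejected before the filter is even run.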
Possa, Gabriela; de Castro, Michelle Alessandra; Marchioni, Dirce Maria Lobo; Fisberg, Regina Mara; Fisberg, Mauro
2015-08-01
The aim of this population-based cross-sectional health survey (N = 532) was to investigate the factors associated with the probability and amounts of yogurt intake in Brazilian adults and the elderly. A structured questionnaire was used to obtain data on demographics, socioeconomic information, presence of morbidities and lifestyle and anthropometric characteristics. Food intake was evaluated using two nonconsecutive 24-hour dietary recalls and a Food Frequency Questionnaire. Approximately 60% of the subjects were classified as yogurt consumers. In the logistic regression model, yogurt intake was associated with smoking (odds ratio [OR], 1.98), female sex (OR, 2.12), and age 20 to 39 years (OR, 3.11). Per capita family income and being a nonsmoker were factors positively associated with the amount of yogurt consumption (coefficients, 0.61 and 3.73, respectively), whereas the level of education of the head of household was inversely associated (coefficient, 0.61). In this study, probability and amounts of yogurt intake are differently affected by demographic, socioeconomic, and lifestyle factors in adults and the elderly.
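The odds ratios reported above come from the standard logistic-regression identity OR = exp(beta). A minimal sketch of the conversion, using one of the published ORs (the coefficient itself is back-derived here for illustration and was not reported in this form):

```python
import math

def odds_ratio(beta):
    """Odds ratio implied by a logistic-regression coefficient:
    OR = exp(beta), i.e. the multiplicative change in the odds of being
    a yogurt consumer per unit change in the predictor."""
    return math.exp(beta)

# The reported OR of 3.11 for age 20-39 corresponds to a log-odds
# coefficient of ln(3.11) ~= 1.13; exponentiating recovers the OR.
beta_age = math.log(3.11)
```

The same identity explains why the paper reports ORs for the binary "consumer vs. non-consumer" outcome but raw coefficients for the continuous amount-consumed model, where effects are additive rather than multiplicative.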
Robson, Barry; Boray, Srinidhi
2015-11-01
We extend Q-UEL, our universal exchange language for interoperability and inference in healthcare and biomedicine, to the more traditional fields of public health surveys. These are the types associated with screening, epidemiological and cross-sectional studies, and cohort studies, in some cases similar to clinical trials. A challenge is that notions of probability are split to some degree between (a) classical frequentist measures, based only on the idea of counting and proportion and on the classical biostatistics used in the above conservative disciplines, and (b) more subjectivist notions of uncertainty, belief, reliability, or confidence, often used in automated inference and decision support systems. Samples in the above kind of public health survey are typically small compared with our earlier "Big Data" mining efforts. An issue addressed here is how much imp